Category Archives: Online Security Blog

The latest news and insights from Google on security and safety on the Internet

Research Grants to support Google VRP Bug Hunters during COVID-19



In 2015, we launched our Vulnerability Research Grant program, which allows us to recognize the time and efforts of security researchers, including situations where they don't find any vulnerabilities. To support our community of security researchers and to help protect our users around the world during COVID-19, we are announcing a temporary expansion of our Vulnerability Research Grant efforts.

In light of new challenges caused by the coronavirus outbreak, we are expanding this initiative by creating a COVID-19 grant fund. As of today, every Google VRP Bug Hunter who submitted at least two remunerated reports from 2018 through April 2020 will be eligible for a $1,337 research grant. We are dedicating these grants to support our researchers during this time. We are committed to protecting our users and we want to encourage the research community to help us identify threats and to prevent potential vulnerabilities in our products.

We understand the individual challenges COVID-19 has placed on the research community are different for everyone, and we hope that these grants will allow us to support our Bug Hunters during these uncertain times. Even though our grants are intended to recognize the efforts of our frequent researchers regardless of their results, as always, bugs found during the grant are eligible for regular rewards per the Vulnerability Reward Program (VRP) rules. We are aware that some of our partners might not be interested in monetary grants. In such cases, we will offer the option to donate the grant to an established COVID-19 related charity and, at our discretion, will match these charitable donations.

For those of you who recently joined us or are planning to start, it’s never too late. We are committed to continuing the Vulnerability Research Grant program throughout 2020, so stay tuned for future announcements and follow us on @GoogleVRP!

Introducing our new book “Building Secure and Reliable Systems”



For years, I’ve wished that someone would write a book like this. Since their publication, I’ve often admired and recommended the Google Site Reliability Engineering (SRE) books—so I was thrilled to find that a book focused on security and reliability was already underway when I arrived at Google. Ever since I began working in the tech industry, across organizations of varying sizes, I’ve seen people struggling with the question of how security should be organized: Should it be centralized or federated? Independent or embedded? Operational or consultative? Technical or governing? The list goes on and on.

Both SRE and security have strong dependencies on classic software engineering teams. Yet both differ from classic software engineering teams in fundamental ways:
  • Site Reliability Engineers (SREs) and security engineers tend to break and fix, as well as build.
  • Their work encompasses operations, in addition to development.
  • SREs and security engineers are specialists, rather than classic software engineers.
  • They are often viewed as roadblocks, rather than enablers.
  • They are frequently siloed, rather than integrated in product teams.
For many years, my colleagues and I have argued that security should be a first-class and embedded quality of software. I believe that embracing an SRE-inspired approach is a logical step in that direction. As my understanding of the intersection between security and SRE has deepened, I’ve become even more certain that it’s important to more thoroughly integrate security practices into the full lifecycle of software and data services. The nature of the modern hybrid cloud—much of which is based on open source software frameworks that offer interconnected data and microservices—makes tightly integrated security and resilience capabilities even more important.

At the same time, enterprises are at a critical point where cloud computing, various forms of machine learning, and a complicated cybersecurity landscape are together determining where an increasingly digital world is going, how quickly it will get there, and what risks are involved.

The operational and organizational approaches to security in large enterprises have varied dramatically over the past 20 years. The most prominent instantiations include fully centralized chief information security officers and core infrastructure operations that encompass firewalls, directory services, proxies, and much more—teams that have grown to hundreds or thousands of employees. On the other end of the spectrum, federated business information security teams have either the line of business or technical expertise required to support or govern a named list of functions or business operations. Somewhere in the middle, committees, metrics, and regulatory requirements might govern security policies, and embedded Security Champions might either play a relationship management role or track issues for a named organizational unit. Recently, I’ve seen teams riffing on the SRE model by evolving the embedded role into something like a site security engineer, or into a specific Agile scrum role for specialist security teams.

For good reasons, enterprise security teams have largely focused on confidentiality. However, organizations often recognize data integrity and availability to be equally important, and address these areas with different teams and different controls. The SRE function is a best-in-class approach to reliability. However, it also plays a role in the real-time detection of and response to technical issues—including security-related attacks on privileged access or sensitive data. Ultimately, while engineering teams are often organizationally separated according to specialized skill sets, they have a common goal: ensuring the quality and safety of the system or application.

In a world that is becoming more dependent upon technology every year, a book about approaches to security and reliability drawn from experiences at Google and across the industry is an important contribution to the evolution of software development, systems management, and data protection. As the threat landscape evolves, a dynamic and integrated approach to defense is now a basic necessity. In my previous roles, I looked for a more formal exploration of these questions; I hope that a variety of teams inside and outside of security organizations find this discussion useful as approaches and tools evolve. This project has reinforced my belief that the topics it covers are worth discussing and promoting in the industry—particularly as more organizations adopt DevOps, DevSecOps, SRE, and hybrid cloud architectures along with their associated operating models. At a minimum, this book is another step in the evolution and enhancement of system and data security in an increasingly digital world.

The new book can be downloaded for free from the Google SRE website, or purchased as a physical copy from your preferred retailer.

Announcing our first GCP VRP Prize winner and updates to 2020 program




Last year, we announced a yearly Google Cloud Platform (GCP) VRP Prize to promote security research of GCP. Since then, we’ve received many interesting entries as part of this new initiative from the security research community. Today, we are announcing the winner as well as several updates to our program for 2020.

After careful evaluation of all the submissions, we are excited to announce our winner of the 2019 GCP VRP prize: Wouter ter Maat, who submitted a write-up about Google Cloud Shell vulnerabilities. You can read his winning write-up here.

There were several other excellent reports submitted to our GCP VRP in 2019. To learn more about them, watch this video by LiveOverflow, which explains some of the top submissions in detail.

To encourage more security researchers to look for vulnerabilities in GCP and to better reward our top bug hunters, we're tripling the total amount of the GCP VRP Prize this year. We will pay out a total of $313,337 for the top vulnerability reports in GCP products submitted in 2020. The following prize amounts will be distributed between the top 6 submissions:
  • 1st prize: $133,337
  • 2nd prize: $73,331
  • 3rd prize: $73,331
  • 4th prize: $31,337
  • 5th prize: $1,001
  • 6th prize: $1,000
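A quick arithmetic check confirms the individual prizes add up to the advertised $313,337 total:

```python
# Prize amounts for the top 6 submissions, in USD.
prizes = [133_337, 73_331, 73_331, 31_337, 1_001, 1_000]
print(sum(prizes))  # 313337
```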

Like last year, submissions should have public write-ups in order to be eligible for the prize. The number of vulnerability reports in a single write-up is not a factor. You can even make multiple submissions, one for each write-up. These prizes are only for vulnerabilities found in GCP products. If you have budget constraints regarding access to testing environments, you can use the free tier of GCP. Note that this prize is not a replacement for our Vulnerability Reward Program (VRP), and that we will continue to pay security researchers under the VRP for disclosing security issues that affect Google services, including GCP. Complete details, terms and conditions about the prize can be found here.


Thank you to everyone who submitted entries in 2019! Make sure to nominate your VRP reports and write-ups for the 2020 GCP VRP prize here before December 31, 2020 at 11:59 GMT.

How Google Play Protect kept users safe in 2019


Through 2019, Google Play Protect continued to improve the security for 2.5 billion Android devices. Built into Android, Play Protect scans over 100 billion apps every day for malware and other harmful apps. This past year, Play Protect prevented over 1.9 billion malware installs from unknown sources. Throughout 2019 there were many improvements made to Play Protect to bring the best of Google to Android devices to keep users safe. Some of the new features launched in 2019 include:
Advanced similarity detection
Play Protect now warns you about variations of known malware right on the device. On-device protections warn users about Potentially Harmful Apps (PHAs) at install time for a faster response. Since October 2019, Play Protect has issued 380,000 warnings for install attempts using this system.
Warnings for apps targeting lower Android versions
Malware developers intentionally target devices running long outdated versions of Android to abuse exploits that have recently been patched. In 2018, Google Play started requiring new apps and app updates to be built for new versions of the Android OS. This strategy ensures that users downloading apps from Google Play receive apps that take advantage of the latest privacy and security improvements in the OS.
In 2019, we improved on this strategy with warnings to the user. Play Protect now notifies users when they install an app that is designed for outdated versions. The user can then make an informed decision to proceed with the installation or stop the app from being installed so they can look for an alternative that targets the most current version of Android.
Uploading rare apps for scanning
The Android app ecosystem is growing at an exponential rate. Millions of new app versions are created and shared outside of Google Play daily, posing a unique scaling challenge. Knowledge of new and rare apps is essential to provide the best protection possible.
We added a new feature that lets users help the fight against malware by sending apps Play Protect hasn't seen before for scanning during installation. The upload to Google’s scanning services preserves the privacy of the user and enables Play Protect to improve the protection for all users.
Integration with Google’s Files app
Google’s Files app is used by hundreds of millions of people every month to manage the storage on their device, share files safely, and clean up clutter and duplicate files. This year, we integrated Google Play Protect notifications within the app so that users are prompted to scan and remove any harmful applications that may be installed.
Play Protect visual updates
The Google Play Store has over 2 billion monthly active users coming to safely find the right app, game, and other digital content. This year the team was excited to roll out a complete visual redesign. With this change, Play Protect made several user-facing updates to deliver a cleaner, more prominent experience including a reminder to enable app-scanning in My apps & games to improve security.
The mobile threat landscape is always changing, so Google Play Protect must keep adapting and improving to protect our users. Visit developers.google.com/android/play-protect to stay informed on all the exciting new features and improvements being added to Google Play Protect.
Acknowledgements: Aaron Josephs, Ben Gruver, James Kelly, Rodrigo Farell, Wei Jin and William Luh

How Google does certificate lifecycle management


Over the last few years, we’ve seen the use of Transport Layer Security (TLS) on the web increase to more than 96% of all traffic seen by a Chrome browser on Chrome OS. That’s an increase of over 35% in just four years, as reported in our Google Transparency Report. Whether you’re a web developer, a business, or a netizen, this is a collective achievement that’s making the Internet a safer place for everyone.

Percentage of pages loaded over HTTPS in Chrome by platform (Google Transparency Report)

The way TLS is deployed has also changed. The maximum certificate validity for public certificates has gone from 5 years to 2 years (CA/Browser Forum), and that will drop to 1 year in the near future. To reduce the number of outages caused by manual certificate enrollments, the Internet Engineering Task Force (IETF) has standardized Automatic Certificate Management Environment (ACME). ACME enables Certificate Authorities (CAs) to offer TLS certificates for the public web in an automated and interoperable way. 

As we round off this exciting tour of recent TLS history, we’d be remiss if we didn’t mention Let’s Encrypt - the first publicly trusted non-profit CA. Their focus on automation and TLS by default has been foundational to this massive increase in TLS usage. In fact, Let’s Encrypt just issued their billionth (!) certificate. Google has been an active supporter of Let’s Encrypt because we believe the work they do to make TLS accessible is important for the security and resilience of the Internet's infrastructure. Keep rocking, Let’s Encrypt!

Simplifying certificate lifecycle management for Google’s users

These are important strides we are making collectively in the security community. At the same time, these efforts mean we are moving to shorter-lived keys to improve security, which in turn requires more frequent certificate renewals. Further, infrastructure deployments are getting more heterogeneous. Web traffic is served from multiple datacenters, often from different providers. This makes it hard to manually keep tabs on which certificates need renewing and to ensure new certificates are deployed correctly. So what is the way forward?

With the adoption numbers cited above, it’s clear that TLS, Web PKI, and certificate lifecycle management are foundational to every product we and our customers build and deploy. This is why we have been expending significant effort to enable TLS by default for our products and services, while also automating certificate renewals to make certificate lifecycle management more reliable, globally scalable, and trustworthy for our customers. Our goal is simple: We want to ensure TLS just works out of the box regardless of which Google service you use.

In support of that goal, we have enabled automatic management of TLS certificates for Google services using an internal-only ACME service, Google Trust Services. This applies to our own products and services, as well as for our customers across Alphabet and Google Cloud. As a result, our users no longer need to worry about things like certificate expiration, because we automatically refresh the certificates for our customers. Some implementation highlights include:

  • All Blogger blogs, Google Sites, and Google My Business sites now get HTTPS by default for their custom domains.
  • Google Cloud customers get the benefits of managed TLS on their domains:
    • Developers building with Firebase, Cloud Run, and AppEngine automatically get HTTPS for their applications.
    • When deploying applications with Google Kubernetes Engine or behind Google Cloud Load Balancing (GCLB), certificate management is taken care of if customers choose to use Google-managed certificates. This also makes TLS use with these products easy and reliable.
Performance, scalability, and reliability are foundational requirements for Google services. We have established our own publicly trusted CA, Google Trust Services, to ensure we can meet those criteria for our products and services. At the same time, we believe in user choice. So even as we make it easier for you to use Google Trust Services, we have also made it possible across Google’s products and services to use Let’s Encrypt. This choice can be made easily through the creation of a CAA record indicating your preference.
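As an illustration (example.com is a placeholder), a zone-file sketch of CAA records expressing such a preference might look like the following; "pki.goog" and "letsencrypt.org" are the issuer domains published by Google Trust Services and Let’s Encrypt, respectively:

```
; Restrict certificate issuance for example.com to the two CAs
; named below (RFC 8659 CAA record syntax).
example.com.  86400  IN  CAA  0 issue "letsencrypt.org"
example.com.  86400  IN  CAA  0 issue "pki.goog"
```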

While everyone appreciates TLS working out of the box, we also know power users have specialized needs. This is why we have provided rich capabilities in Google Cloud Load Balancing to let customers control policies around TLS termination. 

In addition, through our work on Certificate Transparency in collaboration with other organizations, we have made it easier for our customers to protect their and their customers’ brands by monitoring the WebPKI ecosystem for certificates issued for their domains or those that look similar to their domains, so they can take proactive measures to stop any abuse before it becomes an issue. For example, Facebook used Certificate Transparency Logs to catch a number of phishing websites that tried to impersonate their services. 

We recognize how important security, privacy, and reliability are to you and have been investing across our product portfolio to ensure that when it comes to TLS, you have the tools you need to deploy with confidence. Going forward, we look forward to a continued partnership to make the Internet a safer place together.

FuzzBench: Fuzzer Benchmarking as a Service


We are excited to launch FuzzBench, a fully automated, open source, free service for evaluating fuzzers. The goal of FuzzBench is to make it painless to rigorously evaluate fuzzing research and make fuzzing research easier for the community to adopt.
Fuzzing is an important bug finding technique. At Google, we’ve found tens of thousands of bugs (1, 2) with fuzzers like libFuzzer and AFL. There are numerous research papers that either improve upon these tools (e.g. MOpt-AFL, AFLFast, etc) or introduce new techniques (e.g. Driller, QSYM, etc) for bug finding. However, it is hard to know how well these new tools and techniques generalize on a large set of real world programs. Though research normally includes evaluations, these often have shortcomings—they don't use a large and diverse set of real world benchmarks, use few trials, use short trials, or lack statistical tests to indicate whether findings are significant. This is understandable, since full scale experiments can be prohibitively expensive for researchers. For example, a 24-hour, 10-trial, 10-fuzzer, 20-benchmark experiment would require 2,000 CPUs to complete in a day.
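The CPU figure above follows directly from the experiment shape, assuming one CPU per trial:

```python
trials, fuzzers, benchmarks = 10, 10, 20
# Each (fuzzer, benchmark, trial) combination occupies one CPU
# for the full 24 hours, so finishing in a day takes:
cpus = trials * fuzzers * benchmarks
print(cpus)  # 2000
```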
To help solve these issues the OSS-Fuzz team is launching FuzzBench, a fully automated, open source, free service. FuzzBench provides a framework for painlessly evaluating fuzzers in a reproducible way. To use FuzzBench, researchers can simply integrate a fuzzer and FuzzBench will run an experiment for 24 hours with many trials and real world benchmarks. Based on data from this experiment, FuzzBench will produce a report comparing the performance of the fuzzer to others and give insights into the strengths and weaknesses of each fuzzer. This should allow researchers to focus more of their time on perfecting techniques and less time setting up evaluations and dealing with existing fuzzers.
Integrating a fuzzer with FuzzBench is simple: most integrations are less than 50 lines of code (example). Once a fuzzer is integrated, it can fuzz almost all 250+ OSS-Fuzz projects out of the box. We have already integrated ten fuzzers, including AFL, LibFuzzer, Honggfuzz, and several academic projects such as QSYM and Eclipser.
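To give a feel for what such an integration looks like, here is a minimal sketch of a `fuzzer.py` with the `build()` and `fuzz()` hooks FuzzBench calls; the `build.sh` script and AFL flags are illustrative, and the linked example in the repo is authoritative:

```python
import subprocess

def afl_command(input_corpus, output_corpus, target_binary):
    # Construct an AFL-style invocation for one benchmark binary.
    return ["./afl-fuzz", "-i", input_corpus, "-o", output_corpus,
            "--", target_binary]

def build():
    # Called by FuzzBench to compile the benchmark with this
    # fuzzer's instrumentation (sketch: delegate to a build script).
    subprocess.check_call(["./build.sh"])

def fuzz(input_corpus, output_corpus, target_binary):
    # Called by FuzzBench to run one trial of the experiment.
    subprocess.check_call(
        afl_command(input_corpus, output_corpus, target_binary))
```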
Reports include statistical tests to give an idea of how likely it is that performance differences between fuzzers are simply due to chance, as well as the raw data so researchers can do their own analysis. Performance is determined by the number of covered program edges, though we plan on adding crashes as a performance metric. You can view a sample report here.
How to Participate
Our goal is to develop FuzzBench with community contributions and input so that it becomes the gold standard for fuzzer evaluation. We invite members of the fuzzing research community to contribute their fuzzers and techniques, even while they are in development. Better evaluations will lead to more adoption and greater impact for fuzzing research.
We also encourage contributions of better ideas and techniques for evaluating fuzzers. Though we have made some progress on this problem, we have not solved it and we need the community’s help in developing these best practices.
Please join us by contributing to the FuzzBench repo on GitHub.

Helping Developers with Permission Requests


User trust is critical to the success of developers of every size. On the Google Play Store, we aim to help developers boost the trust of their users by surfacing signals in the Developer Console about how to improve their privacy posture. Towards this aim, we surface a message to developers when we think their app is asking for a permission that is likely unnecessary.
This is important because numerous studies have shown that user trust can be affected when the purpose of a permission is not clear [1]. In addition, research has shown that when users are given a choice between similar apps, and one of them requests fewer permissions than the other, they choose the app with fewer permissions [2].
Determining whether or not a permission request is necessary can be challenging. Android developers request permissions in their apps for many reasons - some related to core functionality, and others related to personalization, testing, advertising, and other factors. To help with this, we identify a peer set of apps with similar functionality and compare a developer’s permission requests to those of their peers. If a very large percentage of these similar apps are not asking for a permission, and the developer is, we then let the developer know that their permission request is unusual compared to their peers. Our determination of the peer set is more involved than simply using Play Store categories. Our algorithm combines multiple signals that feed Natural Language Processing (NLP) and deep learning technology to determine this set. A full explanation of our method is outlined in our recent publication, entitled “Reducing Permissions Requests in Mobile Apps,” that appeared at the Internet Measurement Conference (IMC) in October 2019 [3]. (Note that the threshold for surfacing the warning signal, as stated in this paper, is subject to change.)
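The comparison step can be sketched as follows; the 5% cutoff and the function itself are purely illustrative (the real system derives peer sets with NLP and deep learning, and its threshold is subject to change):

```python
def unusual_permissions(app_perms, peer_perm_lists, threshold=0.05):
    """Flag permissions the app requests but almost no peers do.

    app_perms: permissions requested by the developer's app.
    peer_perm_lists: one permission list per peer app.
    threshold: hypothetical cutoff; a permission is flagged when
    fewer than this fraction of peers request it.
    """
    flagged = []
    n = len(peer_perm_lists)
    for perm in app_perms:
        share = sum(perm in peers for peers in peer_perm_lists) / n
        if share < threshold:
            flagged.append(perm)
    return flagged

peers = [["CAMERA"], ["CAMERA"], ["CAMERA", "INTERNET"]]
print(unusual_permissions(["CAMERA", "READ_CONTACTS"], peers))
# ['READ_CONTACTS']
```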
We surface this information to developers in the Play Console and we let the developer make the final call as to whether or not the permission is truly necessary. It is possible that the developer has a feature unlike all of its peers. Once a developer removes a permission, they won’t see the warning any longer. Note that the warning is based on our computation of the set of peer apps similar to the developer's. This is an evolving set, frequently recomputed, so the message may go away if there is an underlying change to the set of peer apps and their behavior. Similarly, even if a developer is not currently seeing a warning about a permission, they might in the future if the underlying peer set and its behavior changes. An example warning is depicted below.

This warning also helps to remind developers that they are not obligated to include all of the permission requests occurring within the libraries they include inside their apps. We are pleased to say that in the first year after deployment of this advice signal nearly 60% of warned apps removed permissions. Moreover, this occurred across all Play Store categories and all app popularity levels. The breadth of this developer response impacted over 55 billion app installs [3]. This warning is one component of Google’s larger strategy to help protect users and help developers achieve good security and privacy practices, such as Project Strobe, our guidelines on permissions best practices, and our requirements around safe traffic handling.
Acknowledgements
Giles Hogben, Android Play Dashboard and Pre-Launch Report teams

References

[1] Modeling Users’ Mobile App Privacy Preferences: Restoring Usability in a Sea of Permission Settings, by J. Lin, B. Liu, N. Sadeh and J. Hong. In Proceedings of the Symposium on Usable Privacy and Security (SOUPS), 2014.
[2] Using Personal Examples to Improve Risk Communication for Security & Privacy Decisions, by M. Harbach, M. Hettig, S. Weber and M. Smith. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2014.
[3] Reducing Permission Requests in Mobile Apps, by S. T. Peddinti, I. Bilogrevic, N. Taft, M. Pelikan, U. Erlingsson, P. Anthonysamy and G. Hogben. In Proceedings of the ACM Internet Measurement Conference (IMC), 2019.

Data Encryption on Android with Jetpack Security

Posted by Jon Markoff, Staff Developer Advocate, Android Security

Illustration by Virginia Poltrack

Have you ever tried to encrypt data in your app? As a developer, you want to keep data safe, and in the hands of the party intended to use it. But if you’re like most Android developers, you don’t have a dedicated security team to help encrypt your app’s data properly. By searching the web to learn how to encrypt data, you might get answers that are several years out of date and provide incorrect examples.

The Jetpack Security (JetSec) crypto library provides abstractions for encrypting Files and SharedPreferences objects. The library promotes the use of the AndroidKeyStore while using safe and well-known cryptographic primitives. Using EncryptedFile and EncryptedSharedPreferences allows you to locally protect files that may contain sensitive data, API keys, OAuth tokens, and other types of secrets.

Why would you want to encrypt data in your app? Doesn’t Android, since 5.0, encrypt the contents of the user's data partition by default? It certainly does, but there are some use cases where you may want an extra level of protection. If your app uses shared storage, you should encrypt the data. In the app home directory, your app should encrypt data if your app handles sensitive information including but not limited to personally identifiable information (PII), health records, financial details, or enterprise data. When possible, we recommend that you tie this information to biometrics for an extra level of protection.

Jetpack Security is based on Tink, an open-source, cross-platform security project from Google. Tink might be appropriate if you need general encryption, hybrid encryption, or something similar. Jetpack Security data structures are fully compatible with Tink.

Key Generation

Before we jump into encrypting your data, it’s important to understand how your encryption keys will be kept safe. Jetpack Security uses a master key, which encrypts all subkeys that are used for each cryptographic operation. JetSec provides a recommended default master key in the MasterKeys class. This class uses a basic AES256-GCM key which is generated and stored in the AndroidKeyStore. The AndroidKeyStore is a container which stores cryptographic keys in the TEE or StrongBox, making them hard to extract. Subkeys are stored in a configurable SharedPreferences object.

Primarily, we use the AES256_GCM_SPEC specification in Jetpack Security, which is recommended for general use cases. AES256-GCM is symmetric and generally fast on modern devices.


val keyAlias = MasterKeys.getOrCreate(MasterKeys.AES256_GCM_SPEC)

For apps that require more configuration, or handle very sensitive data, it’s recommended to build your KeyGenParameterSpec, choosing options that make sense for your use. Time-bound keys with BiometricPrompt can provide an extra level of protection against rooted or compromised devices.

Important options:

  • userAuthenticationRequired() and userAuthenticationValiditySeconds() can be used to create a time-bound key. Time-bound keys require authorization using BiometricPrompt for both encryption and decryption of symmetric keys.
  • unlockedDeviceRequired() sets a flag that helps ensure key access cannot happen if the device is not unlocked. This flag is available on Android Pie and higher.
  • Use setIsStrongBoxBacked() to run crypto operations on a separate, stronger chip. This has a slight performance impact, but is more secure. It’s available on some devices that run Android Pie or higher.

Note: If your app needs to encrypt data in the background, you should not use time-bound keys or require that the device is unlocked, as you will not be able to accomplish this without a user present.


// Custom Advanced Master Key
val advancedSpec = KeyGenParameterSpec.Builder(
    "master_key",
    KeyProperties.PURPOSE_ENCRYPT or KeyProperties.PURPOSE_DECRYPT
).apply {
    setBlockModes(KeyProperties.BLOCK_MODE_GCM)
    setEncryptionPaddings(KeyProperties.ENCRYPTION_PADDING_NONE)
    setKeySize(256)
    setUserAuthenticationRequired(true)
    setUserAuthenticationValidityDurationSeconds(15) // must be larger than 0
    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.P) {
        setUnlockedDeviceRequired(true)
        setIsStrongBoxBacked(true)
    }
}.build()

val advancedKeyAlias = MasterKeys.getOrCreate(advancedSpec)

Unlocking time-bound keys

You must use BiometricPrompt to authorize the device if your key was created with the following options:

  • userAuthenticationRequired is true
  • userAuthenticationValiditySeconds > 0

After the user authenticates, the keys are unlocked for the amount of time set in the validity seconds field. The AndroidKeystore does not have an API to query key settings, so your app must keep track of these settings. You should build your BiometricPrompt instance in the onCreate() method of the activity where you present the dialog to the user.

BiometricPrompt code to unlock time-bound keys

// Activity.onCreate

val promptInfo = PromptInfo.Builder()
    .setTitle("Unlock?")
    .setDescription("Would you like to unlock this key?")
    .setDeviceCredentialAllowed(true)
    .build()

val biometricPrompt = BiometricPrompt(
    this, // Activity
    ContextCompat.getMainExecutor(this),
    authenticationCallback
)

private val authenticationCallback = object : AuthenticationCallback() {
    override fun onAuthenticationSucceeded(
        result: AuthenticationResult
    ) {
        super.onAuthenticationSucceeded(result)
        // Unlocked -- do work here.
    }

    override fun onAuthenticationError(
        errorCode: Int, errString: CharSequence
    ) {
        super.onAuthenticationError(errorCode, errString)
        // Handle error.
    }
}

To use:
biometricPrompt.authenticate(promptInfo)

Encrypt Files

Jetpack Security includes an EncryptedFile class, which removes the challenges of encrypting file data. Similar to File, EncryptedFile provides a FileInputStream object for reading and a FileOutputStream object for writing. Files are encrypted using Streaming AEAD, which follows the OAE2 definition. The data is divided into chunks and encrypted using AES256-GCM in such a way that the chunks cannot be reordered.

val secretFile = File(filesDir, "super_secret")
val encryptedFile = EncryptedFile.Builder(
    secretFile,
    applicationContext,
    advancedKeyAlias,
    FileEncryptionScheme.AES256_GCM_HKDF_4KB
)
    .setKeysetAlias("file_key") // optional
    .setKeysetPrefName("secret_shared_prefs") // optional
    .build()

encryptedFile.openFileOutput().use { outputStream ->
    // Write data to your encrypted file
}

encryptedFile.openFileInput().use { inputStream ->
    // Read data from your encrypted file
}

Encrypt SharedPreferences

If your application needs to save key-value pairs, such as API keys, JetSec provides the EncryptedSharedPreferences class, which uses the same SharedPreferences interface that you’re used to.

Both keys and values are encrypted. Keys are encrypted using AES256-SIV-CMAC, which provides a deterministic cipher text; values are encrypted with AES256-GCM and are bound to the encrypted key. This scheme allows the key data to be encrypted safely, while still allowing lookups.

EncryptedSharedPreferences.create(
    "my_secret_prefs",
    advancedKeyAlias,
    applicationContext,
    PrefKeyEncryptionScheme.AES256_SIV,
    PrefValueEncryptionScheme.AES256_GCM
).edit {
    // Update secret values
}
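Reading values back goes through the standard SharedPreferences API, with decryption handled transparently. A minimal sketch, assuming the same preferences file and master key as above (the "api_key" entry name is illustrative):

```kotlin
// Sketch: create the instance once, then read and write through the
// normal SharedPreferences interface.
val prefs = EncryptedSharedPreferences.create(
    "my_secret_prefs",
    advancedKeyAlias,
    applicationContext,
    PrefKeyEncryptionScheme.AES256_SIV,
    PrefValueEncryptionScheme.AES256_GCM
)

prefs.edit().putString("api_key", "example-value").apply() // illustrative entry
val apiKey = prefs.getString("api_key", null) // decrypted on read
```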

More Resources

FileLocker is a sample app on the Android Security GitHub samples page. It’s a great example of how to use file encryption with Jetpack Security.

Happy Encrypting!

Improving Malicious Document Detection in Gmail with Deep Learning

Gmail protects your incoming mail against spam, phishing attempts, and malware. Our existing machine learning models are highly effective at doing this, and in conjunction with our other protections, they help block more than 99.9% of threats from reaching Gmail inboxes.

One of our key protections is our malware scanner, which processes more than 300 billion attachments each week to block harmful content. 63% of the malicious documents we block differ from day to day. To stay ahead of this constantly evolving threat, we recently added a new generation of document scanners that rely on deep learning to improve our detection capabilities. We’re sharing the details of this technology and its early success this week at RSA 2020.

Since the new scanner launched at the end of 2019, we have increased our daily detection coverage of Office documents that contain malicious scripts by 10%. Our technology is especially helpful at detecting adversarial, bursty attacks. In these cases, our new scanner has improved our detection rate by 150%. Under the hood, our new scanner uses a distinct TensorFlow deep-learning model trained with TFX (TensorFlow Extended) and a custom document analyzer for each file type. The document analyzers are responsible for parsing the document, identifying common attack patterns, extracting macros, deobfuscating content, and performing feature extraction.


Strengthening our document detection capabilities is one of our key focus areas, as malicious documents represent 58% of the malware targeting Gmail users. We are still actively developing this technology, and right now, we only use it to scan Office documents.

Our new scanner runs in parallel with existing detection capabilities, all of which contribute to the final verdict of our decision engine to block a malicious document. Combining different scanners is one of the cornerstones of our defense-in-depth approach to help protect users and ensure our detection system is resilient to adversarial attacks.

We will continue to actively expand the use of artificial intelligence to protect our users’ inboxes, and to stay ahead of attacks.

Disruptive ads enforcement and our new approach


As part of our ongoing efforts — along with help from newly developed technologies — today we’re announcing nearly 600 apps have been removed from the Google Play Store and banned from our ad monetization platforms, Google AdMob and Google Ad Manager, for violating our disruptive ads policy and disallowed interstitial policy.

Mobile ad fraud is an industry-wide challenge that can appear in many different forms with a variety of methods, and it has the potential to harm users, advertisers and publishers. At Google, we have dedicated teams focused on detecting and stopping malicious developers that attempt to defraud the mobile ecosystem. As part of these efforts, we take action against those who create seemingly innocuous apps, but which actually violate our ads policies.

We define disruptive ads as ads that are displayed to users in unexpected ways, including impairing or interfering with the usability of device functions. While they can occur in-app, one form of disruptive ads we’ve seen on the rise is something we call out-of-context ads, which is when malicious developers serve ads on a mobile device when the user is not actually active in their app.

This is an invasive maneuver that results in poor user experiences, often disrupting key device functions, and it can lead to unintentional ad clicks that waste advertiser spend. For example, imagine being unexpectedly served a full-screen ad when you attempt to make a phone call, unlock your phone, or while using your favorite map app’s turn-by-turn navigation.

Malicious developers continue to become more savvy in deploying and masking disruptive ads, but we’ve developed new technologies of our own to protect against this behavior. We recently developed an innovative machine-learning based approach to detect when apps show out-of-context ads, which led to the enforcement we’re announcing today.

As we move forward, we will continue to invest in new technologies to detect and prevent emerging threats that can generate invalid traffic, including disruptive ads, and to find more ways to adapt and evolve our platform and ecosystem policies to ensure that users and advertisers are protected from bad behavior.