Category Archives: Online Security Blog

The latest news and insights from Google on security and safety on the Internet

Hosted S/MIME by Google provides enhanced security for Gmail in the enterprise


We are constantly working to meet the needs of our enterprise customers, including enhanced security for their communications. Our aim is to offer a secure way to transport sensitive information over today’s often-insecure email channels, without compromising Gmail’s extensive protections against spam, phishing and malware.

Why hosted S/MIME?
Client-side S/MIME has been around for many years. However, its adoption has been limited because it is difficult to deploy (end users have to manually install certificates to their email applications) and the underlying email service cannot efficiently protect against spam, malware and phishing because client-side S/MIME makes the email content opaque.

With Google’s new hosted S/MIME solution, once an incoming S/MIME-encrypted email is received, it is stored using Google’s encryption. This means that all normal processing of the email can happen, including extensive protections for spam/phishing/malware, admin services (such as Vault retention, auditing and email routing rules), and high-value end user features such as mail categorization, advanced search and Smart Reply. For the vast majority of emails, this is the safest solution, giving the benefit of strong authentication and encryption in transit without losing the safety and features of Google’s processing.

Using hosted S/MIME provides an added layer of security compared to using SMTP over TLS to send emails. TLS only guarantees to the sender’s service that the first hop transmission is encrypted and to the recipient that the last hop was encrypted. But in practice, emails often take many hops (through forwarders, mailing lists, relays, appliances, etc). With hosted S/MIME, the message itself is encrypted. This facilitates secure transit all the way down to the recipient’s mailbox.

S/MIME also adds verifiable account-level signature authentication (versus only domain-based signatures with DKIM). This means that email receivers can verify that incoming email is actually from the sending account, not just a matching domain, and that the message has not been tampered with after it was sent.

How to use hosted S/MIME?
S/MIME requires every email address to have a suitable certificate attached to it. By default, Gmail requires the certificate to be from a publicly trusted root Certificate Authority (CA) which meets strong cryptographic standards. System administrators will have the option to lower these requirements for their domains.

To use hosted S/MIME, companies need to upload their own certificates (with private keys) to Gmail, which can be done by end users via Gmail settings or by admins in bulk via the Gmail API.

From there, using hosted S/MIME is a seamless experience for end users. When receiving a digitally signed message, Gmail automatically associates the public key with the contact of the sender. By default, Gmail automatically signs and encrypts outbound messages if there is a public S/MIME key available for the recipient. Although users have the option to manually remove encryption, admins can set up rules that override their action.

Hosted S/MIME is supported on Gmail web/iOS/Android, on Inbox and on clients connected to the Gmail service via IMAP. Users can exchange signed and encrypted emails with recipients using hosted S/MIME or client-side S/MIME.
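For readers curious what the underlying primitive looks like, here is a minimal sketch of producing an S/MIME signed message with the third-party Python `cryptography` package (an assumption of this example; hosted S/MIME does the equivalent server-side, and a real deployment would use a CA-issued certificate rather than the throwaway self-signed one generated here):

```python
import datetime

from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.hazmat.primitives.serialization import pkcs7

# Throwaway self-signed certificate standing in for a CA-issued S/MIME cert.
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "alice@example.com")])
now = datetime.datetime.utcnow()
cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=1))
    .sign(key, hashes.SHA256())
)

# Produce an S/MIME message carrying a detached signature over the body.
signed = (
    pkcs7.PKCS7SignatureBuilder()
    .set_data(b"Confidential quarterly figures")
    .add_signer(cert, key, hashes.SHA256())
    .sign(serialization.Encoding.SMIME, [pkcs7.PKCS7Options.DetachedSignature])
)
```

Encrypting to a recipient works analogously against the recipient's certificate, which is why every address needs a suitable certificate attached to it.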

Which companies should consider using hosted S/MIME?
Hosted S/MIME provides a solution that is easy to manage for administrators and seamless for end users. Companies that want security in transit and digital signature/non-repudiation at the account level should consider using hosted S/MIME. This is a need for many companies working with sensitive/confidential information.

Hosted S/MIME is available for G Suite Enterprise edition users.

Better and more usable protection from phishing


Despite constant advancements in online safety, phishing — one of the web’s oldest and simplest attacks — remains a tough challenge for the security community. Subtle tricks and good old-fashioned con-games can cause even the most security-conscious users to reveal their passwords or other personal information to fraudsters.
New advancements in phishing protection

This is why we’re excited about the news for G Suite customers: the launch of Security Key enforcement. Now, G Suite administrators can better protect their employees by enabling Two-Step Verification (2SV) using only Security Keys as the second factor, making this protection the norm rather than just an option. 2SV with only a Security Key offers the highest level of protection from phishing. Instead of entering a unique code as a second factor at sign-in, Security Keys send us cryptographic proof that users are on a legitimate Google site and that they have their Security Keys with them. Since most hijackers are remote, their efforts are thwarted: they cannot get physical possession of the Security Key.
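The origin-binding that defeats phishing can be sketched with a toy model (an illustration only: real Security Keys implement the FIDO U2F protocol with per-origin public-key pairs, while this simplification uses shared-secret HMAC for brevity):

```python
import hashlib
import hmac


class ToySecurityKey:
    """Toy model of a Security Key (illustration only, not FIDO U2F).

    It derives a distinct secret per origin, so a response minted for a
    phishing origin can never verify at the legitimate one.
    """

    def __init__(self, master_secret: bytes):
        self._master = master_secret

    def respond(self, origin: str, challenge: bytes) -> str:
        per_origin = hmac.new(self._master, origin.encode(), hashlib.sha256).digest()
        return hmac.new(per_origin, challenge, hashlib.sha256).hexdigest()


def server_verify(per_origin_secret: bytes, challenge: bytes, response: str) -> bool:
    # In real U2F the server holds only a public key; a shared secret is
    # used here purely to keep the sketch short.
    expected = hmac.new(per_origin_secret, challenge, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, response)
```

Because the response is bound to the origin the browser actually visited, a look-alike phishing site receives an answer that is useless at the real one.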

Users can also take advantage of new Bluetooth low energy (BLE) Security Key support, which makes using 2SV Security Key protection easier on mobile devices. BLE Security Keys, which work on both Android and iOS, improve upon the usability of other form factors.
A long history of phishing protections

We’ve helped protect users from phishing for many years. We rolled out 2SV back in 2011, and later strengthened it in 2014 with the addition of Security Keys. These launches complement our many layers of phishing protections — Safe Browsing warnings, Gmail spam filters, and account sign-in challenges — as well as our work with industry groups like the FIDO Alliance and M3AAWG to develop standards and combat phishing across the industry. In the coming months, we’ll build on these protections and offer users the opportunity to further protect their personal Google Accounts.

Vulnerability Rewards Program: 2016 Year in Review



We created our Vulnerability Rewards Program in 2010 because researchers should be rewarded for protecting our users. Their discoveries help keep our users, and the internet at large, as safe as possible.

The amounts we award vary, but our message to researchers does not; each one represents a sincere ‘thank you’.

As we have for 2014 and 2015, we’re again sharing a yearly wrap-up of the Vulnerability Rewards Program.


What was new?

In short — a lot. Here’s a quick rundown:

We opened up Chrome's Fuzzer Program, previously by-invitation only, to submissions from the public. The program allows researchers to run fuzzers at large scale, across thousands of cores on Google hardware, and receive reward payments automatically.


On the product side, we saw amazing contributions from Android researchers all over the world, less than a year after Android launched its VRP. We also expanded our overall VRP to include more products, including OnHub and Nest devices.


We increased our presence at events around the world, like pwn2own and Pwnfest. The vulnerabilities responsibly disclosed at these events enabled us to quickly provide fixes to the ecosystem and keep customers safe. At both events, we were able to close down a vulnerability in Chrome within days of being notified of the issue.


Stories that stood out

As always, there was no shortage of inspiring, funny, and quirky anecdotes from the 2016 year in VRP.
  • We met Jasminder Pal Singh at Nullcon in India. Jasminder is a long-time contributor to the VRP, but this research is a side project for him. He spends most of his time growing Jasminder Web Services Point, the startup he operates with six other colleagues and friends. The team consists of two web developers, one graphic designer, one Android developer, one iOS developer, one Linux administrator, and a content manager/writer. Jasminder’s VRP rewards fund the startup. The number of reports we receive from researchers in India is growing, and we’re growing the VRP’s presence there with additional conference sponsorships, trainings, and more.

Jasminder (back right) and his team
  • Jon Sawyer worked with his colleague Sean Beaupre from Streamlined Mobile Solutions, and friend Ben Actis to submit three Android vulnerability reports. A resident of Clallam County, Washington, Jon donated their $8,000 reward to their local Special Olympics team, the Orcas. Jon told us the reward was particularly meaningful because his son, Benji, plays on the team. He said: “Special Olympics provides a sense of community, accomplishment, and free health services at meets. They do incredible things for these people, at no cost for the athletes or their parents. Our donation is going to supply them with new properly fitting uniforms, new equipment, cover some facility rental fees (bowling alley, gym, track, swimming pool) and most importantly help cover the biggest cost, transportation.”
  • VRP researchers sometimes attach videos that demonstrate the bug. While making a great proof-of-concept video is a skill in itself, our researchers raised it to another level this year. Check out this video Frans Rosén sent us. It’s perfectly synchronized to the background music! We hope this trend continues in 2017 ;-)

Researchers’ individual contributions, and our relationship with the community, have never been more important. A hearty thank you to everyone that contributed to the VRP in 2016 — we’re excited to work with you (and others!) in 2017 and beyond. 

*Josh Armour (VRP Program Manager), Andrew Whalley (Chrome VRP), and Quan To (Android VRP) contributed mightily to help lead these Google-wide efforts.


The foundation of a more secure web


In support of our work to implement HTTPS across all of our products (https://www.google.com/transparencyreport/https/), we have been operating our own subordinate Certificate Authority (GIAG2), issued by a third party. This has been a key element enabling us to more rapidly handle the SSL/TLS certificate needs of Google products.

As we look forward to the evolution of both the web and our own products it is clear HTTPS will continue to be a foundational technology. This is why we have made the decision to expand our current Certificate Authority efforts to include the operation of our own Root Certificate Authority. To this end, we have established Google Trust Services (https://pki.goog/), the entity we will rely on to operate these Certificate Authorities on behalf of Google and Alphabet.

The process of embedding Root Certificates into products and waiting for the associated versions of those products to be broadly deployed can take time. For this reason we have also purchased two existing Root Certificate Authorities, GlobalSign R2 and R4. These Root Certificates will enable us to begin independent certificate issuance sooner rather than later.

We intend to continue the operation of our existing GIAG2 subordinate Certificate Authority. This change will enable us to begin the process of migrating to our new, independent infrastructure.

Google Trust Services now operates its own set of Root Certificates. Because of the time involved in establishing an independently trusted Root Certificate Authority, we have also secured the option to cross-sign our CAs under existing third-party roots.

If you are building products that will connect to a Google property, you need to include, at a minimum, the Root Certificates operated by Google Trust Services. That said, even though we now operate our own roots, we may still choose to operate subordinate CAs under third-party operated roots.


For this reason, if you are developing code intended to connect to a Google property, we still recommend you include a wide set of trustworthy roots. Google maintains a sample PEM file at https://pki.goog/roots.pem, which is periodically updated to include the Google Trust Services owned and operated roots as well as other roots that may be necessary, now or in the future, to communicate with and use Google products and services.
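As a sketch, a client could pin its trust to such a bundle like this (the helper names are ours, and the bundle is assumed to have been downloaded locally as roots.pem):

```python
import ssl


def count_roots(pem_text: str) -> int:
    """Count the certificates in a PEM bundle."""
    return pem_text.count("-----BEGIN CERTIFICATE-----")


def context_from_bundle(pem_path: str) -> ssl.SSLContext:
    """Build a TLS client context that trusts only the roots in pem_path."""
    ctx = ssl.create_default_context()
    ctx.load_verify_locations(cafile=pem_path)
    return ctx
```

Re-downloading the bundle periodically matters here, since the set of roots needed to reach Google services can change over time.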

App Security Improvements: Looking back at 2016

Posted by Rahul Mishra, Android Security Program Manager
[Cross-posted from the Android Developers Blog]

In April 2016, the Android Security team described how the Google Play App Security Improvement (ASI) program has helped developers fix security issues in 100,000 applications. Since then, we have detected and notified developers of 11 new security issues and provided developers with resources and guidance to update their apps. Because of this, over 90,000 developers have updated over 275,000 apps!
ASI now notifies developers of 26 potential security issues. To make this process more transparent, we introduced a new page where developers can find information about all these security issues in one place. This page includes links to help center articles containing instructions and additional support contacts. Developers can use this page as a resource to learn about new issues and keep track of all past issues.

Developers can also refer to our security best practices documents and security checklist, which are aimed at improving the understanding of general security concepts and providing examples that can help tackle app-specific issues.

How you can help:
For feedback or questions, please reach out to us through the Google Play Developer Help Center or by email at security+asi@android.com.

Silence speaks louder than words when finding malware

Posted by Megan Ruthven, Software Engineer

[Cross-posted from the Android Developers Blog]

In Android Security, we're constantly working to better understand how to make Android devices operate more smoothly and securely. One security solution included on all devices with Google Play is Verify apps. Verify apps checks if there are Potentially Harmful Apps (PHAs) on your device. If a PHA is found, Verify apps warns the user and enables them to uninstall the app.

But sometimes devices stop checking up with Verify apps. This may happen for a non-security related reason, like buying a new phone, or it could mean something more concerning is going on. When a device stops checking up with Verify apps, it is considered Dead or Insecure (DOI). An app with a high enough percentage of DOI devices downloading it is considered a DOI app. We use the DOI metric, along with our other security systems, to help determine whether an app is a PHA and to protect Android users. Additionally, when we discover vulnerabilities, we patch Android devices with our security update system.

This blog post explores the Android Security team's research into identifying the security-related reasons that devices stop checking up with Verify apps, and into preventing it from happening in the future.
Flagging DOI Apps

To understand this problem more deeply, the Android Security team correlates app install attempts and DOI devices to find apps that harm the device in order to protect our users.
With these factors in mind, we then focus on 'retention'. A device is considered retained if it continues to perform periodic Verify apps security check ups after an app download. If it doesn't, it's considered potentially dead or insecure (DOI). An app's retention rate is the percentage of all retained devices that downloaded the app in one day. Because retention is a strong indicator of device health, we work to maximize the ecosystem's retention rate.

Therefore, we use an app DOI scorer, which assumes that all apps should have a similar device retention rate. If an app's retention rate is a couple of standard deviations lower than average, the DOI scorer flags it. A common way to express the number of standard deviations from the average is a Z-score:

    Z = (x − N·p) / √(N·p·(1 − p))

  • N = Number of devices that downloaded the app.
  • x = Number of retained devices that downloaded the app.
  • p = Probability that a device downloading any app will be retained.

In this context, we call the Z-score of an app's retention rate a DOI score. An app has a statistically significantly lower retention rate if its Z-score is much less than -3.7: if the null hypothesis were true, there would be much less than a 0.01% chance of the Z-score's magnitude being that high. Here, the null hypothesis is that the app's correlation with lower retention is accidental, independent of what the app does.
This lets extreme apps (those with low retention rates and high download counts) percolate to the top of the DOI list. From there, we combine the DOI score with other information to determine whether to classify the app as a PHA. We then use Verify apps to remove existing installs of the app and prevent future installs.
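The scoring described above can be sketched as follows (the function names and worked numbers are ours, not from the production system):

```python
import math


def doi_score(n_downloads: int, n_retained: int, p_retained: float) -> float:
    """Z-score of an app's retention against the expected binomial rate.

    n_downloads: devices that downloaded the app (N)
    n_retained:  of those, devices still checking up with Verify apps (x)
    p_retained:  baseline probability that any downloading device is retained (p)
    """
    expected = n_downloads * p_retained
    stddev = math.sqrt(n_downloads * p_retained * (1 - p_retained))
    return (n_retained - expected) / stddev


def is_flagged(score: float, threshold: float = -3.7) -> bool:
    # Retention far below average (Z much less than -3.7) flags the app.
    return score < threshold
```

For example, an app downloaded by 10,000 devices of which only 9,000 are retained, against a 95% baseline, scores far below -3.7 and would be flagged for further review.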

Difference between a regular and DOI app download on the same device.


Results in the wild
Among others, the DOI score flagged many apps in three well-known malware families: Hummingbad, Ghost Push, and Gooligan. Although they behave differently, the DOI scorer flagged over 25,000 apps in these three families of malware because they can degrade the Android experience to such an extent that a non-negligible number of users factory reset or abandon their devices. This approach provides us with another perspective to discover PHAs and block them before they gain popularity. Without the DOI scorer, many of these apps would have escaped the extra scrutiny of a manual review.
The DOI scorer and all of Android's anti-malware work is one of multiple layers protecting users and developers on Android. For an overview of Android's security and transparency efforts, check out our page.


Security Through Transparency


Encryption is a foundational technology for the web. We’ve spent a lot of time working through the intricacies of making encrypted apps easy to use and in the process, realized that a generic, secure way to discover a recipient's public keys for addressing messages correctly is important. Not only would such a thing be beneficial across many applications, but nothing like this exists as a generic technology.

A solution would need to reliably scale to internet size while providing a way to establish secure communications through untrusted servers. It became clear that if we combined insights from Certificate Transparency and CONIKS we could build a system with the properties we wanted and more.

The result is Key Transparency, which we’re making available as an open-source prototype today.

Why Key Transparency is useful

Existing methods of protecting users against server compromise require users to manually verify recipients’ accounts in-person. This simply hasn’t worked. The PGP web-of-trust for encrypted email is just one example: over 20 years after its invention, most people still can't or won’t use it, including its original author. Messaging apps, file sharing, and software updates also suffer from the same challenge.

One of our goals with Key Transparency was to simplify this process and create infrastructure that makes it usable by non-experts. The relationship between online personas and public keys should be automatically verifiable and publicly auditable. Users should be able to see all the keys that have been attached to an account, and any attempt to tamper with the record should be publicly visible. This also ensures that senders will always use the same keys that account owners are verifying.

Key Transparency is a general-use, transparent directory that makes it easy for developers to create systems of all kinds with independently auditable account data. It can be used in a variety of scenarios where data needs to be encrypted or authenticated, and to build security features that are easy for people to understand while supporting important user needs like account recovery.
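The auditability property, where any tampering with a key record becomes publicly visible, can be illustrated with a toy append-only log (a deliberate simplification: the real design uses Merkle trees, as in Certificate Transparency and CONIKS, rather than this linear hash chain):

```python
import hashlib


class ToyKeyLog:
    """Append-only directory of (account, public key) entries.

    Each entry's recorded head chains over the previous head, so rewriting
    history changes every subsequent head and is detectable by anyone who
    noted an earlier head value.
    """

    def __init__(self):
        self.entries = []
        self.head = b"\x00" * 32

    def append(self, account: str, pubkey: bytes) -> bytes:
        record = account.encode() + b"|" + pubkey
        self.head = hashlib.sha256(self.head + record).digest()
        self.entries.append((account, pubkey, self.head))
        return self.head

    def verify(self) -> bool:
        """Replay the log and check every recorded head still matches."""
        head = b"\x00" * 32
        for account, pubkey, recorded in self.entries:
            head = hashlib.sha256(head + account.encode() + b"|" + pubkey).digest()
            if head != recorded:
                return False
        return True
```

An auditor who periodically records the latest head can detect any later rewrite of an account's key history, which is the core guarantee a transparent key directory aims to provide.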

Looking ahead
It’s still very early days for Key Transparency. With this first open source release, we’re continuing a conversation with the crypto community and other industry leaders, soliciting feedback, and working toward creating a standard that can help advance security for everyone.

We’d also like to thank our many collaborators during Key Transparency’s multi-year development, including the CONIKS team, Open Whisper Systems, as well as the security engineering teams at Yahoo! and internally at Google.

Our goal is to evolve Key Transparency into an open-source, generic, scalable, and interoperable directory of public keys with an ecosystem of mutually auditing directories. We welcome your apps, input, and contributions to this new technology at KeyTransparency.org.

Enigma, The Sequel


Last year we helped launch USENIX Enigma, a conference focused on bringing together security and privacy experts from academia, industry, and public service. After a successful first run, we’re supporting Enigma again this year and looking forward to more great talks and an ever-engaging hallway track!

Our speakers this year include practitioners from many tech industry leaders, researchers and professors at universities from around the world, and civil servants working at agencies in the U.S. and abroad. In addition to sessions focused specifically on software security, spam and abuse, and usability, the program will cover interdisciplinary topics, like the intersection of security and artificial intelligence, neuroscience, and society.

I’m also very proud to have some of my Google colleagues speaking at Enigma:

  • Sunny Consolvo will present results from a qualitative study of the privacy and security practices and challenges of survivors of intimate partner abuse. Sunny will also share how technology creators can better support the victims and survivors of such abuse.
  • Damian Menscher has been battling botnets for a decade at Google and has witnessed the evolution of Distributed Denial-of-Service (DDoS) scaling and attack ingenuity. Damian will describe his operation of a DDoS honeypot, and share specific things he learned while protecting KrebsOnSecurity.com.
  • Emily Schechter will be talking about driving HTTPS adoption on the web. She’ll go over some of the unexpected speed bumps major web sites have encountered, share how Chrome approaches feature changes that encourage HTTPS usage, and discuss what’s next to get to a default encrypted web.
As we did last year, my program co-chair, David Brumley, and I are developing a program full of short, thought-provoking presentations followed by lively discussion. Our program committee has worked closely with each speaker to help them craft the best version of their talk. Everyone is excited to share them, and there’s still time to register! I hope to see many of you in Oakland at USENIX Enigma later this month.

PS: Warning! Self-serving Google notice ahead… We’re hiring! We believe that most security researchers do what they do because they love what they do. What we offer is a place to do what you love—but in the open, on real-world problems, and without distraction. Please reach out to us if you’re interested.