
Introducing VPC Flow Logs—network transparency in near real-time



Logging and monitoring are the cornerstones of network and security operations. Whether it’s performance analysis or network forensics, logging and monitoring let you identify traffic and access patterns that may present security or operational risks to the organization. Today, we’re upping the ante for network operations on Google Cloud Platform (GCP) with the introduction of VPC Flow Logs, increasing transparency into your network and allowing you to track network flows all the way down to an individual virtual interface, in near-real-time.

If you’re familiar with network operations, think of VPC Flow Logs like NetFlow, but with additional features. VPC Flow Logs provides responsive flow-level network telemetry for GCP environments, creating logs in five-second intervals. It also allows you to collect network telemetry at various levels. You can choose to collect telemetry for a particular VPC network or subnet or drill down further to monitor a specific VM Instance or virtual interface.
VPC Flow Logs can capture telemetry data from a wide variety of sources. It can track:

  • Internal VPC traffic 
  • Flows between your VPC and on-premises deployments over both VPNs and Google Cloud Interconnects 
  • Flows between your servers and any internet endpoint 
  • Flows between your servers and any Google services

The logs generated by this process include a variety of data points, including a 5-tuple definition and timestamps, performance metrics such as throughput and RTT, and endpoint definitions such as VPC and geo annotations. VPC Flow Logs natively lets you export this data in a highly secure manner to Stackdriver Logging or BigQuery. Or using Cloud Pub/Sub, you can export these logs to any number of real-time analytics or SIEM platforms.
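For example, here's a minimal sketch of enabling flow logs on a subnet and routing them to BigQuery through a logging sink. The subnet, region, project and dataset names are placeholders, and the log filter is an assumption; check the documentation for the exact log name.

    # Enable flow logs on an existing subnet (a beta gcloud surface at launch).
    gcloud beta compute networks subnets update my-subnet \
        --region us-central1 \
        --enable-flow-logs

    # Route the resulting entries to a BigQuery dataset via a Stackdriver Logging sink.
    gcloud logging sinks create vpc-flows-to-bq \
        bigquery.googleapis.com/projects/my-project/datasets/vpc_flows \
        --log-filter='resource.type="gce_subnetwork"'

The same sink mechanism works with a Cloud Pub/Sub topic as the destination if you'd rather stream the logs to a SIEM.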

Better network and security operations

Having VPC Flow Logs in your toolbox can help you with a wide range of operational tasks. Here are just a few.

  • Network monitoring - VPC Flow Logs allows you to monitor your applications from the perspective of your network. From performance to debugging and troubleshooting, VPC Flow Logs can tell you how your applications are performing, to help you keep them up and running, and identify what changed should an issue arise.
  • Optimizing network usage and egress - By providing visibility into both your application’s inter-region traffic and your traffic usage globally, VPC Flow Logs lets you optimize your network costs by optimizing your bandwidth utilization, load balancing and content distribution.
  • Network forensics and security analytics - VPC Flow Logs also helps you perform network forensics when investigating suspicious behavior, such as access from abnormal sources or unexpected volumes of data movement. The logs also help you ensure compliance.
  • Real-time security analysis - With the Cloud Pub/Sub API, you can easily export your logs into any SIEM ecosystem that you may already be using.

All this happens with near real-time accuracy (updates every 5 seconds vs. minutes), with absolutely no performance impact on your deployment.

Partner ecosystem


One of our key goals with VPC Flow Logs was to allow you to export your flow logs to partner systems for real-time analysis and notifications. At launch, we integrate with two leading logging and analytics platforms: Cisco Stealthwatch and Sumo Logic.
"Our integration with VPC Flow Logs lets customers send their network and security telemetry into Cisco Stealthwatch Cloud without deploying agents or collectors, thereby providing exceptionally fast and easy access to Stealthwatch multicloud security services and a holistic security view across on-premises and public cloud. This integration provides customers with excellent security visibility and threat detection in their GCP environment, and is the latest example of how we are partnering with Google to deliver great value to our joint customers." 
Jeremy Oakey, Senior Director, Product Management, Cisco Cloud Platform and Solutions Group. 

To learn more about VPC Flow Logs, including how to get started and pricing, please visit the documentation and product page.

Exploring container security: Node and container operating systems



Editor’s note: This is the second in a series of blog posts on container security at Google.

When deploying containers, your container images should be free of known vulnerabilities, and have a bare minimum of functionality. This reduces the attack surface, preventing bad actors from taking advantage of unnecessary openings in your infrastructure.

Unlike other deployment mechanisms, with containers, there are actually two operating systems that you need to secure—the container’s node/host OS, i.e., the operating system on which you run the container; and the container image that runs inside the container itself. On Google Kubernetes Engine, our managed container service, as well as for other hosted services on Google Cloud Platform (GCP), we manage the node OS for you. And when it comes to the container image, we give you several options to choose from.

Out of the box, Kubernetes Engine provides the following options for your node OS and container images:

  • For the node OS, you can choose between Container-optimized OS (COS) or Ubuntu 
  • For the container image, Google Container Registry has readily available images for Debian and Ubuntu, and of course, you can also bring your own image!

It’s great to have choices—but choice can also be overwhelming. Let’s take a deeper look at the security properties of these options, and what’s included in Kubernetes Engine.

Node OS: Container-optimized OS (COS) 

Container-optimized OS (COS) is a relatively new OS that we developed to enhance the security and performance of services running in Google Cloud, especially containers. In fact, COS underpins Kubernetes Engine, Cloud SQL, Cloud Machine Learning Engine and several other Google services.

Based on Chromium OS, COS implements several security design principles to provide a manageable platform for running production services. Some of these design aspects include:

  • Minimal OS footprint. COS is optimized to run containers on GCP. As such, we only enable features and include packages that are absolutely necessary to support running containers. Since containers package their own dependencies, this allows us to greatly reduce the OS attack surface and also improves performance.
  • Read-only root filesystem. The COS root filesystem is always mounted as read-only, and its checksum is verified by the kernel on each boot. This means that the kernel refuses to boot if the root filesystem has been tampered with. Several other mounts are non-executable by default as well.
  • Stateless configuration. While having a read-only root filesystem is good for security, it makes the system unusable for all practical purposes (e.g., we need to be able to create and add users in order to log in to the system). To address this, we customized the root filesystem such that /etc/ is stateless. This allows you to write configuration settings at run time, but those settings do not persist across reboots. Thus, every time a COS node reboots, it starts from a clean slate. Certain areas, such as users’ home directories, logs, and Docker images, persist across reboots, as they're not part of the root filesystem.
  • Security-hardened kernel. COS enables several security-hardening kernel features, including some from the ChromiumOS Linux Security Module (LSM). For example, by using a combination of LoadPin (one such LSM that comes from ChromiumOS) and the read-only rootfs and rootfs-verification, you can prevent attackers from compromising the kernel by loading custom kernel modules. Additionally, Linux features such as IMA, AUDIT, APPARMOR, etc. make it difficult to hide attempts at circumventing security policies.
  • Sane security-centric defaults. COS provides another level of hardening simply by providing sane default values for several features. This includes things such as sysctl settings that disable ptrace and unprivileged bpf, a locked down firewall, and so on. These sane defaults, when automatically applied to a fleet of instances, go a long way toward securing the entire cluster/project/organization. 
  • Auto-updates. COS’s automatic updates feature allows timely delivery of security patches to running VMs. When COS is managed by Kubernetes Engine, Node-Auto-Upgrade strikes a balance between security and stability.

In addition to various hardening features in the OS itself, the COS team also employs best practices when developing, building and deploying these OS images to Google Cloud. Some of these include:

  • Built from source at Google. Each package in COS, including the Linux kernel itself, is built from source from ChromiumOS code repositories. This means that we know exactly what is going into the OS, who checked it in, in which version it was introduced, etc. This also lets us quickly patch and update any package in case a vulnerability is discovered, at any level.
  • Continuous vulnerability scanning and response. A CVE-scanning system alerts us whenever a vulnerability is discovered in any component of the OS. The COS team then responds with priority to make patched releases available for our users. The COS team also works with Google’s incident response team to make wider security patches available quickly in COS, e.g., patched COS images were available on Google Cloud before the recent Spectre and Meltdown vulnerabilities were publicly announced.
  • Testing and qualification process. Before a new COS image is published to Google Cloud, it undergoes extensive testing at multiple levels—including kernel fuzz testing by syzkaller, cluster-level Kubernetes tests, and several performance benchmarks. This ensures the stability and quality of our releases.

We are also actively working on several improvements in the area of node-OS security. You can learn more in the COS security documentation.

Kubernetes Engine uses COS as the OS for all master nodes. By default, COS is also used for your workload’s node OS. Unless you have specific requirements, we recommend you use COS for its security properties.
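As a hedged sketch (the cluster name and zone are placeholders), creating a cluster that pins the node image to COS and keeps node auto-upgrade on looks something like this:

    # COS is the default node image in Kubernetes Engine; this pins it explicitly
    # and enables node auto-upgrade for timely security patches.
    gcloud container clusters create my-secure-cluster \
        --zone us-central1-a \
        --image-type COS \
        --enable-autoupgrade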

Container image: Debian and Ubuntu


Similar to our node OS, we maintain our own container images for running hosted services. Google Cloud uses Debian and Ubuntu as base images for services like Google App Engine and Google Cloud Functions. Likewise, Debian and Ubuntu are both popular choices for container images.

From a security perspective, it doesn’t matter which container image you use; the important thing is to scan it regularly for known vulnerabilities. We maintain our Debian and Ubuntu base images with regular patching and testing, and can rebuild them from scratch reproducibly. If you’re building your own containers, you’re welcome to use our base images too!
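As an illustrative sketch only (the image path below is an assumption; check Container Registry for the current base image names), pulling one of these base images for your own build might look like:

    # Pull a Google-maintained Debian base image (path is illustrative).
    docker pull gcr.io/google-appengine/debian9

    # Then reference it from your own Dockerfile:
    #   FROM gcr.io/google-appengine/debian9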

See you next week, as we cover a new topic in our container security series at Google.

Oro: How GCP smoothed our path to PCI DSS compliance



Editor’s note: We recently made a number of security announcements, and today we’re sharing a story from Oro, Inc., which runs its OroCommerce e-commerce service on Google Cloud Platform, and was pleasantly surprised by the ease and speed with which it was able to demonstrate PCI DSS compliance. Read on for Oro’s information security officer’s take on achieving PCI DSS compliance in the cloud.

Building and running an e-commerce website poses many challenges. You want your website to be easy to use, with an attractive design and an intuitive user interface. It must scale during peak seasons like Black Friday and Cyber Monday. But equally, if not more, important is information security. E-commerce websites are frequent targets because they handle financial transactions and payment card industry (PCI) information such as credit and debit card numbers. They also connect into many other systems, so they must meet many strict infosec industry standards.

If you have an e-commerce website, achieving PCI DSS compliance is critical. As a Chief Information Security Officer (CISO), Chief Information Officer (CIO), Chief Technology Officer (CTO) or other infosec specialist, you may be concerned about PCI compliance on cloud infrastructure. Here at Oro, the company behind the OroCommerce B2B eCommerce platform, we addressed our PCI DSS compliance requirements by using Google Cloud Platform (GCP) as our Infrastructure-as-a-Service (IaaS) platform, and we pass the benefits on to our OroCommerce customers. Achieving PCI DSS compliance may not be as easy as googling the closest pizza shop or gas station, but Google Cloud’s IaaS platform certainly simplifies the process, ensuring you have everything needed to be compliant.

Using cloud and IaaS wasn’t always our top choice for building a PCI DSS-compliant website. Initially, our customers were reluctant to put their precious data into another party’s hands and store it somewhere in a foggy cloud. But nowadays, attitudes have changed. GCP provided us with strong support and a variety of tools to help build a PCI DSS compliant solution.
We had an excellent experience partnering and working with Google to complete the PCI DSS certification on our platform-as-a-service (PaaS) that hosts customized OroCommerce sites for Oro customers. We're proud to partner with Google Cloud to offer our customers a secure environment.


Building PCI DSS compliant infrastructure

At its core, building a PCI DSS compliant infrastructure requires two things:

  • The platform used to build your service must be PCI DSS compliant. This is a direct compliance requirement. 
  • Your platform must provide all the tools and methods used to build secure networks.

Google helped with both of these. The first point was easy, since all GCP services are PCI DSS compliant. In addition, Google provided us with a Shared Responsibility document that lists all PCI DSS requirements. This document explains the details of how Google achieves compliance and what Google customers need to do above and beyond that to support a compliant environment. The document not only has legal value; used as a checklist, it's also a practical tool when going for PCI DSS certification.

For example, Google supports PCI DSS requirement #9, which mandates the physical security of a hosting environment including the need for guards, hard disk drive shredders, surveillance, etc. Hearing that Google takes the responsibility to protect both hardware and data from physical theft or damage was very reassuring. We rely on GCP tools to protect against inappropriate access and ensure day-to-day information security.
Another key requirement of a secure network architecture (and PCI DSS) is to hide all internal nodes from external access, control all incoming and outgoing traffic, and use network segregation for different application tiers. OroCommerce fulfills these requirements by using Google’s Virtual Private Cloud, firewall rules, advanced load balancers and Cloud Identity and Access Management (IAM) for authentication control. Google Site Reliability Engineers (SREs) connect securely to production nodes inside the isolated production network using Google’s 2-step verification mechanisms.
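A minimal sketch of that kind of segmentation with VPC firewall rules follows; the network name, tags, IP ranges and priorities are illustrative, not our production configuration.

    # Deny all ingress by default at a low priority...
    gcloud compute firewall-rules create deny-all-ingress \
        --network prod-vpc --direction INGRESS \
        --action DENY --rules all --priority 65534

    # ...then allow HTTPS only to instances tagged as the web tier.
    gcloud compute firewall-rules create allow-https-web \
        --network prod-vpc --direction INGRESS \
        --action ALLOW --rules tcp:443 \
        --target-tags web-tier --priority 1000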

Also, we found that we can safely use Google-provided Compute Engine images based on up-to-date and secure Linux distributions. This frees sysadmins from hardening the OS, so they can pay more attention to vulnerability management and other important tasks.

While the importance of a secure infrastructure, access control, and network configuration is well-known, it’s also important to build and maintain a reliable logging and monitoring system. The PCI DSS standard puts an emphasis on audit trails and logs. To be compliant, you must closely monitor environments for suspicious activity and collect all needed data for a predetermined length of time to investigate any incidents. We found the combination of Stackdriver Monitoring and Logging, plus big data services such as BigQuery, helped us meet our monitoring, storing and log analysis needs. With Stackdriver, we monitor our production systems and detect anomalies in a thorough and timely manner, spending less time on configuration and support. We use BigQuery to analyze our logs so engineers can easily figure out what happened during a particular period of time.
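As a hedged illustration, a log query in BigQuery might look like the sketch below; the dataset and table names are hypothetical, since exported table names depend on your sink configuration.

    # Query exported audit log entries with the bq CLI (table name is hypothetical).
    bq query --use_legacy_sql=false '
    SELECT timestamp, protopayload_auditlog.methodName
    FROM `my-project.audit_logs.cloudaudit_googleapis_com_activity_20180101`
    ORDER BY timestamp DESC
    LIMIT 20'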

Back in 2017 when we started to work on getting PCI DSS compliance for OroCommerce, we expected to spend a huge amount of time and resources on this process. But as we moved forward, we figured out how much GCP helped us to meet our goal. Having achieved PCI DSS compliance, it’s clear that choosing GCP for our infrastructure was the right decision.

Exploring container security: An overview



Containers are increasingly being used to deploy applications, and with good reason, given their portability, simple scalability and lower management burden. However, the security of containerized applications is still not well understood. How does container security differ from that of traditional VMs? How can we use the features of container management platforms to improve security?

This is the first in a series of blog posts that will cover container security on Google Cloud Platform (GCP), and how we help you secure your containers running in Google Kubernetes Engine. The posts in the series will cover the following topics:
  • Container networking security 
  • New security features in Kubernetes Engine 1.10
  • Image security 
  • The container software supply chain 
  • Container runtime security 
  • Multitenancy 
Container security is a huge topic. To kick off the series, here’s an overview of container security and how we think about it at Google.

At Google, we divide container security into three main areas:
  1. Infrastructure security, i.e., does the platform provide the necessary container security features? This is how you use Kubernetes security features to protect your identities, secrets, and network; and how Kubernetes Engine uses native GCP functionality, like IAM, audit logging and networking, to bring the best of Google security to your workloads. 
  2. Software supply chain, i.e., is my container image secure to deploy? This is how you make sure your container images are vulnerability-free, and that the images you built aren't modified before they're deployed. 
  3. Runtime security, i.e., is my container secure to run? This is how you identify a container acting maliciously in production, and take action to protect your workload.
Let’s dive a bit more into each of these.

Infrastructure security


Container infrastructure security is about ensuring that your developers have the tools they need to securely build containerized services. This covers a wide variety of areas, including:
  • Identity, authorization and authentication: How do my users assert their identities in my containers and prove they are who they say they are, and how do I manage these permissions?
    • In Kubernetes, Role-Based Access Control (RBAC) allows the use of fine-grained permissions to control access to resources such as the kubelet. (RBAC is enabled by default since Kubernetes 1.8.)
    • In Kubernetes Engine, you can use IAM permissions to control access to Kubernetes resources at the project level. You can still use RBAC to restrict access to Kubernetes resources within a specific cluster.
  • Logging: How are changes to my containers logged, and can they be audited?
    • In Kubernetes, Audit Logging automatically captures API audit logs. You can configure audit logging based on whether the event is metadata, a request or a request response.
    • Kubernetes Engine integrates with Cloud Audit Logging, and you can view audit logs in Stackdriver Logging or in the GCP Activity console. The most commonly audited operations are logged by default, and you can view and filter these.
  • Secrets: How does Kubernetes store secrets, and how do containerized applications access them?
  • Networking: How should I segment containers in a network, and what traffic flows should I allow?
    • In Kubernetes, you can use network policies to specify how to segment the pod network. When created, network policies define with which pods and endpoints a particular pod can communicate.
    • In Kubernetes Engine, you can create a network policy, currently in beta, and manage these for your entire cluster. You can also create Private Clusters, in beta, to use only private IPs for your master and nodes. (A minimal sketch of RBAC and network policy configuration follows this list.)
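Here is that sketch, a rough illustration of both controls; the user, namespace and label names are placeholders.

    # RBAC: grant a user read-only access to resources in one namespace.
    kubectl create rolebinding dev-pod-reader \
        --clusterrole=view \
        --user=dev@example.com \
        --namespace=dev

    # Network policy: only allow ingress to backend pods from frontend pods.
    kubectl apply -f - <<'EOF'
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: allow-frontend-only
      namespace: dev
    spec:
      podSelector:
        matchLabels:
          app: backend
      ingress:
      - from:
        - podSelector:
            matchLabels:
              app: frontend
    EOF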
These are just some of the tools that Kubernetes uses to secure your cluster the way you want, making it easier to maintain the security of your cluster.

Software supply chain 


Managing the software supply chain, including container image layers that you didn't create, is about ensuring that you know exactly what’s being deployed in your environment, and that it belongs there. In particular, that means giving your developers access to images and packages that are known to be free of vulnerabilities, to avoid introducing known vulnerabilities into your environment.

A container runs on a server's OS kernel but in a sandboxed environment. A container's image typically includes its own operating system tools and libraries. So when you think about software security, there are in fact many layers of images and packages to secure:
  • The host OS, which is running the container 
  • The container image, and any other dependencies you need to run the container. Note that these are not necessarily images you built yourself—container images included from public repositories like Docker Hub also fall into this category 
  • The application code itself, which runs inside the container. This is outside of the scope of container security, but you should follow best practices and scan your code for known vulnerabilities. Be sure to review your code for security vulnerabilities and consider more advanced techniques such as fuzzing to find vulnerabilities. The OWASP Top Ten web application security risks is a good resource for knowing what to avoid. 

Runtime security 


Lastly, runtime security is about ensuring that your security response team can detect and respond to security threats to containers running in your environment. There are a few desirable capabilities here:
  • Detection of abnormal behavior from the baseline, leveraging syscalls, network calls and other available information 
  • Remediation of a potential threat, for example, via container isolation on a different network, pausing the container, or restarting it 
  • Forensics to identify the event, based on detailed logs and the containers’ image during the event 
  • Run-time policies and isolation, limiting what kinds of behavior are allowed in your environment 
All of these capabilities are fairly nascent across the industry, and there are many different ways today to perform runtime security.

A container isn’t a strong security boundary 


There’s one myth worth clearing up: containers do not provide an impermeable security boundary, nor do they aim to. They provide some restrictions on access to shared resources on a host, but they don’t necessarily prevent a malicious attacker from circumventing these restrictions. Although both containers and VMs encapsulate an application, a container is a boundary for the application only, whereas a VM is a boundary for both the application and its resources, including resource allocation.

If you're running an untrusted workload on Kubernetes Engine and need a strong security boundary, you should fall back on the isolation provided by the Google Cloud Platform project. For workloads sharing the same level of trust, you may get by with multi-tenancy, where a container is run on the same node as other containers or another node in the same cluster.

Upcoming talks at KubeCon EU 


In addition to this blog post series, we’ll be giving several talks on container security at KubeCon Europe in Copenhagen. If you’ll be at the show, make sure to add these to your calendar:
Note that everything discussed above is really just focused at the container level; you still need a secure platform underlying this infrastructure, and you need application security to protect the applications you build in containers. To learn more about Google Cloud’s security, see the Google Infrastructure Security Design Overview whitepaper.

Stay tuned for next week’s installment about image security!

Monitor your GCP environment with Cloud Security Command Center



Last week, we announced the release of Cloud Security Command Center (Cloud SCC), a new security data analysis and monitoring platform for Google Cloud Platform (GCP). Cloud SCC, now available in alpha, helps enterprises gather security information, identify threats and take action on them.

As the use of cloud services continues to grow, clear visibility into the security status of an organization’s cloud services and resources is more important than ever. Businesses need the right data and actionable insights to stop threats before security incidents do any damage. Cloud SCC takes inventory of your cloud assets, flags unwanted changes to those assets and uses a number of unique detectors to identify risky areas in your environment. Its findings are populated into a single, centralized dashboard and data platform so that you can quickly get a read on the security health of your cloud applications and data.
Cloud SCC aggregates security information in a single, centralized dashboard
In this blog post, we’ll take a deeper look into the capabilities and features of Cloud Security Command Center.

Gain visibility into your cloud services and resources


Cloud SCC gives enterprises consolidated visibility into their cloud assets across App Engine, Compute Engine, Cloud Storage, and Datastore. Using asset inventory, you can view resources for the entire GCP organization or just for particular projects. Cloud SCC performs ongoing discovery scans, which let you see asset history and understand exactly what changed in your environment, so you can act on unauthorized modifications.
Cloud SCC gives you broad visibility into cloud assets at the org and project level
Cloud SCC also features security “marks” that let you personalize how your security information is displayed, organized and managed in order to meet the unique requirements of your organization. With security marks, you can annotate your assets and then search, select, or filter using the mark—for example, you can filter out projects that you group together using the same mark.

Leverage powerful security insights from Google and leading security partners


Cloud SCC generates curated insights that provide you with a unique view of threats to your cloud assets. For example, security teams can answer questions like “Which cloud storage buckets contain PII?”, “Do I have any buckets that are open to the internet?” and “Which cloud applications are vulnerable to cross-site scripting (XSS)?” With increasingly frequent reports of sensitive data being inadvertently exposed, gaining visibility into these key risk areas is especially important for enterprises. Cloud SCC integrates with Google Cloud security tools and leading security partners to give you these valuable security insights.

Detection from Google

Cloud SCC integrates with a number of Google Cloud security tools. With information from the DLP API, you can find out which storage buckets contain sensitive and regulated data, help prevent unintended exposure, and ensure access is based on need-to-know. You can also pull in information from Cloud Security Scanner, which uncovers common vulnerabilities such as cross-site scripting (XSS) and Flash injection that put your Google App Engine applications at risk. Using Forseti, an open source security toolkit for GCP, you can identify misconfigured access control policies and respond right away.
These Cloud SCC views show permission changes detected by Forseti, an open source GCP security toolkit
Administrators can also identify threats like botnets, cryptocurrency mining, and suspicious network traffic in your projects and virtual machine (VM) instances with built-in anomaly detection developed by the Google security team.
Cloud SCC features built-in anomaly detection from Google to identify threats to your cloud environment
This Cloud SCC "card" shows sensitive data uncovered by the DLP API
Detection from security partners

Using Cloud SCC, you can feed intelligence from your existing security tools such as Cloudflare, CrowdStrike, Dome9, Palo Alto Networks, Qualys, and RedLock into Cloud Security Command Center to help detect DDoS attacks, compromised endpoints, compliance policy violations, network attacks, and instance vulnerabilities and threats. Our partner solutions cover a broad set of enterprise security needs, and we’ll continue to add new partnerships to our network in the future.

Take advantage of an open and flexible platform


Cloud Security Command Center features a REST API which gives you the flexibility to work with your existing security systems and workflows. Using the API, enterprises can easily integrate the full range of their own threat detection capabilities—once the data sources are forwarded to Cloud Security Command Center, they can be viewed just like the Google-provided Command Center detectors. In addition, you can take advantage of the Pub/Sub notification integration to receive Cloud SCC alerts via Gmail, SMS, and Jira.

Try Cloud Security Command Center today


We’re excited to bring the Cloud SCC security monitoring platform to the suite of GCP security services. To learn more, check out the product documentation, or get started today by signing up for the Cloud SCC alpha program.

Getting to know Cloud Armor — defense at scale for internet-facing services



We know that you have the tough job of quickly and responsively serving your users, while also simultaneously defending your internet-facing services from malicious attacks. That’s why we announced Cloud Armor, a new DDoS and application defense service, at the CEO Security Forum in New York last week. It’s based on the same technologies and global infrastructure that we use to protect Google services like Search, Gmail and YouTube.

Economy of scale


Our security experts work around the clock to defend Google’s core services from a wide variety of malicious attacks. Metrics from DDoS attacks targeting Google services over the past decade reveal attack volumes have increased exponentially across several axes: bits per second (bps), packets per second (pps) and HTTP(S) queries per second (qps).

“Absorbing the largest attacks requires the bandwidth needed to watch half a million YouTube videos at the same time... in HD.”  
Dr. Damian Menscher, DDoS Defense, Google
To defend against this threat, we deploy edge infrastructure and security systems to mitigate attacks targeting our services—and this same infrastructure underpins Google Cloud. With global HTTP(S) load balancing, the first Google Cloud Platform (GCP) service to support Cloud Armor, you get built-in defense against infrastructure DDoS attacks. No additional configuration, other than to configure load balancing, is required.

Defense is a collaborative effort.
“We work closely with several industry groups to track emerging threats, allowing us to both protect ourselves and others. In addition, we host krebsonsecurity.com and other frequent targets to ensure we are among the first to see new attack methods. This lets us design defenses and dismantle botnets before they have a chance to grow.”  
Dr. Damian Menscher, DDoS Defense, Google
Sharing resources across Google and Google Cloud services allows us to easily absorb the largest attacks, and also ensure that an attack on one customer doesn’t affect others.

Cloud Armor: Policy driven application defense at scale


Cloud Armor works in conjunction with global HTTP(S) load balancing and enables you to deploy and customize defenses for your internet-facing applications. Similar to global HTTP(S) load balancing, Cloud Armor is delivered at the edge of Google’s network, helping to block attacks close to their source. It's built on three pillars: a policy framework, a rich rules language and global enforcement infrastructure.


"Cloud Armor is a great example of how Google continues to innovate on its pervasive defense-in-depth security strategy, providing a rich layer of security control that can be managed at the network edge."  
 Matt Hite, Network Engineer, Evernote

Cloud Armor features and functionality


With Cloud Armor, you can:
  • defend your services against infrastructure DDoS attacks via HTTP(S) load balancing 
  • configure security policies, specify rules and order of evaluation for these rules 
  • allow, block, preview and log traffic 
  • deploy IP whitelists and blacklists for both IPv4 and IPv6 traffic 
  • create custom rules using a rich rules language to match traffic based on any combination of Layer 3 and HTTP(S) request parameters and allow or block this traffic (in alpha) 
  • enable geolocation-based control, and application-aware defense for SQL Injection (SQLi) and Cross-site Scripting (XSS) attacks (in alpha)
With the above foundation in place, we look forward to expanding Cloud Armor’s capabilities in the coming months.

Cloud Armor security policy framework


Cloud Armor configuration is driven by security policies. To deploy Cloud Armor, you must create a security policy, add rules, and then attach this policy to one or more HTTP(S) load balancing backend services.

A Cloud Armor security policy comprises one or more rules, where each rule specifies the parameters to look for in the traffic, the action to take if the traffic matches these parameters, and a priority value that determines the position of this rule in the policy hierarchy.



Cloud Armor allows you to create multiple policies per project. You can customize the defense for a subset of backend services by creating a policy specifically for these services.
Below, we show how to configure IP blacklists and whitelists using Cloud Armor:
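(A minimal gcloud sketch; the policy name, backend service and IP range are placeholders, and these commands were in beta at the time of writing.)

    # Create a security policy and add a rule blocking a source range.
    gcloud beta compute security-policies create blacklist-policy

    gcloud beta compute security-policies rules create 1000 \
        --security-policy blacklist-policy \
        --src-ip-ranges "203.0.113.0/24" \
        --action deny-403

    # Attach the policy to an HTTP(S) load balancing backend service.
    gcloud beta compute backend-services update my-backend-service \
        --security-policy blacklist-policy --global

To whitelist instead, create rules with --action allow for trusted ranges, plus a lower-priority rule that denies everything else.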


Cloud Armor Rules Language (in alpha)


Cloud Armor rules language enables you to customize defenses for your specific requirements. Attackers often use multiple well-known and custom malicious patterns to attempt to bring your service down. Custom rules enable you to specify the attack patterns to look for in the traffic and then block that traffic at scale.

Here’s an example of a custom rule to defend against an attack seen to be originating from the US and containing a specific cookie and user-agent.

Configuration using the gcloud CLI:
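(A hedged reconstruction; the expression follows the alpha rules language, and the header values, policy name and priority are illustrative.)

    gcloud alpha compute security-policies rules create 900 \
        --security-policy my-policy \
        --expression "origin.region_code == 'US' && request.headers['cookie'].contains('canary-cookie') && request.headers['user-agent'].contains('badbot')" \
        --action deny-403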


Configuration using the console:
For the most common application-aware attacks, Cloud Armor provides two pre-configured rules: Cross-site Scripting (‘xss-canary’) and SQL Injection (‘sqli-canary’) defenses. In the example below, we configure an SQL injection defense rule in the policy “sql-injection-dev” using the gcloud CLI:
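(A hedged sketch of that configuration; the flag surface was in alpha at the time, and the priority value is illustrative.)

    gcloud alpha compute security-policies rules create 800 \
        --security-policy sql-injection-dev \
        --expression "evaluatePreconfiguredExpr('sqli-canary')" \
        --action deny-403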
Below, you can see the SQLi defense rule, along with other rules, in the policy:
You can request Alpha access to these features by signing up using this form.

Visibility into blocked and allowed traffic


You can view the allowed and blocked traffic in Stackdriver as shown below:
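For example, here's a hedged sketch of reading those entries with gcloud; the resource type and payload field names are assumptions, so check the Cloud Armor documentation for the exact schema.

    # List recent requests evaluated by a Cloud Armor policy.
    gcloud logging read \
        'resource.type="http_load_balancer" AND jsonPayload.enforcedSecurityPolicy.name="my-policy"' \
        --limit 10 --format json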

Partner ecosystem


We have a rich ecosystem of security providers who offer solutions that complement Cloud Armor’s capabilities. You can use these in conjunction with global HTTP(S) load balancing and Cloud Armor to build a comprehensive security solution. Learn more about our security partners here.

Get started today


Cloud Armor is for everyone deploying internet-facing services in Google Cloud. Learn more by visiting the Cloud Armor website. We look forward to your feedback!

Take charge of your sensitive data with the Cloud Data Loss Prevention (DLP) API



This week, we announced the general availability of the Cloud Data Loss Prevention (DLP) API, a Google Cloud security service that helps you discover, classify and redact sensitive data at rest and in real-time.

When it comes to properly handling sensitive data, the first step is knowing where it exists in your data workloads. This not only helps enterprises more tightly secure their data, it’s a fundamental component of reducing risk in today’s regulatory environment, where the mismanagement of sensitive information can come with real costs.

The DLP API is a flexible and robust tool that helps identify sensitive data like credit card numbers, social security numbers, names and other forms of personally identifiable information (PII). Once you know where this data lives, the service gives you the option to de-identify that data using techniques like redaction, masking and tokenization. These features help protect sensitive data while allowing you to still use it for important business functions like running analytics and customer support operations. On top of that, the DLP API is designed to plug into virtually any workload—whether in the cloud or on-prem—so that you can easily stream in data and take advantage of our inspection and de-identification capabilities.

In light of data privacy regulations like GDPR, it’s important to have tools that can help you uncover and secure personal data. The DLP API is also built to work with your sensitive workloads and is supported by Google Cloud’s security and compliance standards. For example, it’s a covered product under our Cloud HIPAA Business Associate Agreement (BAA), which means you can use it alongside our healthcare solutions to help secure PII.

To illustrate how easy it is to plug DLP into your workloads, we’re introducing a new tutorial that uses the DLP API and Cloud Functions to help you automate the classification of data that’s uploaded to Cloud Storage. This function uses DLP findings to determine what action to take on sensitive files, such as moving them to a restricted bucket to help prevent accidental exposure.

In short, the DLP API is a useful tool for managing sensitive data—and you can take it for a spin today for up to 1 GB at no charge. Now, let’s take a deeper look at its capabilities and features.

Identify sensitive data with flexible predefined and custom detectors

Backed by a variety of techniques including machine learning, pattern matching, mathematical checksums and context analysis, the DLP API provides over 70 predefined detectors (or “infoTypes”) for sensitive data like PII and GCP service account credentials.

You can also define your own custom types using:
  • Dictionaries — find new types or augment the predefined infoTypes 
  • Regex patterns — find your own patterns and define a default likelihood score 
  • Detection rules — enhance your custom dictionaries and regex patterns with rules that can boost or reduce the likelihood score based on nearby context or indicator hotwords like “banking,” “taxpayer,” and “passport.”

Stream data from virtually anywhere

Are you building a customer support chat app and want to make sure you don’t inadvertently collect sensitive data? Do you manage data that’s on-prem or stored on another cloud provider? The DLP API “content” mode allows you to stream data from virtually anywhere. This is a useful feature for working with large batches to classify or dynamically de-identify data in real-time. With content mode, you can scan data before it’s stored or displayed, and control what data is streamed to where.
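For instance, here's a minimal content-mode sketch against the v2 REST API; the project ID and sample text are placeholders.

    # Inspect a snippet of streamed text for two predefined infoTypes.
    curl -s -X POST \
        -H "Authorization: Bearer $(gcloud auth print-access-token)" \
        -H "Content-Type: application/json" \
        "https://dlp.googleapis.com/v2/projects/my-project/content:inspect" \
        -d '{
          "item": {"value": "Hi, my email is jane.doe@example.com"},
          "inspectConfig": {
            "infoTypes": [{"name": "EMAIL_ADDRESS"}, {"name": "US_SOCIAL_SECURITY_NUMBER"}],
            "minLikelihood": "POSSIBLE"
          }
        }'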

Native discovery for Google Cloud storage products

The DLP API has native support for data classification in Cloud Storage, Cloud Datastore and BigQuery. Just point the API at your Cloud Storage bucket or BigQuery table, and we handle the rest. The API supports:
  • Periodic scans — trigger a scan job to run daily or weekly 
  • Notifications — launch jobs and receive Cloud Pub/Sub notifications when they finish; this is great for serverless workloads using Cloud Functions
  • Integration with Cloud Security Command Center (alpha)
  • SQL data analysis — write the results of your DLP scan into the BigQuery dataset of your choice, then use the power of SQL to analyze your findings. You can build custom reports in Google Data Studio or export the data to your preferred data visualization or analysis system.
A summary report of DLP findings on recent scans


Redact data from free text and structured data at the same time

With the DLP API, you can stream unstructured free text, use our powerful classification engine to find different sensitive elements and then redact them according to your needs. You can also stream in tabular text and redact it based on the record types or column names. Or do both at the same time, while keeping integrity and consistency across your data. For example, a social security number classified in a free-text comment field and in a structured column will generate the same token or hash.
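A minimal sketch of the v2 content:deidentify method, replacing each finding with its infoType name; the project ID and sample text are placeholders.

    curl -s -X POST \
        -H "Authorization: Bearer $(gcloud auth print-access-token)" \
        -H "Content-Type: application/json" \
        "https://dlp.googleapis.com/v2/projects/my-project/content:deidentify" \
        -d '{
          "item": {"value": "SSN 123-45-6789 called about a billing issue"},
          "inspectConfig": {"infoTypes": [{"name": "US_SOCIAL_SECURITY_NUMBER"}]},
          "deidentifyConfig": {
            "infoTypeTransformations": {
              "transformations": [{
                "primitiveTransformation": {"replaceWithInfoTypeConfig": {}}
              }]
            }
          }
        }'

The response returns the same text with the number replaced by its infoType name.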

Extend beyond redaction with a full suite of de-identification tools


From simple redaction to more advanced format-preserving tokenization, the DLP API offers a variety of techniques to help you redact sensitive elements from your data while preserving its utility.

Below are a few supported techniques:


  • Replacement: Replaces each input value with its infoType name or a custom value 
  • Redaction: Redacts a value by removing it 
  • Mask or partial mask: Masks a string fully or partially by replacing a given number of characters with a specified fixed character 
  • Pseudonymization with cryptographic hash: Replaces input values with a string generated using a given data encryption key 
  • Pseudonymization with format-preserving token: Replaces an input value with a “token,” or surrogate value, of the same length using format-preserving encryption (FPE) with the FFX mode of operation 
  • Bucket values: Masks input values by replacing them with “buckets,” or ranges within which the input value falls 
  • Extract time data: Extracts or preserves a portion of dates or timestamps

The Cloud DLP API can also handle standard bitmap images such as JPEGs and PNGs. Using optical character recognition (OCR) technology, the DLP API analyzes the text in images to return findings or generate a new image with the sensitive findings blocked out.

Measure re-identification risk with k-anonymity and l-diversity


Not all sensitive data is immediately obvious like a social security number or credit card number. Sometimes you have data where only certain values or combinations of values identify an individual, for example, a field containing information about an employee's job title doesn’t identify most employees. However, it does single out individuals with unique job titles like "CEO" where there’s only one employee with this title. Combined with other fields such as company, age or zip code, you may arrive at a single, identifiable individual. To help you better understand these kinds of quasi-identifiers, the DLP API provides a set of statistical risk analysis metrics. For example, risk metrics such as k-anonymity can help identify these outlier groups and give you valuable insights into how you might want to further de-identify your data, perhaps by removing rows and bucketing fields.
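A hedged sketch of launching a k-anonymity risk job over a BigQuery table; the project, dataset, table and quasi-identifier fields are placeholders.

    curl -s -X POST \
        -H "Authorization: Bearer $(gcloud auth print-access-token)" \
        -H "Content-Type: application/json" \
        "https://dlp.googleapis.com/v2/projects/my-project/dlpJobs" \
        -d '{
          "riskJob": {
            "sourceTable": {
              "projectId": "my-project",
              "datasetId": "hr",
              "tableId": "employees"
            },
            "privacyMetric": {
              "kAnonymityConfig": {
                "quasiIds": [{"name": "job_title"}, {"name": "zip_code"}, {"name": "age"}]
              }
            }
          }
        }'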

Use k-anonymity to help find identifiable individuals in your datasets


Integrate the DLP API into your workloads across the cloud ecosystem


The DLP API is built to be flexible and scalable, and includes several features to help you integrate it into your workloads, wherever they may be.

  • DLP templates — Templates allow you to configure and persist how you inspect your data and define how you want to transform it. You can then simply reference the template in your API calls and workloads, allowing you to easily update templates without having to redeploy new API calls or code.
  • Triggers — Triggers allow you to set up jobs to scan your data on a periodic basis, for example, daily, weekly or monthly. 
  • Actions — When a large scan job is done, you can configure the DLP API to send a notification with Cloud Pub/Sub. This is a great way to build a robust system that plays well within a serverless, event-driven ecosystem.

The DLP API can also integrate with our new Cloud Security Command Center (alpha), a security and data risk platform for Google Cloud Platform that helps enterprises gather data, identify threats, and act on them before they result in business damage or loss. Using the DLP API, you can find out which storage buckets contain sensitive and regulated data, help prevent unintended exposure, and ensure access is based on need-to-know. Click here to sign up for the Cloud Security Command Center alpha.
The DLP API integrates with Cloud Security Command Center to surface risks associated with sensitive data in GCP
Sensitive data is everywhere, but the DLP API can help make sure it doesn’t go anywhere it’s not supposed to. Watch this space for future blog posts that show you how to use the DLP API for specific use cases.

Building trust through Access Transparency



Auditability ranks at the top of cloud adopters’ security requirements. According to an MIT Sloan Management Review survey of more than 500 IT and business executives, 87% of respondents cited auditability as an important factor in evaluating cloud security—second only to a provider’s ability to prevent data compromises. While Google’s Cloud Audit Logging and similar products help answer the question of which of your administrators did what, where, when and why on your cloud objects, you’ve traditionally lost this audit trail once support is engaged. This is why we’re pleased to introduce Access Transparency, a new logs product unique to Google Cloud Platform (GCP) that provides an audit trail of actions taken by Google Support and Engineering when they interact with your data and system configurations on Google Cloud.

Access Transparency logs are available in beta for Compute Engine, App Engine, Cloud Identity and Access Management, Cloud Key Management Service, Cloud Storage and Persistent Disks—with more services becoming available throughout the year. Together, Cloud Audit Logs and Access Transparency logs provide a more comprehensive view of admin activity in your cloud deployment.

Expanding your visibility with Access Transparency 


In the limited situations that access by Google employees does occur, Access Transparency logs are generated in near-real time and delivered to your Stackdriver Logging console in the same manner as Cloud Audit Logs. The logs not only show what resources were accessed and the operations performed, they also show the justification for that action. For example, they may include the ticket number you filed with support asking for help.

You can also choose to export your Access Transparency logs into BigQuery and Cloud Storage, or to other tools in your existing audit pipeline through Cloud Pub/Sub. This allows you to integrate with your existing audit pipeline, where you may already be exporting your Cloud Audit Logs. You can then audit your Access Transparency logs with a combination of automated and manual review, in the same way you would with audit logs of your own internal activity.
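As a hedged sketch (the log filter is an assumption; consult the Access Transparency documentation for the exact log name), a BigQuery export sink might look like:

    gcloud logging sinks create access-transparency-sink \
        bigquery.googleapis.com/projects/my-project/datasets/atx_logs \
        --log-filter='logName:"access_transparency"'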

Enabled by industry-leading data protection controls 


At Google Cloud, our philosophy is that our customers own their data, and we do not access that data for any reason other than those necessary to fulfill our contractual obligations to you. Technical controls require valid business justifications for any access to your content by support or engineering personnel. These structured justifications are used to generate your Access Transparency logs. Google also performs regular audits of accesses by administrators as a check on the effectiveness of our controls.

This system is built around limiting what employees can do, with multi-step processes to minimize the likelihood of misjudgment, and transparency to allow review of actions. Feedback loops also exist between Google’s audits and customer feedback to continue improving our processes and further limit the need to access your data in order to solve your problems.

Getting started with Access Transparency 


Access Transparency is available at no additional charge to customers with Platinum or Gold Support coverage; however, spaces in our beta are limited. To apply for access, use our signup form. To find out more about Access Transparency, read the Access Transparency documentation, or contact your dedicated support representative.

Access Transparency also continues to be available through SAP’s Data Custodian solution, which uses Access Transparency and other logs to support a managed GRC solution for your GCP deployments. For more information on Data Custodian, visit the SAP website.

Introducing new ways to protect and control your GCP services and data



They say security is a process, not a destination, and that certainly rang true as we prepared for today’s CEO Security Forum in New York, where we’re making more than 20 security announcements across the Google Cloud portfolio.

When it comes to Google Cloud Platform (GCP), our goal is to continuously improve on the strong foundation that we’ve built over the years, and help you build out a secure, scalable environment. Here’s an overview of our GCP-related news. Stay tuned over the coming days for deeper dives into some of these new products.

1. Keep sensitive data private with VPC Service Controls Alpha


If your organization is looking to take advantage of the fully managed GCP technologies for big data analytics, data processing and storage, but has hesitated to put sensitive data in the cloud outside your secured network, our new VPC Service Controls provide an additional layer of protection to help keep your data private.

Currently in alpha, VPC Service Controls create a security perimeter around data stored in API-based GCP services such as Google Cloud Storage, BigQuery and Bigtable. This helps mitigate data exfiltration risks stemming from stolen identities, IAM policy misconfigurations, malicious insiders and compromised virtual machines.

With this managed service, enterprises can configure private communication between cloud resources and hybrid VPC networks using Cloud VPN or Cloud Dedicated Interconnect. By expanding perimeter security from on-premise networks to data stored in GCP services, enterprises can feel confident about storing their data in the cloud and accessing it from an on-prem environment or cloud-based VMs.

VPC Service Controls also take security a step further with context-aware access control for your cloud resources using the Access Context Manager feature. Enterprises can create granular access control policies in Access Context Manager based on attributes like user location and IP address. These policies help ensure the appropriate security controls are in place when granting access to cloud resources from the internet.

Google Cloud is the first cloud provider to offer virtual security perimeters for API-based services with simplicity, speed and flexibility that far exceed what most organizations can achieve in a physical, on-premises environment.

Get started by signing up for the upcoming VPC Service Controls beta.

2. Get insight into data and application risk with Cloud Security Command Center Alpha


As organizations entrust more applications to the cloud, it can be tough to understand the extent of your cloud assets and the risks against them. The new Cloud Security Command Center (Cloud SCC), currently in alpha, lets you view and monitor an inventory of your cloud assets, scan storage systems for sensitive data, detect common web vulnerabilities and review access rights to your critical resources—all from a single, centralized dashboard.

Cloud SCC provides deep views into the security status and health of GCP services such as App Engine, Compute Engine, Cloud Storage, and Cloud Datastore. It integrates with the DLP API to help identify sensitive information, and with Google Cloud Security Scanner to uncover vulnerabilities such as cross-site scripting (XSS) and Flash injection. Use it to manage access control policies, receive alerts about unexpected changes through an integration with Forseti, the open-source GCP security toolkit, and detect threats and suspicious activity with Google anomaly detection as well as security partners such as Cloudflare, CrowdStrike, Dome9, Palo Alto Networks, Qualys and RedLock.

Get started by signing up for the Cloud SCC alpha program.

3. Expand your visibility with Access Transparency


Trust is paramount when choosing a cloud provider. We want to be as open and transparent as possible, allowing customers to see what happens to their data. Now, with Access Transparency, we’ll provide you with an audit log of authorized administrative accesses by Google Support and Engineering, as well as justifications for those accesses, for many GCP services, and we’ll be adding more throughout the year. With Access Transparency, we can continue to maintain high performance and reliability for your environment while remaining accountable to the trust you place in our service.

Access Transparency logs are generated in near-real time, and appear in your Stackdriver Logging console in the same way that your Cloud Audit Logs do, with the same protections you would expect from audit-grade logs. You can export Access Transparency logs into BigQuery or Cloud Storage for storage and archiving, or via Cloud Pub/Sub into your existing audit pipeline or SIEM tooling for further investigation and review.

The combination of Google Cloud Audit Logs and Access Transparency logs gives you a more comprehensive view into administrative activity in your GCP environment.

To learn more about Access Transparency, visit the product page, where you can find out more and sign up for the beta program.

4. Avoid Denial of Service with Cloud Armor


If you run internet-facing services or apps, you have a tough job: quickly and responsively serving traffic to your end users, while simultaneously protecting against malicious attacks trying to take your services down. Here at Google, we’re no stranger to this phenomenon, and so today, we’re announcing Cloud Armor, a Distributed Denial of Service (DDoS) and application defense service that’s based on the same technologies and global infrastructure that we use to protect services like Search, Gmail and YouTube.

Global HTTP(S) Load Balancing provides built-in defense against Infrastructure DDoS attacks. No additional configuration, other than to configure load balancing, is required to activate this DDoS defense. Cloud Armor works with Cloud HTTP(S) Load Balancing, provides IPv4 and IPv6 whitelisting/blacklisting, defends against application-aware attacks such as cross-site scripting (XSS) and SQL injection (SQLi), and delivers geography-based access control.

A sophisticated rules language and global enforcement engine underpin Cloud Armor, enabling you to create custom defenses, with any combination of Layer 3 to Layer 7 parameters, against multivector attacks (combination of two or more attack types). Cloud Armor gives you visibility into your blocked and allowed traffic by sending information to Stackdriver Logging about each incoming request and the action taken on that request by the Cloud Armor rule.

Learn more by visiting the Cloud Armor product page.

5. Discover, classify and redact sensitive data with the DLP API


Sensitive information is a fact of life. The question is—how do you identify it and help ensure it’s protected? Enter the Cloud Data Loss Prevention (DLP) API, a managed service that lets you discover, classify and redact sensitive information stored in your organization’s digital assets.

First announced last year, the DLP API is now generally available. And because the DLP API is, well, an API, you can use it on virtually any data source or business application, whether it’s on GCP services like Cloud Storage or BigQuery, a third-party cloud, or in your on-premises data center. Furthermore, you can use the DLP API to detect (and, just as importantly, redact) sensitive information in real-time, as well as in batch-mode against static datasets.

Our goal is to make the DLP API an extensible part of your security arsenal. Since it was first announced, we’ve added several new detectors, including one to identify service account credentials, as well as the ability to build your own detectors based on custom dictionaries, patterns and context rules.

Learn more by visiting the DLP API product page.

6. Provide simple, secure access for any user to cloud applications from any device with Cloud Identity


Last summer we announced Cloud Identity, a built-in service that allows organizations to easily manage users and groups who need access to GCP resources. Our new, standalone Cloud Identity product is a full Identity as a Service (IDaaS) solution that adds premium features such as enterprise security, application management and device management to provide enterprises with simple, secure access for any user to cloud applications from any device.

Cloud Identity enables and accelerates the use of cloud-centric applications and services, while offering capabilities that meet customer organizations where they are with their on-premises IAM systems and apps.

Learn more by visiting the Cloud Identity product page.

7. Extending the benefits of GCP security to U.S. federal, state and local government customers through FedRAMP authorization


While Google Cloud goes to great lengths to document the security capabilities of our infrastructure and platforms, third-party validation always helps. We’re pleased to announce that GCP, and Google’s underlying common infrastructure, have received the FedRAMP Rev. 4 Provisional Authorization to Operate (P-ATO) at the Moderate Impact level from the FedRAMP Joint Authorization Board (JAB). GCP’s certification encompasses data centers in many countries, so customers can take advantage of this certification from multiple Google Cloud regions.

Agencies and federal contractors can request access to our FedRAMP package by submitting a FedRAMP Package Access Request Form.

8. Take advantage of new security partnerships

In addition to today’s security announcements, we’ve been working with several security companies to offer additional solutions that complement GCP’s capabilities. You can read about these partnerships in more detail in our partnerships blog post.

Today we’re announcing new GCP partner solutions, including:
  • Dome9 has developed a compliance test suite for the Payment Card Industry Data Security Standard (PCI DSS) in the Dome9 Compliance Engine. 
  • Rackspace Managed Security provides businesses with fully managed security on top of GCP. 
  • RedLock’s Cloud 360 Platform is a cloud threat defense security and compliance solution that provides additional visibility and control for Google Cloud environments.

As always, we’re thrilled to share the fruits of our experience running and protecting some of the world’s most popular web applications. And we’re honored that you’ve chosen to make GCP your home in the cloud. For more on today’s security announcements, read our posts on the Google Cloud blog, the G Suite blog and the Connected Workspaces blog.

Expanding our Google Cloud security partnerships



As we discussed in today’s blog post, security is top of mind for many businesses as they move to the cloud. To help more businesses take advantage of the cloud’s security benefits, we’re working with several leading security providers to offer solutions that complement Google Cloud Platform’s capabilities, and enable customers to leverage their existing tools from these vendors in the cloud. These partner solutions cover a broad set of enterprise security needs, such as advanced threat prevention, compliance, container security, managed security services and more.

Today, we’re announcing new partnerships, new solutions by existing partners and new partner integrations in our Cloud Security Command Center (Cloud SCC), currently in alpha. Here’s a little more on what each of these partnerships will offer:

Auth0 offers the ability to secure cloud endpoints and seamlessly implement secure identity management into customer products and services. Whether the goal is to add additional authentication sources like social login, migrate users without requiring password resets or add multi-factor authentication, Auth0 provides a range of services to accomplish many identity-related tasks. Auth0’s platform supports multiple use cases (B2B, B2C, B2E, IoT, API) and integrates into existing tech stacks.

Check Point can now secure multiple VPCs using a single CloudGuard security gateway to protect customer applications. A single CloudGuard gateway can monitor traffic in and out of more than one VPC network at any given time, providing a more efficient and scalable solution for running and securing workloads.

Cloudflare Web Application Firewall helps prevent attackers from compromising sensitive customer data, and helps protect customers from common vulnerabilities like SQL injection, cross-site scripting and cross-site request forgery. Additionally, its integration with the Cloud Security Command Center (currently in alpha) combines Cloudflare’s intelligence with Google security and data risk insights to give customers a holistic view of their security posture.

Dome9 has developed a compliance test suite for the Payment Card Industry Data Security Standard (PCI DSS) in the Dome9 Compliance Engine. Using the Compliance Engine, Google Cloud customers can assess the compliance posture of their projects, identify risks and gaps, fix issues such as overly permissive firewall rules, enforce compliance requirements and demonstrate compliance in audits. The integration between the Dome9 Arc platform and the Cloud Security Command Center allows customers to consume and explore the results of assessments of the Dome9 Compliance Engine directly from Cloud SCC.

Fortinet provides scalable network protection for workloads in Google Cloud Platform (GCP). Its FortiGate next-generation firewall delivers advanced security, and its Fortinet Security Fabric integration enables single-pane-of-glass visibility and policy across on-premises workloads and GCP for consistent hybrid cloud security.

Palo Alto Networks VM-Series Next Generation Firewall helps customers to securely migrate their applications and data to GCP, protecting them through application whitelisting and threat prevention policies. Native automation features allow developers and cloud architects to create “touchless” deployments and policy updates, effectively embedding the VM-Series into the application development workflow. The VM-Series on GCP can be configured to forward threat prevention, URL Filtering and WildFire logs of high severity to the Cloud Security Command Center to provide a consolidated view of a customer’s GCP security posture.

Qualys provides vulnerability assessments for Google Compute Engine instances. Users can see their vulnerability posture at a glance and drill down for details and actionable intelligence on the vulnerabilities identified. Customers can get this visibility within the Cloud Security Command Center by deploying the lightweight Qualys agent, either baked into instance images or deployed directly onto running Compute Engine instances.

Rackspace Managed Security and Compliance Assistance provides additional active security on GCP to detect and respond to advanced cyber threats. Rackspace utilizes pre-approved actions to promptly remediate security incidents. It also complements the strategic planning, architectural guidance and 24x7x365 operational support available through Managed Services for GCP.

RedLock Cloud 360 Platform is a cloud threat defense security and compliance solution that provides additional visibility and control for GCP. RedLock collects and correlates disparate data sets from Google Cloud to determine the risk posture of a customer’s environment, then employs risk-scoring algorithms to help prioritize and remediate the highest risks. RedLock’s integration with the Cloud Security Command Center provides customers with centralized visibility into security and compliance risks. As part of the integration, RedLock periodically scans a customer’s Google Cloud environments and sends findings on resource misconfigurations, compliance violations, network security risks and anomalous user activities to Cloud SCC.

StackRox augments Google Kubernetes Engine’s built-in security functions with a deep focus on securing the container runtime environment. StackRox’s core capabilities include network discovery and visualization of the application, detection of adversarial actions, and machine learning-based detection of new attacks.

The Sumo Logic Machine Data Analytics Platform offers enterprise-class monitoring, troubleshooting and security for mission-critical cloud applications. The platform integrates directly with GCP services through Google Stackdriver to collect audit and operational data in real time, so customers can monitor and troubleshoot Google VPC, Cloud IAM, Cloud Audit, App Engine, Compute Engine, Cloud SQL, BigQuery, Cloud Storage, Kubernetes Engine and Cloud Functions, with more coming soon.

To learn more about our partner program, or to find a partner, visit our partner page.