Tag Archives: Announcements

App Engine firewall now generally available



Securing applications in the cloud is critical for a variety of reasons: restricting access to trusted sources, protecting user data and limiting your application's bandwidth usage in the face of a DoS attack. The App Engine firewall lets you control access to your App Engine app through a set of rules, and is now generally available, ready to secure access to your production applications. Simply set up an application, provide a list of IP ranges to deny or allow, and App Engine does the rest.

With this release, you can now use the App Engine firewall's IPv4 and IPv6 address filtering to enforce more comprehensive security policies, without requiring developers to modify their applications.

We have received lots of great feedback from our customers and partners about the security provided by the App Engine firewall, including Reblaze and Cloudflare:
"Thanks to the newly released App Engine firewall, Reblaze can now prevent even the most resourceful hacker from bypassing our gateways and accessing our customers’ App Engine applications directly. This new feature enables our customers to take advantage of Reblaze's comprehensive web security (including DDoS protection, WAF/IPS, bot mitigation, full remote management, etc.) on App Engine." 
 Tzury Bar Yochay, CTO of Reblaze Technologies
"With the App Engine firewall, our customers can lock down their application to only accept traffic from Cloudflare IPs. Because Cloudflare uses a reverse-proxy server, this integration further prevents direct access to an application’s origin servers and allows Cloudflare to filter and block malicious activity." 
 Travis Perkins, Head of Alliances at Cloudflare

Simple and effective 


Getting started with the App Engine firewall is easy. You can set up rules in the Google Cloud Platform Console, via REST requests in the App Engine Admin API, or with our gcloud CLI.

For example, let's say you have an application that's being attacked by several addresses on a rogue network. First, get the IP addresses from your application’s request logs. Then, add a deny rule for the rogue network to the firewall. Make sure the default rule is set to allow so that other users can still access the application.
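The steps above can be sketched with the gcloud CLI; the rogue network 192.0.2.0/24 (a range reserved for documentation) and the rule priority are illustrative:

```shell
# Deny all traffic from the rogue network; rules with smaller
# priority values are evaluated first.
gcloud app firewall-rules create 100 \
    --action deny \
    --source-range "192.0.2.0/24" \
    --description "Block rogue net"

# Make sure the default rule still allows everyone else.
gcloud app firewall-rules update default --action allow

# List the rules to confirm.
gcloud app firewall-rules list
```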

And that's it! No need to modify and redeploy the application; access is now restricted to your whitelisted IP addresses. Requests from IP addresses that match a deny rule receive an HTTP 403 response before they reach your app, which means that your app won't spin up additional instances or be charged for handling them.

Verify rules for any IP


Some applications may have complex rulesets, making it hard to determine whether an IP will be allowed or denied. In the Cloud Console, the Test IP tab allows you to enter an IP and see if your firewall will allow or deny the request.

Here, we want to make sure an internal developer IP is allowed. However, when we test the IP, we can see that the "rogue net" blocking rule takes precedence.
Rules are evaluated in priority order, with the first match being applied, so we can fix this by allowing the developer IP with a smaller priority value than the blocked network it lies within.
Another check, and we can see it's working as intended.
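The same check and fix are available from the command line; the developer address 198.51.100.7 and the priority values here are illustrative:

```shell
# Ask the firewall which rule would match this address.
gcloud app firewall-rules test-ip 198.51.100.7

# Allow the developer address with a smaller (higher-precedence)
# priority value than the deny rule for the network containing it.
gcloud app firewall-rules create 50 \
    --action allow \
    --source-range "198.51.100.7" \
    --description "Internal developer"
```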
For more examples and details, check out the full App Engine firewall documentation.

We'd like to thank all you beta users who gave us feedback, and encourage anyone with questions, concerns or suggestions to reach out to us by reporting a public issue, posting in the App Engine forum, or messaging us on the App Engine slack channel.

Introducing Grafeas: An open-source API to audit and govern your software supply chain



Building software at scale requires strong governance of the software supply chain, and strong governance requires good data. Today, Google, along with JFrog, Red Hat, IBM, Black Duck, Twistlock, Aqua Security and CoreOS, is pleased to announce Grafeas, an open source initiative to define a uniform way for auditing and governing the modern software supply chain. Grafeas (“scribe” in Greek) provides organizations with a central source of truth for tracking and enforcing policies across an ever growing set of software development teams and pipelines. Build, auditing and compliance tools can use the Grafeas API to store, query and retrieve comprehensive metadata on software components of all kinds.

As part of Grafeas, Google is also introducing Kritis, a Kubernetes policy engine that helps customers enforce more secure software supply chain policies. Kritis (“judge” in Greek) enables organizations to do real-time enforcement of container properties at deploy time for Kubernetes clusters based on attestations of container image properties (e.g., build provenance and test status) stored in Grafeas.
“Shopify was looking for a comprehensive way to track and govern all the containers we ship to production. We ship over 6,000 builds every weekday and maintain a registry with over 330,000 container images. By integrating Grafeas and Kritis into our Kubernetes pipeline, we are now able to automatically store vulnerability and build information about every container image that we create and strictly enforce a built-by-Shopify policy: our Kubernetes clusters only run images signed by our builder. Grafeas and Kritis actually help us achieve better security while letting developers focus on their code. We look forward to more companies integrating with the Grafeas and Kritis projects.”  
Jonathan Pulsifer, Senior Security Engineer at Shopify. (Read more in Shopify’s blog post.)

The challenge of governance at scale 


Securing the modern software supply chain is a daunting task for organizations both large and small, exacerbated by several trends:

  • Growing, fragmented toolsets: As an organization grows in size and scope, it tends to use more development languages and tools, making it difficult to maintain visibility and control of its development lifecycle. 
  • Open-source software adoption: While open-source software makes developers more productive, it also complicates auditing and governance. 
  • Decentralization and continuous delivery: The move to decentralize engineering and ship software continuously (e.g., “push on green”) accelerates development velocity, but makes it difficult to follow best practices and standards. 
  • Hybrid cloud deployments: Enterprises increasingly use a mix of on-premises, private and public cloud clusters to get the best of each world, but find it hard to maintain 360-degree visibility into operations across such diverse environments. 
  • Microservice architectures: As organizations break down large systems into container-based microservices, it becomes harder to track all the pieces.

As a result, organizations generate vast quantities of metadata, all in different formats from different vendors, stored in many different places. Without uniform metadata schemas or a central source of truth, CIOs struggle to govern their software supply chains, let alone answer foundational questions like: “Is software component X deployed right now?” “Did all components deployed to production pass required compliance tests?” and “Does vulnerability Y affect any production code?” 

The Grafeas approach 

Grafeas offers a central, structured knowledge-base of the critical metadata organizations need to successfully manage their software supply chains. It reflects best practices Google has learned building internal security and governance solutions across millions of releases and billions of containers. These include:

  • Using immutable infrastructure (e.g., containers) to establish preventative security postures against persistent advanced threats 
  • Building security controls into the software supply chain, based on comprehensive component metadata and security attestations, to protect production deployments 
  • Keeping the system flexible and ensuring interoperability of developer tools around common specifications and open-source software

Grafeas is designed from the ground up to help organizations apply these best practices in modern software development environments, using the following features and design points:

  • Universal coverage: Grafeas stores structured metadata against the software component’s unique identifier (e.g., container image digest), so you don’t have to co-locate it with the component’s registry, and so it can store metadata about components from many different repositories. 
  • Hybrid cloud-friendly: Just as you can use JFrog Artifactory as the central, universal component repository across hybrid cloud deployments, you can use the Grafeas API as a central, universal metadata store. 
  • Pluggable: Grafeas makes it easy to add new metadata producers and consumers (for example, if you decide to add or change security scanners, add new build systems, etc.) 
  • Structured: Structured metadata schemas for common metadata types (e.g., vulnerability, build, attestation and package index metadata) let you add new metadata types and providers, and the tools that depend on Grafeas can immediately understand those new sources. 
  • Strong access controls: Grafeas allows you to carefully control access for multiple metadata producers and consumers. 
  • Rich query-ability: With Grafeas, you can easily query all metadata across all of your components so you don’t have to parse monolithic reports on each component.

Defragmenting and centralizing metadata 

At each stage of the software supply chain (code, build, test, deploy and operate), different tools generate metadata about various software components. Examples include the identity of the developer, when the code was checked in and built, what vulnerabilities were detected, what tests were passed or failed, and so on. This metadata is then captured by Grafeas. See the image below for a use case of how Grafeas can provide visibility for software development, test and operations teams as well as CIOs.
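As a concrete sketch of what querying this central metadata store might look like, here is how a tool could list occurrences for a project against a Grafeas server. The local server address, project name, and the exact v1alpha1 filter syntax and kind name are assumptions for illustration, following the project's reference implementation:

```shell
# List all metadata occurrences recorded for a project.
curl http://localhost:8080/v1alpha1/projects/my-project/occurrences

# Narrow the results to vulnerability metadata only
# (filter expression syntax per the Grafeas API spec).
curl -G http://localhost:8080/v1alpha1/projects/my-project/occurrences \
    --data-urlencode 'filter=kind="PACKAGE_VULNERABILITY"'
```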

To give a comprehensive, unified view of this metadata, we built Grafeas to promote cross-vendor collaboration and compatibility; we’ve released it as open source, and are working with contributors from across the ecosystem to further develop the platform:

  • JFrog is implementing Grafeas in the JFrog Xray API and will support hybrid cloud workflows that require metadata in one environment (e.g., on-premises in Xray) to be used elsewhere (e.g., on Google Cloud Platform). Read more on JFrog’s blog
  • Red Hat is planning on enhancing the security features and automation of Red Hat Enterprise Linux container technologies in OpenShift with Grafeas. Read more on Red Hat’s blog
  • IBM plans to deliver Grafeas and Kritis as part of the IBM Container Service on IBM Cloud, and to integrate its Vulnerability Advisor and DevOps tools with the Grafeas API. Read more on IBM’s blog
  • Black Duck is collaborating with Google on the Google artifact metadata API implementation of Grafeas, bringing improved enterprise-grade open-source security to Google Container Registry and Google Container Engine. Read more on Black Duck’s blog
  • Twistlock will integrate with Grafeas to publish detailed vulnerability and compliance data directly into orchestration tooling, giving customers more insight and confidence about their container operations. Read more on Twistlock’s blog.
  • Aqua Security will integrate with Grafeas to publish vulnerabilities and violations, and to enforce runtime security policies based on component metadata information. Read more on Aqua’s blog
  • CoreOS is exploring integrations between Grafeas and Tectonic, its enterprise Kubernetes platform, allowing it to extend its image security scanning and application lifecycle governance capabilities. 

Already, several contributors are planning upcoming Grafeas releases and integrations:

  • JFrog’s Xray implementation of Grafeas API 
  • A Google artifact metadata API implementation of Grafeas, together with Google Container Registry vulnerability scanning 
  • Bi-directional metadata sync between JFrog Xray and the Google artifact metadata API 
  • Black Duck integration with Grafeas and the Google artifact metadata API 
Building on this momentum, we expect numerous other contributions to the Grafeas project early in 2018.

Join us!

The way we build and deploy software is undergoing fundamental changes. If organizations operating at scale are to reap the benefits of containers, microservices, open source and hybrid cloud, they need a strong governance layer to underpin their software development processes. Here are some ways you can learn more about and contribute to the project:


 We hope you will join us!

Better tools for teams of all sizes

We’ve heard feedback from businesses of all sizes that they need simpler ways to manage the analytics products they use and the team members who use them. That’s why we’re making new controls available to everyone who uses Analytics, Tag Manager, and Optimize, and improving navigation for users of Surveys and Data Studio. These new features will help you more easily manage your accounts, get an overview of your business, and move between products.

Streamlined account management



With centralized account management, you can control user access and permissions across multiple products, like Analytics, Tag Manager, and Optimize.

The first step is to create an organization to represent your business. You then link this organization to all of the different accounts that belong to your business. You can also move accounts between the organizations you create.

Now you have a central location where administrators for your organization can:
  • Create rules for which types of new users should be allowed access to your organization
  • Audit existing users and decide which products and features they should have access to
  • Remove users who have left your organization or no longer need access to the tools
  • See the last time a user in your organization accessed Google Analytics data
  • Allow users to discover who your organization’s admins are and contact them for help


New home page



Setting up an organization also gives you access to a new home page that provides an overview of your business. You’ll be able to manage accounts and settings across products and get insights and quick access to the products and features you use most. For example, you might see a large increase in visitors for a specific Analytics property, and then click through to Analytics to investigate where the visitors are coming from.


Simplified navigation



Finally, you’ll get a unified user experience across products. Common navigation and product headers make it easy to switch between products and access the data you need. You can view accounts by organization, or see everything you have access to in one place. We’ve also redesigned search, making it possible to search across all of your accounts in a single place.


Get started



If your business would benefit from these features, please visit this page to get started. You can also check out the help center for more info.

These updates will be rolling out over the next few weeks, so please stay tuned if you don’t yet have access.

Note: If you’re using the enterprise versions of our products, like Analytics 360, you already have access to these features as part of the Google Analytics 360 Suite.

Google Code-in 2017 is seeking organization applications


We are now accepting applications for open source organizations who want to participate in Google Code-in 2017. Google Code-in, a global online contest for pre-university students ages 13-17, invites students to learn by contributing to open source software.

Working with young students is a special responsibility and each year we hear inspiring stories from mentors who participate. To ensure these new, young contributors have a great support system, we select organizations that have gained experience in mentoring students by previously taking part in Google Summer of Code.

Organizations must apply before Tuesday, October 24 at 16:00 UTC.

Last year, 17 organizations were accepted, and over the last 7 years, 4,553 students from 99 different countries have completed more than 23,651 tasks for participating open source projects. Tasks fall into 5 categories:

  • Code: writing or refactoring 
  • Documentation/Training: creating/editing documents and helping others learn more
  • Outreach/Research: community management, outreach/marketing, or studying problems and recommending solutions
  • Quality Assurance: testing and ensuring code is of high quality
  • User Interface: user experience research or user interface design and interaction

Once an organization is selected for Google Code-in 2017, it will define these tasks and recruit mentors who are interested in providing online support for students.

You can find a timeline, FAQ and other information about Google Code-in on our website. If you’re an educator interested in sharing Google Code-in with your students, you can find resources here.

By Josh Simmons, Google Open Source

Now shipping: Compute Engine machine types with up to 96 vCPUs and 624GB of memory


Got compute- and memory-hungry applications? We’ve got you covered, with new machine types that have up to 96 vCPUs and 624 GB of memory—a 50% increase in compute resources per Google Compute Engine VM. These machine types run on Intel Xeon Scalable processors (codenamed Skylake), and offer the most vCPUs of any cloud provider on that chipset. Skylake in turn provides up to 20% faster compute performance, 82% faster HPC performance, and almost 2X the memory bandwidth compared with the previous generation Xeon.1

96 vCPU VMs are available in three predefined machine types:

  • Standard: 96 vCPUs and 360 GB of memory
  • High-CPU: 96 vCPUs and 86.4 GB of memory
  • High-Memory: 96 vCPUs and 624 GB of memory


You can also use custom machine types and extended memory with up to 96 vCPUs and 624GB of memory, allowing you to create exactly the machine shape you need, avoid wasted resources, and pay only for what you use.


The new 624GB Skylake instances are certified for SAP HANA scale-up deployments. And if you want to run even larger HANA analytical workloads, scale-out configurations of up to 9.75TB of memory with 16 n1-highmem-96 nodes are also now certified for data warehouses running BW/4HANA.

You can use these new 96-core machines in beta today in four GCP regions: Central US, West US, West Europe, and East Asia. To get started, visit your GCP Console and create a new instance. Or check out our docs for instructions on creating new virtual machines using the gcloud command line tool.
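For example, with the gcloud CLI you might create one of the new predefined machines, or a custom shape; the instance names and zone below are placeholders:

```shell
# A predefined high-memory machine: 96 vCPUs and 624 GB of memory.
gcloud compute instances create my-memory-hungry-vm \
    --zone us-central1-a \
    --machine-type n1-highmem-96 \
    --min-cpu-platform "Intel Skylake"

# Or a custom machine type: pick exactly the vCPU/memory shape you need.
gcloud compute instances create my-custom-vm \
    --zone us-central1-a \
    --custom-cpu 96 \
    --custom-memory 624GB
```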


Need even more compute power or memory? We’re also working on a range of new, even larger VMs, with up to 4TB of memory. Tell us about your workloads and join our early testing group for new machine types.


1 Based on comparing Intel Xeon Scalable Processor codenamed "Skylake" versus previous generation Intel Xeon processor codenamed "Broadwell." 20% based on SPECint. 82% based on High Performance Linpack for a 4-node cluster with AVX512. Performance improvements include improvements from use of the Math Kernel Library and Intel AVX512. Performance tests are measured using specific computer systems, components, software, operations and functions, and may have been optimized for performance only on Intel microprocessors. Any change to any of those factors may cause the results to vary. You should consult other information to assist you in fully evaluating your contemplated purchases. For more information go to http://www.intel.com/benchmarks

Partnering on open source: Managing Google Cloud Platform with Chef


Managing cloud resources is a critical part of the application lifecycle. That’s why today, we released and open sourced a set of comprehensive cookbooks for Chef users to manage Google Cloud Platform (GCP) resources.

Chef is a continuous automation platform powered by an awesome community. Together, Chef and GCP enable you to drive continuous automation across infrastructure, compliance and applications.

The new cookbooks allow you to define an entire GCP infrastructure using Chef recipes. The Chef server then creates the infrastructure, enforces it, and ensures it stays in compliance. The cookbooks are idempotent, meaning you can reapply them when changes are required and still achieve the same result.

The new cookbooks support the following products:

  • Google Compute Engine
  • Google Container Engine
  • Google Cloud DNS
  • Google Cloud SQL
  • Google Cloud Storage

We also released a unified authentication cookbook that provides a single authentication mechanism for all the cookbooks.

These new cookbooks are Chef certified, having passed the Chef engineering team’s rigorous quality and review bar, and are open source under the Apache 2.0 license on GCP's GitHub repository.

We tested the cookbooks on CentOS, Debian, Ubuntu, Windows and other operating systems. Refer to the operating system support matrix for compatibility details. The cookbooks work with Chef Client, Chef Server, Chef Solo, Chef Zero, and Chef Automate.

To learn more about these Chef cookbooks, register for the webinar that Chef’s JJ Asghar and I are hosting on 15 October 2017.

Getting started with Chef on GCP

Using these new cookbooks is as easy as following these four steps:
  1. Install the cookbooks.
  2. Get a service account with privileges for the GCP resources that you want to manage, and enable the APIs for each of the GCP services you will use.
  3. Describe your GCP infrastructure in Chef:
    1. Define a gauth_credential resource
    2. Define your GCP infrastructure
  4. Run Chef to apply the recipe.
Now, let’s discuss these steps in more detail.

1. Install the cookbooks

You can find all the GCP cookbooks for Chef on Chef Supermarket. We also provide a “bundle” cookbook that installs every GCP cookbook at once. That way you can choose the granularity of the code you pull into your infrastructure.

Note: These Google cookbooks require neither administrator privileges nor special privileges/scopes on the machines that Chef runs on. You can install the cookbooks either as a regular user on the machine that will execute the recipe, or on your Chef server; the latter option distributes the cookbooks to all clients.

The authentication cookbook requires a few of our gems. You can install them using various methods, including using Chef itself:


chef_gem 'googleauth'
chef_gem 'google-api-client'


For more details on how to install the gems, please visit the authentication cookbook documentation.

Now, you can go ahead and install the Chef cookbooks. Here’s how to install them all with a single command:


knife cookbook site install google-cloud


Or, you can install only the cookbooks for select products:


knife cookbook site install google-gcompute    # Google Compute Engine
knife cookbook site install google-gcontainer  # Google Container Engine
knife cookbook site install google-gdns        # Google Cloud DNS
knife cookbook site install google-gsql        # Google Cloud SQL
knife cookbook site install google-gstorage    # Google Cloud Storage


2. Get your service account credentials and enable APIs

To ensure maximum flexibility and portability, you must authenticate and authorize GCP resources using service account credentials. Using service accounts allows you to restrict the privileges to the minimum necessary to perform the job.

Note: Because service accounts are portable, you don’t need to run Chef inside GCP. Our cookbooks run on any computer with internet access, including other cloud providers. You might, for example, execute deployments from within a CI/CD system pipeline such as Travis or Jenkins, or from your own development machine.

Click here to learn more about service accounts, and how to create and enable them.

Also make sure to enable the APIs for each of the GCP services you intend to use.
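A sketch of this setup with the gcloud CLI; the project ID, service account name, and the choice of the Cloud SQL API and role are illustrative:

```shell
# Create a service account and download a JSON key for Chef to use.
gcloud iam service-accounts create chef-automation \
    --display-name "Chef automation"
gcloud iam service-accounts keys create ~/my_account.json \
    --iam-account chef-automation@my-project.iam.gserviceaccount.com

# Grant it only the role(s) it needs, e.g. Cloud SQL administration.
gcloud projects add-iam-policy-binding my-project \
    --member serviceAccount:chef-automation@my-project.iam.gserviceaccount.com \
    --role roles/cloudsql.admin

# Enable the API for each service the recipes will manage.
gcloud services enable sqladmin.googleapis.com
```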

3a. Define your authentication mechanism

Once you have your service account, add the following resource block to your recipe to begin authenticating with it. The resource name, here 'mycred', is referenced by other resources via their credential parameter.


gauth_credential 'mycred' do
  action :serviceaccount
  path '/home/nelsonjr/my_account.json'
  scopes ['https://www.googleapis.com/auth/compute']
end


For further details on how to set up or customize authentication, visit the Google Authentication cookbook documentation.

3b. Define your resources

You can manage any resource for which we provide a type. The example below creates an SQL instance and database in Cloud SQL. For the full list of resources that you can manage, please refer to the respective cookbook documentation link or to this aggregate summary view.


gsql_instance 'my-app-sql-server' do
  action :create
  project 'google.com:graphite-playground'
  credential 'mycred'
end

gsql_database 'webstore' do
  action :create
  charset 'utf8'
  instance 'my-app-sql-server'
  project 'google.com:graphite-playground'
  credential 'mycred'
end


Note that the above code has to be described in a recipe within a cookbook. We recommend you have a “profile” wrapper cookbook that describes your infrastructure, and reference the Google cookbooks as a dependency.
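One minimal way to lay out such a wrapper cookbook by hand (the cookbook name mycloud and the dependency list are illustrative; `chef generate cookbook` from the ChefDK produces equivalent scaffolding):

```shell
# Create a minimal "profile" cookbook to describe your infrastructure...
mkdir -p mycloud/recipes

# ...and declare the Google cookbooks it uses as dependencies
# in its metadata.
cat > mycloud/metadata.rb <<'EOF'
name 'mycloud'
version '0.1.0'
depends 'google-gauth'
depends 'google-gsql'
EOF
```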

4. Apply your recipe

Next, we direct Chef to enforce the recipe in the “profile” cookbook. For example:

$ chef-client -z --runlist 'recipe[mycloud::myapp]'

In this example, mycloud is the “profile” cookbook, and myapp is the recipe that contains the GCP resource declarations.

Please note that you can apply the recipe from anywhere that Chef can execute recipes (client, server, automation), once or multiple times, or periodically in the background using an agent.

Next steps

Now you're ready to start managing GCP resources with Chef, and to start reaping the benefits of cross-cloud configuration management. We plan to continue improving the cookbooks and adding support for more Google products. We're also preparing to release the technology used to create these cookbooks as open source. If you have questions about this effort, please visit the Chef on GCP discussions forum, or reach out to us at [email protected].

Announcing more Open Source Peer Bonus winners

We’re excited to announce 2017’s second round of Open Source Peer Bonus winners. Google Open Source established this program six years ago to encourage Googlers to recognize and celebrate the external developers contributing to the open source ecosystem Google depends on.

The Open Source Peer Bonus program works like this: Googlers nominate open source developers outside of the company who deserve recognition for their contributions to open source projects, including those used by Google. Nominees are reviewed by a volunteer team of engineers and the winners receive our heartfelt thanks and a small token of our appreciation.

To date, we’ve recognized nearly 600 open source contributors from dozens of countries who have contributed their time and talent to more than 400 open source projects. You can find past winners in recent blog posts: Fall 2016, Spring 2017.

Without further ado, we’d like to recognize the latest round of winners and the projects they worked on. Here are the individuals who gave us permission to thank them publicly:

  • Mo Jangda, AMP Project
  • Osvaldo Lopez, AMP Project
  • Jason Jean, Angular CLI
  • Henry Zhu, Babel
  • Oscar Boykin, Bazel Scala rules
  • Francesc Alted, Blosc
  • Matt Holt, Caddy
  • Martijn Croonen, Chromium
  • Raphael Costa, Chromium
  • Mariatta Wijaya, CPython
  • Victor Stinner, CPython
  • Derek Parker, Delve
  • Thibaut Courouble, devdocs
  • David Lechner, ev3dev
  • Michael Niedermayer, FFmpeg
  • Mathew Huusko, Firebase
  • Armin Ronacher, Flask
  • Nenad Stojanovski, Forseti Security
  • Solly Ross, Heapster
  • Bjørn Pedersen, Hugo
  • Brion Vibber, JS-Interpreter
  • Xiaoyu Zhang, Kubernetes
  • Anton Kachurin, Material Components for the Web
  • Eric Tang, Material Motion
  • Nicholas Tollervey, micro:bit and Mu
  • Damien George, MicroPython
  • Tom Spilman, MonoGame
  • Arthur Edge, NARKOZ/gitlab
  • Sebastian Berg, NumPy
  • Bogdan-Andrei Iancu, OpenSIPS
  • Amit Ambasta, OR-tools
  • Michael Powell, OR-tools
  • Westbrook Johnson, Polymer
  • Marten Seemann, quic-go
  • Fabian Henneke, Secure Shell
  • Chris Fillmore, Shaka Player
  • Takeshi Komiya, Sphinx
  • Dan Kennedy, SQLite
  • Joe Mistachkin, SQLite
  • Richard Hipp, SQLite
  • Yuxin Wu, Tensorpack
  • Michael Herzog, three.js
  • Takahiro Aoyagi, three.js
  • Jelle Zijlstra, Typeshed
  • Verónica López, Women Who Go

Thank you all so much for your contributions to the open source community and congratulations on being selected!

By Maria Webb, Google Open Source

Introducing custom roles, a powerful way to make Cloud IAM policies more precise



As enterprises move their applications, services and data to the cloud, it’s critical that they put appropriate access controls in place to help ensure that the right people can access the right data at the right time. That’s why we’re excited to announce the beta release of custom roles for Cloud IAM.

Custom roles offer customers full control of 1,287 public permissions across Google Cloud Platform services. This helps administrators grant users the permissions they need to do their jobs — and only those permissions. Fine-grained access controls help enforce the principle of least privilege for resources and data on GCP.

“Verily is using custom roles to uphold the highest standards of patient trust by carefully managing the granularity of data access granted to people and programs based on their ‘need-to-know’.” — Harriet Brown, Product Manager for Trust, Compliance, and Data Security at Verily Life Sciences 

Understanding IAM roles 

IAM offers three primitive roles (Owner, Editor, and Viewer) that make it easy to get started, and over one hundred service-specific predefined roles that combine a curated set of permissions necessary to complete different tasks across GCP. In many cases, predefined roles are sufficient for controlling access to GCP services. For example, the Cloud SQL Viewer predefined role combines 14 permissions necessary to allow users to browse and export databases.

Custom roles complement the primitive and predefined roles when you need to be even more precise. For example, an auditor may only need to access a database to gather audit findings so they know what data is being collected, but not to read the actual data or perform any other operations. You can build your own “Cloud SQL Inventory” custom role to grant auditors browse access to databases without giving them permission to export their contents.

How to create custom roles 

To begin crafting custom roles, we recommend starting from the available predefined roles. These predefined roles are appropriate for most use cases and often only need small changes to the permissions list to meet an organization's requirements. Here’s how you could implement a custom role for the above use case:

Step 1: Select the predefined role that you’d like to customize, in this case Cloud SQL Viewer:
Step 2: Clone the predefined role and give it a custom name and ID.  Add or remove the desired permissions for your new custom role. In this case, that’s removing cloudsql.instances.export.
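On the command line, an equivalent flow might look like this; the project ID, role ID, and exact permission list are illustrative:

```shell
# Create a custom "Cloud SQL Inventory" role that can browse
# databases and instances but not export their contents.
gcloud beta iam roles create cloudSqlInventory \
    --project my-project \
    --title "Cloud SQL Inventory" \
    --description "Browse Cloud SQL databases without export" \
    --permissions cloudsql.databases.get,cloudsql.databases.list,cloudsql.instances.get,cloudsql.instances.list \
    --stage BETA
```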

How to use custom roles 

Custom roles are available now in the Cloud Console, on the Roles tab under the ‘IAM & admin’ menu; as a REST API; and on the command line as gcloud beta iam. As you create a custom role, you can also assign it a lifecycle stage to inform your users about the readiness of the role for production usage.

IAM supports custom roles for projects and across entire organizations to centralize development, testing, maintenance, and sharing of roles.


Maintaining custom roles 

When using custom roles, it’s important to track what permissions are associated with the roles you create, since available permissions for GCP services evolve and change over time. Unlike GCP predefined roles, you control if and when permissions are added or removed. Returning to our example, if new features are added to the Cloud SQL service — with corresponding new permissions — then you decide whether to add the new permissions to your customized “SQL Inventory” role as you see fit. During your testing, the Cloud Console’s appearance may vary for users who are granted custom roles, since UI elements may be enabled or disabled by specific permissions. To help maintain your custom roles, you can refer to the new IAM permissions change log to find all changes to beta and GA services’ permissions.

Get started! 

Interested in customizing Cloud IAM roles in your GCP project? Check out the detailed step-by-step instructions on how to get started here. We hope Cloud IAM custom roles make it easier for organizations to align access controls to their business processes. In conjunction with resource-level IAM policies, which can control access down to specific resources such as Pub/Sub topics or Machine Learning models, security administrators now have the power to publish policies as precise as granting a single user just one permission on a resource — or on whole folders full of projects. We welcome your feedback.

Google Container Engine – Kubernetes 1.8 takes advantage of the cloud built for containers



Next week, we will roll out Kubernetes 1.8 to Google Container Engine for early access customers. In addition, we are adding significant new functionality to Google Cloud to give Container Engine customers a great experience across Kubernetes releases. As a result, Container Engine customers get new features that are only available on Google Cloud Platform, such as highly available clusters, cluster auto-scaling and auto-repair, GPU hardware support, container-native networking and more.

Since we founded Kubernetes back in 2014, Google Cloud has been the leading contributor to the Kubernetes open source project in every release, including 1.8. We test, stage and roll out Kubernetes on Google Cloud, and the same team that writes it supports it, ensuring you receive the latest innovations faster, without risk of compatibility breaks or support hassles.

Let’s take a look at the new Google Cloud enhancements that make Kubernetes run so well.

Speed and automation 

Earlier this week we announced that Google Compute Engine, Container Engine and many other GCP services have moved from per-minute to per-second billing. We also lowered the minimum run charge to one minute from 10 minutes, giving you even finer granularity so you only pay for what you use.

Many of you appreciate how quickly you can spin up a cluster on Container Engine. We’ve made it even faster, improving cluster startup time by 45%, so you’re up and running sooner and better positioned to benefit from the new one-minute minimum charge. These improvements also apply to scaling your existing node pools.

A long-standing ask has been high availability masters for Container Engine. We are pleased to announce early access support for high availability, multi-master Container Engine clusters, which increase our SLO to 99.99% uptime. You can elect to run your Kubernetes masters and nodes in up to three zones within a region for additional protection from zonal failures. Container Engine seamlessly shifts load away from failed masters and nodes when needed. Sign up here to try out high availability clusters.

In addition to speed and simplicity, Container Engine automates Kubernetes in production, giving developers choice, and giving operators peace of mind. We offer several powerful Container Engine automation features:

  • Node Auto-Repair is in beta and opt-in. Container Engine uses the Kubernetes Node Problem Detector to find common problems and proactively repair your nodes and clusters. 
  • Node Auto-Upgrade is generally available and opt-in. Cluster upgrades are a critical Day 2 task, and to give you automation with full control, we now offer Maintenance Windows (beta) to specify when you want Container Engine to auto-upgrade your masters and nodes. 
  • Custom metrics on the Horizontal Pod Autoscaler will soon be in beta so you can scale your pods on metrics other than CPU utilization. 
  • Cluster Autoscaling is generally available, with performance improvements enabling up to 1,000 nodes with up to 30 pods on each, and lets you specify a minimum and maximum number of nodes for your cluster. It automatically grows or shrinks your cluster depending on workload demands. 
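Several of these automation features are opt-in flags on node pools. The following is a sketch with hypothetical cluster, pool and zone names; flag availability may vary by release and some of these flags are in beta:

```shell
# Node pool with auto-repair, auto-upgrade and autoscaling enabled
# (cluster, pool and zone names are illustrative)
gcloud beta container node-pools create my-pool \
    --cluster my-cluster --zone us-central1-a \
    --enable-autorepair --enable-autoupgrade \
    --enable-autoscaling --min-nodes 1 --max-nodes 10
```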

Container-native networking: a Container Engine exclusive, only on GCP

Container Engine now takes better advantage of GCP’s unique, software-defined network with first-class Pod IPs and multi-cluster load balancing.

  • Aliased IP support is in beta. With aliased IP support, you can take advantage of several network enhancements and features, including support for connecting Container Engine clusters over a Peered VPC. Aliased IPs are available for new clusters only; support for migrating existing clusters will be added in an upcoming release. 
  • Multi-cluster ingress will soon be in alpha. You will be able to construct highly available, globally distributed services by easily setting up Google Cloud Load Balancing to serve your end users from the closest Container Engine cluster. To apply for access, please fill out this form. 
  • Shared VPC support will soon be in alpha. You will be able to create Container Engine clusters on a VPC shared by multiple projects in your cloud organization. To apply for access, please fill out this form. 


Machine learning and hardware acceleration

Machine learning, data analytics and Kubernetes work especially well together on Google Cloud. Container Engine with GPUs turbocharges compute-intensive applications like machine learning, image processing, artificial intelligence and financial modeling. This release brings you managed CUDA-as-a-Service in containers. Big data is also better on Container Engine with new features that make GCP storage accessible from Spark on Kubernetes.

  • NVIDIA Tesla P100 GPUs are available in alpha clusters. In addition to the NVIDIA Tesla K80, you can now create a node with up to 4 NVIDIA P100 GPUs. P100 GPUs can accelerate your workloads by up to 10x compared to the K80! If you are interested in alpha testing your CUDA models in Container Engine, please sign up for the GPU alpha. 
  • Cloud Storage is now accessible from Spark. Spark on Kubernetes can now use Google BigQuery and Google Cloud Storage as data sources and sinks via the bigdata-interop connectors. 
  • CronJobs are now in beta so you can now schedule cron jobs such as data processing pipelines to run on a given schedule in your production clusters! 
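As an illustration of the CronJob feature, a minimal manifest might look like the following sketch. The name, schedule and container image are hypothetical; CronJob uses the batch/v1beta1 API group in Kubernetes 1.8.

```yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: nightly-pipeline            # illustrative name
spec:
  schedule: "0 2 * * *"             # run every day at 02:00
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: pipeline
            image: gcr.io/my-project/pipeline:latest  # illustrative image
          restartPolicy: OnFailure
```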

Extensibility 

As more enterprises use Container Engine, we are actively improving extensibility so you can match Container Engine to your environment and standards.


Security and reliability 

We designed Container Engine with enterprise security and reliability in mind. This release adds several new enhancements.

  • Role Based Access Control (RBAC) is now generally available. This feature allows a cluster administrator to specify fine-grained policies describing which users, groups, and service accounts are allowed to perform which operations on which API resources. 
  • Network Policy Enforcement using Calico is in beta. Starting from Kubernetes 1.7.6, you can help secure your Container Engine cluster with network policy pod-to-pod ingress rules. Kubernetes 1.8 adds support for CIDR-based rules, allowing you to whitelist access to resources outside your Kubernetes cluster (e.g., VMs, hosted services, and even public services) so that you can integrate your Kubernetes application with other IT services and investments. You can also now specify pod-to-pod egress rules, providing the tighter controls needed to ensure service integrity. Learn more here. 
  • Node Allocatable is generally available. Container Engine includes the Kubernetes Node Allocatable feature for more accurate resource management, providing higher node stability and reliability by protecting node components from out-of-resource issues. 
  • Priority / Preemption is in alpha clusters. Container Engine implements Kubernetes Priority and Preemption so you can associate priority pods to priority levels such that you can preempt lower-priority pods to make room for higher-priority ones when you have more workloads ready to run on the cluster than there are resources available. 
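To give a sense of the now-GA RBAC model, here is a sketch of a Role and RoleBinding granting read-only pod access in one namespace. The names and user are illustrative; RBAC is served from the rbac.authorization.k8s.io/v1 API group as of Kubernetes 1.8.

```yaml
# Read-only access to pods in the "default" namespace
# (role, binding and user names are illustrative)
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: default
  name: read-pods
subjects:
- kind: User
  name: jane@example.com
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```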

Enterprise-ready container operations - monitoring and management designed for Kubernetes 

In Kubernetes 1.7, we added view-only workload, networking, and storage views to the Container Engine user interface. In 1.8, we display even more information, enable more operational and development tasks without having to leave the UI, and improve integration with Stackdriver and Cloud Shell.



The following features are all generally available:

  • Easier configurations: You can now view and edit your YAML files directly in the UI. We also added easy-to-use shortcuts for the most common user actions, like rolling updates or scaling a deployment.
  • Node information details: Cluster view now shows details such as node health status, relevant logs, and pod information so you can easily troubleshoot your clusters and nodes. 
  • Stackdriver Monitoring integration: Your workload views now include charts showing CPU, memory, and disk usage. We also link to corresponding Stackdriver pages for a more in-depth look. 
  • Cloud Shell integration: You can now generate and execute exact kubectl commands directly in the browser. No need to manually switch context between multiple clusters and namespaces or copy and paste. Just hit enter! 
  • Cluster recommendations: Recommendations in cluster views suggest ways that you can improve your cluster, for example, turning on autoscaling for underutilized clusters or upgrading nodes for version alignment. 

In addition, Audit Logging is available to early access customers. This feature enables you to view your admin activity and data access as part of Cloud Audit Logging. Please complete this form to take part in the Audit Logging early access program.


Container Engine everywhere 

Container Engine customers are global. To keep up with demand, we’ve expanded our global capacity to include our latest GCP regions: Frankfurt (europe-west3), Northern Virginia (us-east4) and São Paulo (southamerica-east1). With these new regions Container Engine is now available in a dozen locations around the world, from Oregon to Belgium to Sydney.



Customers of all sizes have been benefiting from containerizing their applications and running them on Container Engine. Here are a couple of recent examples:

Mixpanel, a popular product analytics company, processes 5 trillion data points every year. To keep performance high, Mixpanel uses Container Engine to automatically scale resources.

“All of our applications and our primary database now run on Google Container Engine. Container Engine gives us elasticity and scalable performance for our Kubernetes clusters. It’s fully supported and managed by Google, which makes it more attractive to us than elastic container services from other cloud providers,” says Arya Asemanfar, Engineering Manager at Mixpanel. 

RealMassive, a provider of real-time commercial real estate information, was able to cut its cloud hosting costs in half by moving to microservices on Container Engine.

“What it comes down to for us is speed-to-market and cost. With Google Cloud Platform, we can confidently release services multiple times a day and launch new markets in a day. We’ve also reduced our cloud hosting costs by 50% by moving to microservices on Google Container Engine,” says Jason Vertrees, CTO at RealMassive. 

Bitnami, an application package and deployment platform, shows you how to use Container Engine networking features to create a private Kubernetes cluster that enforces service privacy so that your services are available internally but not to the outside world.

Try it today! 

In a few days, all Container Engine customers will have access to Kubernetes 1.8 in alpha clusters. These new updates will help even more businesses run Kubernetes in production to get the most from their infrastructure and application delivery. If you want to be among the first to access Kubernetes 1.8 on your production clusters, please join our early access program.

You can find the complete list of new features in the Container Engine release notes. For more information, visit our website or sign up for our free trial.

Introducing Network Policy support for Google Container Engine, with Project Calico and Tigera



[Editor’s Note: Today we announced the beta of Kubernetes Network Policy in Google Container Engine, a feature we implemented in close collaboration with our partner Tigera, the company behind Project Calico. Read on for more details from Tigera co-founder and vice president of product, Andy Randall.]

When it comes to network policy, a lot has changed. Back in the day, we protected enterprise data centers with a big expensive network appliance called a firewall that allowed you to define rules about what traffic to allow in and out of the data center. In the cloud world, virtual firewalls provide similar functionality for virtual machines. For example, Google Compute Engine allows you to configure firewall rules on VPC networks.

In a containerized microservices environment such as Google Container Engine, network policy is particularly challenging. Traditional firewalls provide great perimeter security for intrusion from outside the cluster (i.e., “north-south” traffic), but aren’t designed for finer-grained “east-west” traffic within the cluster. And while Container Engine automates the creation and destruction of containers (each with its own IP address), not only do you have many more IP endpoints than before, but the automated create-run-destroy lifecycle of a container can result in churn of up to 250x that of virtual machines.

Traditional firewall rules are no longer sufficient for containerized environments; we need a more dynamic, automated approach that is integrated with the orchestrator. (For those interested in why we can’t just continue with traditional virtual network / firewall approaches, see Christopher Liljenstolpe’s blog post, Micro-segmentation in the Cloud Native World.)


We think the Kubernetes Network Policy API and the Project Calico implementation present a solution to this challenge. Given Google’s leadership role in the community, and its commitment to running Container Engine on the latest Kubernetes release, it’s only natural that they would be the first to include this capability in their production hosted Kubernetes service, and we at Tigera are delighted to have helped support this effort.


Kubernetes Network Policy 1.0

What exactly does Kubernetes Network Policy let you do? It allows you to easily specify what connectivity is allowed within your cluster and what should be blocked. (It is a stable API as of Kubernetes v1.7.)

You can find the full API definition in the Kubernetes documentation but the key points are as follows:
  • Network policies are defined by the NetworkPolicy resource type. These are applied to the Kubernetes API server like any other resource (e.g., kubectl apply -f my-network-policy.yaml).
  • By default, all pods in a namespace allow unrestricted access. That is, they can accept incoming network connections from any source.
  • A NetworkPolicy object contains a selector expression (“podSelector”) that selects a set of pods to which the policy applies, and the rules about which incoming connections will be allowed (“ingress” rules). Ingress rules can be quite flexible, including their own namespace selector or pod selector expressions.
  • Policies apply to a namespace. Every pod in that namespace selected by the policy’s podSelector will have the ingress rules applied, so any connection attempts that are not explicitly allowed are rejected. Calico enforces this policy extremely efficiently using iptables rules programmed into the underlying host’s kernel. 


Here is an example NetworkPolicy resource to give you a sense of how this all fits together:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
spec:
  podSelector:
    matchLabels:
      role: db
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          project: myproject
    - podSelector:
        matchLabels:
          role: frontend
    ports:
    - protocol: TCP
      port: 6379


In this case, the policy is called “test-network-policy” and applies to the default namespace. It restricts inbound connections to every pod in the default namespace that has the label “role: db” applied. The selectors are disjunctive, i.e., either can be true. That means that connections can come from any pod in a namespace with label “project: myproject”, OR any pod in the default namespace with the label “role: frontend”. Further, these connections must be on TCP port 6379 (the standard port for Redis).

As you can see, Network Policy is an intent-based policy model, i.e., you specify your desired end-state and let the system ensure it happens. As pods are created and destroyed, or a label (such as “role: db”) is applied or deleted from existing pods, you don’t need to update anything: Calico automatically takes care of things behind the scenes and ensures that every pod on every host has the right access rules applied.

As you can imagine, that’s quite a computational challenge at scale, and Calico’s policy engine contains a lot of smarts to meet Container Engine’s production performance demands. The good news is that you don’t need to worry about that. Just apply your policies and Calico takes care of the rest.


Enabling Network Policy in Container Engine

For new and existing clusters running at least Kubernetes v1.7.6, you can enable network policy on Container Engine via the UI, CLI or API. For new clusters, simply set the flag (or check the box in the UI) when creating the cluster. For existing clusters there is a two-step process:
  1. Enable the network policy add-on on the master.
  2. Enable network policy for the entire cluster’s node-pools.
Here’s how to do that during cluster creation:

# Create a cluster with Network Policy enabled
gcloud beta container clusters create <CLUSTER> --project=<PROJECT_ID> \
    --zone=<ZONE> --enable-network-policy --cluster-version=1.7.6


Here’s how to do it for existing clusters:

# Enable the addon
gcloud beta container clusters update <CLUSTER> --project=<PROJECT_ID> \
    --zone=<ZONE> --update-addons=NetworkPolicy=ENABLED

# Enable on nodes (this re-creates the node pools)
gcloud beta container clusters update <CLUSTER> --project=<PROJECT_ID> \
    --zone=<ZONE> --enable-network-policy

Looking ahead 

Environments running Kubernetes 1.7 can use the NetworkPolicy API capabilities that we discussed above, essentially ingress rules defined by selector expressions. However, you can imagine wanting to do more, such as:

  • Applying egress rules (restricting which outbound connections a pod can make) 
  • Referring to IP addresses or ranges within rules 

The good news is that the new Kubernetes 1.8 includes these capabilities, and Google and Tigera are working together to make them available in Container Engine. And, beyond that, we are working on even more advanced policy capabilities. Watch this space!
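To give a sense of what these 1.8 capabilities look like, here is a sketch of a policy combining egress rules with a CIDR-based ipBlock rule. The policy name, labels, address range and port are all illustrative:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-frontend-egress   # illustrative name
spec:
  podSelector:
    matchLabels:
      role: frontend
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 10.10.0.0/24         # illustrative external range
    ports:
    - protocol: TCP
      port: 443
```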

Attend our joint webinar! 

Want to learn more? Google Product Manager Matt DeLio will join Casey Davenport, the Kubernetes Networking SIG leader and a software engineer at Tigera, to talk about best practices and design patterns for securing your applications with network policy. Register here for the October 5th webinar.