Tag Archives: Containers & Kubernetes

What a year! Google Cloud Platform in 2017



The end of the year is a time for reflection . . . and making lists. As 2017 comes to a close, we thought we’d review some of the most memorable Google Cloud Platform (GCP) product announcements, white papers and how-tos, as judged by popularity with our readership.

As we pulled the data for this post, some definite themes emerged about your interests when it comes to GCP:
  1. You love to hear about advanced infrastructure: CPUs, GPUs, TPUs, better network plumbing and more regions. 
  2. How we harden our infrastructure is endlessly interesting to you, as are tips about how to use our security services. 
  3. Open source is always a crowd-pleaser, particularly if it presents a cloud-native solution to an age-old problem. 
  4. You’re inspired by Google innovation — unique technologies that we developed to address internal, Google-scale problems.

So, without further ado, we present to you the most-read stories of 2017.

Cutting-edge infrastructure

If you subscribe to the “bigger is always better” theory of cloud infrastructure, then you were a happy camper this year. Early in 2017, we announced that GCP would be the first cloud provider to offer the Intel Skylake architecture; GPUs for Compute Engine and Cloud Machine Learning became generally available; and Shazam talked about why cloud GPUs made sense for them. In the spring, you devoured a piece on the performance of TPUs, and another about the then-largest cloud-based compute cluster. We announced yet more new GPU models, and topping it all off, Compute Engine began offering machine types with a whopping 96 vCPUs and 624GB of memory.

It wasn’t just our chip offerings that grabbed your attention — you were pretty jazzed about Google Cloud network infrastructure too. You read deep dives about Espresso, our peering-edge architecture, TCP BBR congestion control and improved Compute Engine latency with Andromeda 2.1. You also dug stories about new networking features: Dedicated Interconnect, Network Service Tiers and GCP’s unique take on sneakernet: Transfer Appliance.

What’s the use of great infrastructure without somewhere to put it? 2017 was also a year of major geographic expansion. We started out the year with six regions, and ended it with 13, adding Northern Virginia, Singapore, Sydney, London, Germany, São Paulo and Mumbai. This was also the year that we shed our Earthly shackles, and expanded to Mars ;)

Security above all


Google has historically gone to great lengths to secure our infrastructure, and this was the year we discussed some of those advanced techniques in our popular Security in plaintext series. Among them: 7 ways we harden our KVM hypervisor, Fuzzing PCI Express and Titan in depth.

You also grooved on new GCP security services: Cloud Key Management and managed SSL certificates for App Engine applications. Finally, you took heart in a white paper on how to implement BeyondCorp as a more secure alternative to VPN, and support for the European GDPR data protection laws across GCP.

Open, hybrid development


When you think about GCP and open source, Kubernetes springs to mind. We open-sourced the container management platform back in 2014, but this year we showed that GCP is an optimal place to run it. It’s consistently among the first cloud services to run the latest version (most recently, Kubernetes 1.8) and comes with advanced management features out of the box. And as of this fall, it’s certified as a conformant Kubernetes distribution, complete with a new name: Google Kubernetes Engine.

Part of Kubernetes’ draw is as a platform-agnostic stepping stone to the cloud. Accordingly, many of you flocked to stories about Kubernetes and containers in hybrid scenarios. Think Pivotal Container Service and Kubernetes’ role in our new partnership with Cisco. The developers among you were smitten with Cloud Container Builder, a stand-alone tool for building container images, regardless of where you deploy them.

But our open source efforts aren’t limited to Kubernetes — we also made significant contributions to Spinnaker 1.0, and helped launch the Istio and Grafeas projects. You ate up our "Partnering on open source" series, featuring the likes of HashiCorp, Chef, Ansible and Puppet. Availability-minded developers loved our Customer Reliability Engineering (CRE) team’s missive on release canaries, and with API design: Choosing between names and identifiers in URLs, our Apigee team showed them a nifty way to have their proverbial cake and eat it too.

Google innovation


In distributed database circles, Google’s Spanner is legendary, so many of you were delighted when we announced Cloud Spanner, along with a discussion of how it defies the CAP Theorem. Having a scalable database that offers strong consistency and great performance seemed to really change your conception of what’s possible — as did Cloud IoT Core, our platform for connecting and managing “things” at scale. CREs, meanwhile, showed you the Google way to handle an incident.

2017 was also the year machine learning became accessible. For those of you with large datasets, we showed you how to use Cloud Dataprep, Dataflow, and BigQuery to clean up and organize unstructured data. It turns out you don’t need a PhD to learn to use TensorFlow, and for visual learners, we explained how to visualize a variety of neural net architectures with TensorFlow Playground. One Google Developer Advocate even taught his middle-school son TensorFlow and basic linear algebra, as applied to a game of rock-paper-scissors.

Natural language processing also became a mainstay of machine learning-based applications; here, we highlighted it with a lighthearted and relatable example. We launched the Video Intelligence API and showed how Cloud Machine Learning Engine simplifies the process of training a custom object detector. And the makers among you really went for a post that shows you how to add machine learning to your IoT projects with the Google AIY Voice Kit. Talk about accessible!

Lastly, we want to thank all our customers, partners and readers for your continued loyalty and support this year, and wish you a peaceful, joyful holiday season. And be sure to rest up and visit us again next year. Because if you thought we had a lot to say in 2017, well, hold onto your hats.

Out of one, many: Using Jenkins, GCP projects and service accounts at Catalant



[Editor’s Note: Today we hear from Catalant, an on-demand staffing provider that connects consultants and enterprises with a Software as a Service (SaaS) application that’s built on Google Cloud Platform (GCP). Using Jenkins, Google projects and service accounts, Catalant was able to build a single shared environment across production, development and sales that was easy to manage and that satisfied its compliance and regulatory requirements.]

If your organization provides a SaaS application, you probably have multiple environments: production, of course, but also demo, test, staging and integration environments to support various use cases. From a management perspective, you want to share resources across all those environments so you have the fewest moving parts. But ease of management and robust security are often at odds. For security purposes, the best practice is a separate project for each environment, where nothing is shared and there's complete isolation.

Here at Catalant, we approached this problem by taking a step back and understanding the requirements from different parts of the organization:
  1. Compliance: Each environment and its data needs to be secure and not be shared. 
  2. Sales: We need an environment that lets us control the data, so that we can give a consistent, predictable demo. 
  3. Development: We need an environment where we can test things before putting them into production. 
  4. Engineering Management: We need continuous integration and continuous deployment (CI/CD). Also, developers should not be required to use GCP-specific tools.
Based on these requirements, we elected to go with a single shared Jenkins project to manage CI/CD activities for all the environments (test, demo, prod) that we may bring up, which satisfied developers and engineering management. Google Cloud’s concept of projects, meanwhile, addressed the compliance team’s concerns with fortified boundaries that by default do not allow unauthorized traffic into the environment. Finally, we used service accounts to allow projects to communicate with one another.
Figure 1. Jenkins Pipeline
Figure 2. Projects Layout

We built this environment on Google Compute Engine. And while it’s out of the scope of this article to show how to build this out on Google Kubernetes Engine (formerly Container Engine), the sections below show you how to do it yourself:

Creating a service account


By default, when a developer creates a project, GCP also creates a default Compute Engine service account, which can be used to access resources in its own project as well as, with the right permissions, resources in other projects. We took advantage of this service account to access the Jenkins project resources.

We store all the images that we build with the Jenkins project in Container Registry. We provided “Storage Object Viewer” access for each project’s default service account so that the images can be deployed (via pull access) into an environment-specific project. In addition, to deploy the containers, we created a Jenkins service account that can authenticate into projects’ Kubernetes clusters for a specific namespace.
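As a hedged sketch, granting that pull access can be a single IAM binding per environment project (the project ID and service account address below are illustrative; a project's default Compute Engine service account follows the <project-number>-compute@developer.gserviceaccount.com pattern):

# Hypothetical example: allow an environment project's default service
# account to pull images from the Jenkins project's Container Registry
gcloud projects add-iam-policy-binding jenkins-project \
    --member serviceAccount:123456789012-compute@developer.gserviceaccount.com \
    --role roles/storage.objectViewer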

Here’s how to create a service account based on a namespace:

Step 1 - Create a service account:

kubectl create serviceaccount <sa-name> --namespace <ns-name>

This command creates a service account on the destination project. This service account will be used by Jenkins in order to authenticate into the destination cluster.

Step 2 - Verify the new service account:

kubectl get serviceaccount <sa-name> --namespace <ns-name> -o yaml

This checks that the service account was successfully created, and outputs the service account details in YAML format.

Step 3 - Get the secret name:

kubectl get sa <sa-name> -o json --namespace <ns-name> | jq -r '.secrets[].name'

This retrieves the secret name associated with the service account created in Step 1.

Step 4 - Get the certificate:

kubectl get secret <secret-name> -o json --namespace <ns-name> | jq -r '.data["ca.crt"]' | base64 -d > ca.crt

This gets the certificate details from the secret, decodes the certificate data and stores it in a file named ca.crt, which will be used to authenticate into the cluster.

Step 5 - Get the token:

kubectl get secret <secret-name> -o json --namespace <ns-name> | jq -r '.data["token"]' | base64 -d

This command gets the token from the secret and decodes the token to plain text. The token will be used in order to authenticate into the cluster.

Step 6 - Get the IP address of the cluster:

kubectl config view -o yaml | grep server

This outputs the cluster’s server entry (the endpoint IP address) from the kubeconfig; Jenkins uses it in the next section to reach the destination cluster.

Allowing cross-project access


When Jenkins does a deploy, it needs to authenticate into each project's Kubernetes cluster. In the Jenkins application, we created the service account’s token and certificate as credentials. The steps below show how to authenticate into a different project, known as cross-project access.

Again, let’s explain what each step does:

Step 1 - Set a cluster entry in kubeconfig:

kubectl config set-cluster <cluster-name> --embed-certs=true --server=<cluster-ip> --certificate-authority=<path/to/certificate>

where
  • <cluster-name> can be any name 
  • --embed-certs=true embeds the certificates for the cluster entry in kubeconfig 
  • --server=<cluster-ip> is the IP of the cluster we’re trying to authenticate into, namely the IP retrieved in Step 6 of the service account creation process 
  • --certificate-authority=<path/to/certificate> is the path to the certificate file (ca.crt) we generated in Step 4 of the service account creation section above 
Step 2 - Set the user entry in kubeconfig:

kubectl config set-credentials <credentials-name> --token=<token-value>

where
  • <credentials-name> can be any name 
  • --token=<token-value> is the token value that was decoded during Step 5 of the previous section 
Step 3 - Set the context entry in kubeconfig:

kubectl config set-context <context-name> --cluster=<cluster-name> --user=<credentials-name> --namespace=<ns-name>

where
  • <context-name> can be any name 
  • --cluster=<cluster-name> is the cluster name set up in Step 1 above 
  • --user=<credentials-name> is the credentials name set up in Step 2 above 
  • --namespace=<ns-name> is the namespace we’d like to interact with 
Step 4 - Set the current-context in a kubeconfig file:
kubectl config use-context <context-name>

Where <context-name> is the context name that we created in Step 3 above. 

After setting up the context, we’re ready to access the destination project cluster. All the kubectl commands will be executed against the destination project cluster. A simple test to verify that we're accessing the destination project cluster successfully is to check for pods.

kubectl get pods -n <ns-name>

If the pods listed are those of the destination project, then you’ve set up the configuration correctly, and all subsequent kubectl commands will run against the destination project’s cluster.

In this setup, bringing up new environments is quick and easy, since the Jenkins environment doesn’t have to be re-created or copied for each Google project. Of course, it does create a single point of failure and a shared resource. It’s important to configure Jenkins correctly, so that work from a single environment can’t starve out the rest. Make sure you have enough resources for the workers, and limit the number of builds per branch to one; that way, multiple commits in quick succession to a branch can’t overload the infrastructure.
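For Jenkins declarative pipelines, one way to enforce that one-build-per-branch rule is the disableConcurrentBuilds option; the fragment below is an illustrative sketch rather than our actual Jenkinsfile:

// Illustrative Jenkinsfile fragment: queue (rather than parallelize) builds
// of the same branch so rapid-fire commits can't exhaust the workers
pipeline {
    agent any
    options {
        disableConcurrentBuilds()
    }
    stages {
        stage('Build') {
            steps {
                sh 'docker build -t gcr.io/jenkins-project/app:$GIT_COMMIT .'
            }
        }
    }
}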

All told, the combination of Jenkins for CI/CD plus Google Cloud projects and service accounts gives us the best of both worlds: a single shared environment that uses resources efficiently and is easy to manage, plus the security and isolation that our compliance and sales teams demanded. If you have questions or comments about this environment, reach out to us. And for more information, visit GoCatalant.com.


Cloud Audit Logging for Kubernetes Engine: Answer the who, what, when of admin accesses



Sometimes, in Google Kubernetes Engine, you want to investigate what’s happened in your cluster. Thankfully, Stackdriver Cloud Audit Logging now supports Kubernetes Engine in beta, and can help you answer “Who did what, where and when?” in your cluster.

To showcase Cloud Audit Logging, we’d like to introduce you to Alice, a DevOps engineer running her environment on Kubernetes Engine. Alice logs into her Kubernetes cluster to inspect the workloads and notices something odd – an instance of FoobarDB. This is peculiar, as to the best of her knowledge, her team is not using FoobarDB.
She examines the pod and realizes it’s actually a pod running a reverse shell that's only masquerading as FoobarDB. Alice decides to mitigate this vulnerability by deleting the offending pod, but wants to understand how it got there in the first place, to prevent future breaches. She turns to Cloud Audit Logging, which is a feature of Stackdriver Logging (our real-time log management and analysis tool) that writes and stores admin activity logs about your project.
While searching the audit logs, Alice finds a “create pod” request for the rogue pod coming from one of the service accounts.
She sees that this service account is associated with the deployment running the company’s PHP frontend (by investigating the “principalEmail” field of the audit log entry) – however, there are hundreds of replicas of that deployment. Fortunately, audit logs contain the IP address of the node where the pod is located. Alice can use that information to find the instance that created the rogue pod. Cloud Audit Logging contains logs about Admin Activity and Data Access, but not application-level logs, so Alice uses her regular logging tool (also Stackdriver in this case) to look at the PHP frontend logs and understand exactly what happened. She manages to track the breach to a vulnerability in the PHP version the frontend is using, so she applies a patch that upgrades the application.

Alice decides to try to prevent similar attacks in the future by using Stackdriver Monitoring features to create an alert based on a log-based metric that will fire if any identity other than the controller manager creates pods.
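A sketch of how such a metric could be created with the gcloud CLI follows; the filter fields are assumptions based on the shape of Kubernetes Engine audit log entries, so adjust them to what you see in your own logs:

# Hypothetical log-based metric counting pod creations made by identities
# other than the Kubernetes controller manager
gcloud logging metrics create unexpected-pod-creations \
    --description="Pod creations not made by the controller manager" \
    --log-filter='resource.type="k8s_cluster"
      AND protoPayload.methodName="io.k8s.core.v1.pods.create"
      AND NOT protoPayload.authenticationInfo.principalEmail="system:kube-controller-manager"'

From there, Stackdriver Monitoring can alert whenever the metric’s count rises above zero.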

These are just some of the scenarios that Cloud Audit Logging can help you with. For more information on using Cloud Audit Logging, check out the documentation. And watch this space for future posts about how to use BigQuery and Cloud Audit Logging together to further identify and minimize unauthorized access.


Take it for a spin


Try Kubernetes Engine today with our generous 12-month free trial and $300 in credits. Spin up a cluster (or a dozen) and experience the difference of running Kubernetes on the cloud built for containers.


Cutting cluster management fees on Google Kubernetes Engine



Today, we're excited to announce that we have eliminated the cluster management fee for Google Kubernetes Engine, our managed Kubernetes service.

We founded the Kubernetes open-source project in 2014, and have remained the leading contributor to it. Internally at Google, we’ve been running globally scaled, production workloads in containers for over a decade. Kubernetes and Kubernetes Engine include the best of what we have learned, including the advanced cluster management features that web-scale production applications require. Today’s announcement makes Kubernetes Engine’s cluster management available at no charge, for any size cluster, effective immediately.

To put this pricing update in context, Kubernetes Engine has always provided the managed master at no charge for clusters of fewer than six nodes. For larger clusters, we also provided the managed master at no charge, but we charged a flat fee of $0.15 per hour to manage the cluster. This flat fee is now eliminated for all cluster sizes. At Google, we’ve found that larger clusters are more efficient, especially when running multiple workloads. So if you were hesitating to create larger clusters, worry no more and scale freely!
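As a back-of-the-envelope example, a six-plus-node cluster running around the clock previously accrued $0.15/hour × 24 hours × 365 days ≈ $1,314 per year in management fees alone; that line item now drops to zero.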


Nodes in the cluster | Old cluster management fee | New cluster management fee (effective immediately)
0 to 5 nodes         | $0                         | $0
6+ nodes             | $0.15 / hour               | $0


That’s great news, but some of you may be wondering what’s included in cluster management. In the context of Google Kubernetes Engine, every cluster includes a master VM that acts as its control plane, and cluster management covers the work of running that control plane: scaling the master, backing up etcd, applying security patches and upgrading to new Kubernetes versions, among other capabilities.



A useful point of comparison is the cost of managing your Kubernetes cluster yourself, either on Google Compute Engine or on another cloud. In a self-managed cluster, you pay for the VM that hosts the master and any resources you need for monitoring, logging and storing its state. Depending on the size of your cluster, moving to Kubernetes Engine could save a decent fraction of your total bill just by saving the cost of the master.

Of course, while dollar savings are nice, we have invested Google engineering in automating cluster management with Kubernetes Engine to save you time and headaches as well. In a self-managed cluster, you're responsible for scaling the master as your cluster grows, and for backing up etcd. You have to keep an eye out for security patches and apply them. To access new Kubernetes features, you have to upgrade the master and cluster yourself. And most likely, cluster repair and scaling are manual. With Google Kubernetes Engine, on the other hand, we take care of all of this complexity at no charge, so you can focus on your business.
“[Google Kubernetes Engine] gives us elasticity and scalable performance for our Kubernetes clusters. It’s fully supported and managed by Google, which makes it more attractive to us than elastic container services from other cloud providers.”
— Arya Asemanfar, Engineering Manager at Mixpanel
We’re committed to raising the bar on Kubernetes’ reliability, cost-effectiveness, ease-of-use and enterprise readiness, and we continue to add advanced management capabilities to Kubernetes Engine. For a preview of what’s next, we invite you to join an early access program for node auto-provisioning, a new cluster management feature that provisions the right type of nodes in your auto-scaling cluster based on the observed behavior of your workloads. To join the early access program, fill out this form.

Deploying Memcached on Kubernetes Engine: tutorial



Memcached is one of the most popular open source, multi-purpose caching systems. It usually serves as a temporary store for frequently used data to speed up web applications and lighten database loads. We recently published a tutorial that shows how to deploy a cluster of distributed Memcached servers on Kubernetes Engine using Kubernetes and Helm.
Memcached has two main design goals:

  • Simplicity: Memcached functions like a large hash table and offers a simple API to store and retrieve arbitrarily shaped objects by key. 
  • Speed: Memcached holds cache data exclusively in random-access memory (RAM), making data access extremely fast.
Memcached is a distributed system that allows its hash table’s capacity to scale horizontally across a pool of servers. Each Memcached server operates in complete isolation and is unaware of the other servers in the pool. Therefore, the routing and load balancing between the servers must be done at the client level.

The tutorial explains how to effectively deploy Memcached servers to Kubernetes Engine, and describes how Memcached clients can proceed to discover the server endpoints and set up load balancing.
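To give a flavor of the approach, the core deployment can be as small as the commands below (a sketch assuming Helm 2 and the community chart repository; the release name mycache is arbitrary):

# Deploy three Memcached replicas via the community Helm chart
helm install stable/memcached --name mycache --set replicaCount=3

# The chart exposes the pods through a headless service, so each server gets
# its own DNS record; listing the endpoints shows what clients will discover
kubectl get endpoints mycache-memcached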

The tutorial also explains how to improve the system by enabling connection pooling with Mcrouter, a powerful open source Memcached proxy. Advanced optimization techniques are also discussed to reduce latency between the proxies and Memcached clients.

Check out the step-by-step tutorial for all the details on this solution. We hope this will inspire you to deploy caching servers to speed up your applications!

Building reliable deployments with Spinnaker, Container Engine and Container Builder


Kubernetes has some amazing primitives to help you deploy your applications, letting Kubernetes handle the heavy lifting of rolling out containerized applications. With Container Engine, you can have your Kubernetes clusters up in minutes, ready for your applications to land on them.

But despite the ease of standing up this fine-tuned deployment engine, there are many things that need to happen before deployments can even start. And once they’ve kicked off, you’ll want to make sure that your deployments have completed safely and in a timely manner.

To fill these gaps, developers often look to tools like Container Builder and Spinnaker to create continuous delivery pipelines.



We recently created a solutions guide that shows you how to build out a continuous delivery pipeline from scratch using Container Builder and Spinnaker. Below is an example continuous delivery pipeline that validates your software, builds it, and then carefully rolls it out to your users:



First, your developers tag your software and push it to a Git repository. When the tagged commit lands in your repository, Container Builder detects the change and begins the process of building and testing your application. Once your tests have passed, an immutable Docker image of your application is tagged and pushed to Container Registry. Spinnaker picks it up from here by detecting that a new Docker image has been pushed to your registry and starting the deployment process.
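The first half of that flow can be expressed as a small Container Builder config; this is a minimal sketch (the image name is illustrative, and $PROJECT_ID and $TAG_NAME are Container Builder's built-in substitutions for the project and the pushed Git tag):

# cloudbuild.yaml: build an immutable, tag-addressed image and push it
# to Container Registry, where Spinnaker will detect it
steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/my-app:$TAG_NAME', '.']
images:
- 'gcr.io/$PROJECT_ID/my-app:$TAG_NAME'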

Spinnaker’s pipeline stages allow you to create complex flows to roll out changes. The example here uses a canary deployment to roll out the software to a small percentage of users, and then runs a functional validation of your application. Once those functional checks are complete in the canary environment, Spinnaker pauses the deployment pipeline and waits for a manual approval before it rolls out the application to the rest of your users. Before approving it, you may want to inspect some key performance indicators, wait for traffic in your application to settle or manually validate the canary environment. Once you’re satisfied with the changes, you can approve the release and Spinnaker completes rolling out your software.

As you can imagine, this exact flow won’t work for everyone. Thankfully, Spinnaker and Container Builder give you flexible and granular stages that allow you to automate your release process while mapping it to the needs of your organization.

Get started by checking out the Spinnaker solution. Or visit the documentation to learn more about Spinnaker’s pipeline stages.


Customizing Stackdriver Logs for Container Engine with Fluentd


Many Google Cloud Platform (GCP) users are now migrating production workloads to Container Engine, our managed Kubernetes environment. Container Engine supports Stackdriver logging on GCP by default, using Fluentd under the hood to send your logs to Stackdriver.


You may also want to fully customize your Container Engine cluster’s Stackdriver logs with additional logging filters. If that describes you, check out this tutorial where you’ll learn how you can configure Fluentd in Container Engine to apply additional logging filters prior to sending your logs to Stackdriver.
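As a taste of what the tutorial covers, a filter along these lines drops noisy health-check lines before they ever leave the cluster (a sketch; the match tag and pattern are assumptions for illustration):

# Illustrative Fluentd filter: exclude container log lines mentioning
# /healthz before they are shipped to Stackdriver
<filter kubernetes.**>
  @type grep
  <exclude>
    key log
    pattern /healthz/
  </exclude>
</filter>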

Google Container Engine – Kubernetes 1.8 takes advantage of the cloud built for containers



Next week, we will roll out Kubernetes 1.8 to Google Container Engine for early access customers. In addition, we are advancing significant new functionality in Google Cloud to give Container Engine customers a great experience across Kubernetes releases. As a result, Container Engine customers get new features that are only available on Google Cloud Platform, for example: highly available clusters, cluster auto-scaling and auto-repair, GPU hardware support, container-native networking and more.

Since we founded Kubernetes back in 2014, Google Cloud has been the leading contributor to the Kubernetes open source project in every release, including 1.8. We test, stage and roll out Kubernetes on Google Cloud, and the same team that writes it supports it, ensuring you receive the latest innovations faster without risk of compatibility breaks or support hassles.

Let’s take a look at the new Google Cloud enhancements that make Kubernetes run so well.

Speed and automation 

Earlier this week we announced that Google Compute Engine, Container Engine and many other GCP services have moved from per-minute to per-second billing. We also lowered the minimum run charge to one minute from 10 minutes, giving you even finer granularity so you only pay for what you use.

Many of you appreciate how quickly you can spin up a cluster on Container Engine. We’ve made it even faster, improving cluster startup time by 45%, so you’re up and running sooner and better able to take advantage of the lower minimum run charge. These improvements also apply to scaling your existing node pools.

A long-standing ask has been high availability masters for Container Engine. We are pleased to announce early access support for high availability, multi-master Container Engine clusters, which increase our SLO to 99.99% uptime. You can elect to run your Kubernetes masters and nodes in up to three zones within a region for additional protection from zonal failures. Container Engine seamlessly shifts load away from failed masters and nodes when needed. Sign up here to try out high availability clusters.

In addition to speed and simplicity, Container Engine automates Kubernetes in production, giving developers choice, and giving operators peace of mind. We offer several powerful Container Engine automation features:

  • Node Auto-Repair is in beta and opt-in. Container Engine can automatically repair your nodes using the Kubernetes Node Problem Detector to find common problems and proactively repair nodes and clusters. 
  • Node Auto-Upgrade is generally available and opt-in. Cluster upgrades are a critical Day 2 task, and to give you automation with full control, we now offer Maintenance Windows (beta) to specify when you want Container Engine to auto-upgrade your masters and nodes. 
  • Custom metrics on the Horizontal Pod Autoscaler will soon be in beta so you can scale your pods on metrics other than CPU utilization. 
  • Cluster Autoscaling is generally available, with performance improvements enabling up to 1,000 nodes (with up to 30 pods in each node), as well as letting you specify a minimum and maximum number of nodes for your cluster. It automatically grows or shrinks your cluster depending on workload demands, as sketched below. 
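Several of these automation features are opt-in flags at cluster or node-pool creation time. Here is a hedged sketch (the cluster and pool names are hypothetical, and beta features may require the gcloud beta track):

# Create a node pool with auto-repair, auto-upgrade and autoscaling enabled
gcloud beta container node-pools create autopool \
    --cluster=my-cluster \
    --enable-autorepair \
    --enable-autoupgrade \
    --enable-autoscaling --min-nodes=1 --max-nodes=10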

Container-native networking - Container Engine exclusive! - only on GCP

Container Engine now takes better advantage of GCP’s unique, software-defined network with first-class Pod IPs and multi-cluster load balancing.

  • Aliased IP support is in beta. With aliased IP support, you can take advantage of several network enhancements and features, including support for connecting Container Engine clusters over a Peered VPC. Aliased IPs are available for new clusters only; support for migrating existing clusters will be added in an upcoming release. 
  • Multi-cluster ingress will soon be in alpha. You will be able to construct highly available, globally distributed services by easily setting up Google Cloud Load Balancing to serve your end users from the closest Container Engine cluster. To apply for access, please fill out this form. 
  • Shared VPC support will soon be in alpha. You will be able to create Container Engine clusters on a VPC shared by multiple projects in your cloud organization. To apply for access, please fill out this form. 


Machine learning and hardware acceleration

Machine learning, data analytics and Kubernetes work especially well together on Google Cloud. Container Engine with GPUs turbocharges compute-intensive applications like machine learning, image processing, artificial intelligence and financial modeling. This release brings you managed CUDA-as-a-Service in containers. Big data is also better on Container Engine with new features that make GCP storage accessible from Spark on Kubernetes.

  • NVIDIA Tesla P100 GPUs are available in alpha clusters. In addition to the NVIDIA Tesla K80, you can now create a node with up to 4 NVIDIA P100 GPUs. P100 GPUs can accelerate your workloads by up to 10x compared to the K80! If you are interested in alpha testing your CUDA models in Container Engine, please sign up for the GPU alpha. 
  • Cloud Storage is now accessible from Spark. Spark on Kubernetes can now communicate with Google BigQuery and Google Cloud Storage as data sources and sinks, using the bigdata-interop connectors. 
  • CronJobs are now in beta, so you can schedule cron jobs such as data processing pipelines to run on a given schedule in your production clusters! A minimal example follows this list. 
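Here is a minimal CronJob sketch against the batch/v1beta1 API that ships with Kubernetes 1.8 (the name, schedule and image are illustrative):

# Run a hypothetical data-processing job every night at 2am
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: nightly-etl
spec:
  schedule: "0 2 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: etl
            image: gcr.io/my-project/etl:latest
          restartPolicy: OnFailure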

Extensibility 

As more enterprises use Container Engine, we are actively improving extensibility so you can match Container Engine to your environment and standards.


Security and reliability 

We designed Container Engine with enterprise security and reliability in mind. This release adds several new enhancements.

  • Role Based Access Control (RBAC) is now generally available. This feature allows a cluster administrator to specify fine-grained policies describing which users, groups, and service accounts are allowed to perform which operations on which API resources. 
  • Network Policy Enforcement using Calico is in beta. Starting from Kubernetes 1.7.6, you can help secure your Container Engine cluster with network policy pod-to-pod ingress rules. Kubernetes 1.8 adds additional support for CIDR-based rules, allowing you to whitelist access to resources outside of your Kubernetes cluster (e.g., VMs, hosted services, and even public services), so that you can integrate your Kubernetes application with other IT services and investments. Additionally, you can now specify pod-to-pod egress rules, providing the tighter controls needed to ensure service integrity (see the sketch after this list). Learn more here. 
  • Node Allocatable is generally available. Container Engine includes the Kubernetes Node Allocatable feature for more accurate resource management, providing higher node stability and reliability by protecting node components from out-of-resource issues. 
  • Priority / Preemption is in alpha clusters. Container Engine implements Kubernetes Priority and Preemption so you can assign pods to priority levels, preempting lower-priority pods to make room for higher-priority ones when there are more workloads ready to run on the cluster than there are resources available. 
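To make the ingress rules concrete, here is a hedged NetworkPolicy sketch (the labels and port are hypothetical): it admits traffic to app=api pods only from app=frontend pods on TCP 8080.

# Illustrative NetworkPolicy: only frontend pods may reach api pods
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-allow-frontend
spec:
  podSelector:
    matchLabels:
      app: api
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
    ports:
    - protocol: TCP
      port: 8080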

Enterprise-ready container operations - monitoring and management designed for Kubernetes 

In Kubernetes 1.7, we added view-only workload, networking, and storage views to the Container Engine user interface. In 1.8, we display even more information, enable more operational and development tasks without having to leave the UI, and improve integration with Stackdriver and Cloud Shell.



The following features are all generally available:

  • Easier configurations: You can now view and edit your YAML files directly in the UI. We also added easy-to-use shortcuts for the most common user actions, like rolling updates or scaling a deployment.
  • Node information details: Cluster view now shows details such as node health status, relevant logs, and pod information so you can easily troubleshoot your clusters and nodes. 
  • Stackdriver Monitoring integration: Your workload views now include charts showing CPU, memory, and disk usage. We also link to corresponding Stackdriver pages for a more in-depth look. 
  • Cloud Shell integration: You can now generate and execute exact kubectl commands directly in the browser. No need to manually switch context between multiple clusters and namespaces or copy and paste. Just hit enter! 
  • Cluster recommendations: Recommendations in cluster views suggest ways that you can improve your cluster, for example, turning on autoscaling for underutilized clusters or upgrading nodes for version alignment. 

In addition, Audit Logging is available to early access customers. This feature enables you to view your admin activity and data access as part of Cloud Audit Logging. Please complete this form to take part in the Audit Logging early access program.


Container Engine everywhere 

Container Engine customers are global. To keep up with demand, we’ve expanded our global capacity to include our latest GCP regions: Frankfurt (europe-west3), Northern Virginia (us-east4) and São Paulo (southamerica-east1). With these new regions, Container Engine is now available in a dozen locations around the world, from Oregon to Belgium to Sydney.



Customers of all sizes have been benefiting from containerizing their applications and running them on Container Engine. Here are a couple of recent examples:

Mixpanel, a popular product analytics company, processes 5 trillion data points every year. To keep performance high, Mixpanel uses Container Engine to automatically scale resources.

“All of our applications and our primary database now run on Google Container Engine. Container Engine gives us elasticity and scalable performance for our Kubernetes clusters. It’s fully supported and managed by Google, which makes it more attractive to us than elastic container services from other cloud providers,” says Arya Asemanfar, Engineering Manager at Mixpanel. 

RealMassive, a provider of real-time commercial real estate information, was able to cut its cloud hosting costs in half by moving to microservices on Container Engine.

“What it comes down to for us is speed-to-market and cost. With Google Cloud Platform, we can confidently release services multiple times a day and launch new markets in a day. We’ve also reduced our cloud hosting costs by 50% by moving to microservices on Google Container Engine,” says Jason Vertrees, CTO at RealMassive. 

Bitnami, an application package and deployment platform, shows you how to use Container Engine networking features to create a private Kubernetes cluster that enforces service privacy so that your services are available internally but not to the outside world.

Try it today! 

In a few days, all Container Engine customers will have access to Kubernetes 1.8 in alpha clusters. These new updates will help even more businesses run Kubernetes in production to get the most from their infrastructure and application delivery. If you want to be among the first to access Kubernetes 1.8 on your production clusters, please join our early access program.

You can find the complete list of new features in the Container Engine release notes. For more information, visit our website or sign up for our free trial.