Tag Archives: Containers & Kubernetes

Kubernetes best practices: mapping external services



Editor’s note: Today is the sixth installment in a seven-part video and blog series from Google Developer Advocate Sandeep Dinesh on how to get the most out of your Kubernetes environment.

If you’re like most Kubernetes users, chances are you use services that live outside your cluster. For example, maybe you use the Twilio API to send text messages, or maybe the Google Cloud Vision API to do image analysis.

If your applications in all of your environments connect to the same external endpoint, and you have no plans to bring the external service into your Kubernetes cluster, it is perfectly fine to use the external service endpoint directly in your code. However, there are many scenarios where this is not the case.

A good example of this is databases. While some cloud-native databases such as Cloud Firestore or Cloud Spanner use a single endpoint for all access, most databases have separate endpoints for different instances.

At this point, you may be thinking that a good solution to finding the endpoint is to use ConfigMaps. Simply store the endpoint address in a ConfigMap, and use it in your code as an environment variable. While this solution works, there are a few downsides. You need to modify your deployment to include the ConfigMap and write additional code to read from the environment variables. But most importantly, if the endpoint address changes, you may need to restart all running containers to get the updated endpoint address.
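For reference, here is a minimal sketch of that ConfigMap approach (the names db-config and DB_ENDPOINT, and the address shown, are placeholders rather than part of the setup described here):
kind: ConfigMap
apiVersion: v1
metadata:
  name: db-config
data:
  endpoint: "10.240.0.4:27017"
---
# In the Deployment’s container spec, read the value as an environment variable:
env:
- name: DB_ENDPOINT
  valueFrom:
    configMapKeyRef:
      name: db-config
      key: endpoint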

In this episode of “Kubernetes best practices”, let’s learn how to leverage Kubernetes’ built-in service discovery mechanisms for services running outside the cluster, just like you can for services inside the cluster! This gives you parity across your dev and prod environments, and if you eventually move the service inside the cluster, you don’t have to change your code at all.

Scenario 1: Database outside cluster with IP address

A very common scenario is when you are hosting your own database, but doing so outside the cluster, for example on a Google Compute Engine instance. This is very common if you run some services inside Kubernetes and some outside, or need more customization or control than Kubernetes allows.

Hopefully, at some point, you can move all services inside the cluster, but until then you are living in a hybrid world. Thankfully, you can use static Kubernetes services to ease some of the pain.

In this example, I created a MongoDB server using Cloud Launcher. Because it is created in the same network (or VPC) as the Kubernetes cluster, it can be accessed using its high-performance internal IP address. In Google Cloud, this is the default setup, so there is nothing special you need to configure.

Now that we have the IP address, the first step is to create a service:
kind: Service
apiVersion: v1
metadata:
 name: mongo
spec:
 type: ClusterIP
 ports:
 - port: 27017
   targetPort: 27017
You might notice there are no Pod selectors for this service. This creates a service, but it doesn’t know where to send the traffic. Because there is no selector, you can manually create an Endpoints object that will receive traffic from this service.

kind: Endpoints
apiVersion: v1
metadata:
 name: mongo
subsets:
 - addresses:
     - ip: 10.240.0.4
   ports:
     - port: 27017
You can see that the Endpoints object manually defines the IP address for the database, and it uses the same name as the service. Kubernetes uses all the IP addresses defined in the Endpoints object as if they were regular Kubernetes Pods. Now you can access the database with a simple connection string:
mongodb://mongo
No need to use IP addresses in your code at all! If the IP address changes in the future, you can update the Endpoints object with the new IP address, and your applications won’t need to make any changes.

Scenario 2: Remotely hosted database with URI

If you are using a hosted database service from a third party, chances are they give you a uniform resource identifier (URI) that you can use to connect. If they give you an IP address, you can use the method in Scenario 1.

In this example, I have two MongoDB databases hosted on mLab. One of them is my dev database, and the other is production.

The connection strings for these databases are as follows:
mongodb://<dbuser>:<dbpassword>@ds149763.mlab.com:49763/dev
mongodb://<dbuser>:<dbpassword>@ds145868.mlab.com:45868/prod
mLab gives each database its own URI and port, and you can see that both differ between the dev and prod databases. Let’s use Kubernetes to create an abstraction layer over these differences. In this example, let’s connect to the dev database.

You can create an “ExternalName” Kubernetes service, which gives you a static Kubernetes service that redirects traffic to the external service. This service does a simple CNAME redirection at the DNS level, so there is very minimal impact on performance.

The YAML for the service looks like this:
kind: Service
apiVersion: v1
metadata:
 name: mongo
spec:
 type: ExternalName
 externalName: ds149763.mlab.com
Now, you can use a much simpler connection string:
mongodb://<dbuser>:<dbpassword>@mongo:<port>/dev
Because “ExternalName” uses CNAME redirection, it can’t do port remapping. This might be okay for services with static ports, but unfortunately it falls short in this example, where the port is dynamic. mLab’s free tier gives you a dynamic port number and you cannot change it. This means you need a different connection string for dev and prod.

However, if you can get the IP address, then you can do port remapping as I will explain in the next section.

Scenario 3: Remotely hosted database with URI and port remapping

While the CNAME redirect works great for services with the same port for each environment, it falls short in scenarios where the different endpoints for each environment use different ports. Thankfully we can work around that using some basic tools.

The first step is to get the IP address from the URI.

If you run the nslookup, host, or ping command against the hostname in the URI, you can get the IP address of the database.
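For example, using the dev hostname from the connection string above (the address you get back will be different for your own database):
nslookup ds149763.mlab.com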

You can now create a service that remaps the mLab port and an endpoint for this IP address.
kind: Service
apiVersion: v1
metadata:
 name: mongo
spec:
 ports:
 - port: 27017
   targetPort: 49763
---
kind: Endpoints
apiVersion: v1
metadata:
 name: mongo
subsets:
 - addresses:
     - ip: 35.188.8.12
   ports:
     - port: 49763
Note: A hostname might resolve to multiple IP addresses for DNS-based load balancing, so this method can be risky if the IP addresses change! If you get multiple IP addresses from the above command, you can include all of them in the Endpoints YAML, and Kubernetes will load balance traffic across all of them.
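A sketch of what that multi-address Endpoints object could look like (the second IP address is purely hypothetical):
kind: Endpoints
apiVersion: v1
metadata:
  name: mongo
subsets:
  - addresses:
      - ip: 35.188.8.12
      - ip: 35.188.8.13  # hypothetical second address returned by DNS
    ports:
      - port: 49763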

With this, you can connect to the remote database without needing to specify the port. The Kubernetes service does the port remapping transparently!
mongodb://<dbuser>:<dbpassword>@mongo/dev

Conclusion

Mapping external services to internal ones gives you the flexibility to bring these services into the cluster in the future while minimizing refactoring efforts. Even if you don’t plan to bring them in today, you never know what tomorrow might bring! Additionally, it makes it easier to manage and understand which external services your organization is using.

If the external service has a valid domain name and you don’t need port remapping, then using the “ExternalName” service type is an easy and quick way to map the external service to an internal one. If you don’t have a domain name or need to do port remapping, simply add the IP addresses to an endpoint and use that instead.

Going to Google Cloud Next18? Stop by to meet me and other Kubernetes team members in the "Meet the Experts" zone! Hope to see you there!

Beyond CPU: horizontal pod autoscaling with custom metrics in Google Kubernetes Engine



Many customers of Kubernetes Engine, especially enterprises, need to autoscale their environments based on more than just CPU usage—for example, queue length or concurrent persistent connections. In Kubernetes Engine 1.9 we started adding features to address this, and today, with the latest beta release of Horizontal Pod Autoscaler (HPA) on Kubernetes Engine 1.10, you can configure your deployments to scale horizontally in a variety of ways.

To walk you through your different horizontal scaling options, meet Barbara, a DevOps engineer working at a global video-streaming company. Barbara runs her environment on Kubernetes Engine, including the following microservices:
  • A video transcoding service that processes newly uploaded videos
  • A Google Cloud Pub/Sub queue for the list of videos that the transcoding service needs to process
  • A video-serving frontend that streams videos to users
A high-level diagram of Barbara’s application.

To make sure she meets the service level agreement for the latency of processing the uploads (which her company defines as the total turnaround time for an uploaded file), Barbara configures the transcoding service to scale horizontally based on the queue length—adding more replicas when there are more videos to process, or removing replicas and saving money when the queue is short. In Kubernetes Engine 1.10 she accomplishes that by using the new ‘External’ metric type when configuring the Horizontal Pod Autoscaler. You can read more about this here.

apiVersion: autoscaling/v2beta1                                                 
kind: HorizontalPodAutoscaler                                                   
metadata:                                                                       
  name: transcoding-worker                                                                    
  namespace: video                                                            
spec:                                                                           
  minReplicas: 1                                                                
  maxReplicas: 20                                                                
  metrics:                                                                      
  - external:                                                                      
      metricName: pubsub.googleapis.com|subscription|num_undelivered_messages   
      metricSelector:                                                           
        matchLabels:                                                            
          resource.labels.subscription_id: transcoder_subscription                            
      targetAverageValue: "10"                                                   
    type: External                                                              
  scaleTargetRef:                                                               
    apiVersion: apps/v1                                              
    kind: Deployment                                                            
    name: transcoding-worker
To handle scaledowns correctly, Barbara also makes sure to set graceful termination periods of pods that are long enough to allow any transcoding already happening on pods to complete. She also writes her application to stop processing new queue items after it receives the SIGTERM termination signal from Kubernetes Engine.
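A minimal sketch of how that might look in the transcoding worker’s Deployment (the 600-second value is an assumption; pick a period that covers your longest transcode):
spec:
  template:
    spec:
      # Give in-flight transcodes time to finish after SIGTERM is received
      terminationGracePeriodSeconds: 600
      containers:
      - name: transcoding-worker
        image: gcr.io/example/transcoder:latest  # placeholder image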
A high-level diagram of Barbara’s application showing the scaling bottleneck.

Once the videos are transcoded, Barbara needs to ensure great viewing experience for her users. She identifies the bottleneck for the serving frontend: the number of concurrent persistent connections that a single replica can handle. Each of her pods already exposes its current number of open connections, so she configures the HPA object to maintain the average value of open connections per pod at a comfortable level. She does that using the Pods custom metric type.

apiVersion: autoscaling/v2beta1                                                 
kind: HorizontalPodAutoscaler                                                   
metadata:                                                                       
  name: frontend                                                                    
  namespace: video                                                            
spec:                                                                           
  minReplicas: 4                                                                
  maxReplicas: 40                                                                
  metrics:  
  - type: Pods
    pods:
      metricName: open_connections
      targetAverageValue: 100                                                                                                                            
  scaleTargetRef:                                                               
    apiVersion: apps/v1                                              
    kind: Deployment                                                            
    name: frontend
To scale based on the number of concurrent persistent connections as intended, Barbara also configures readiness probes such that any saturated pods are temporarily removed from the service until their situation improves. She also ensures that the streaming client can quickly recover if its current serving pod is scaled down.
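A sketch of such a readiness probe on the frontend containers (the /healthz path, port, and thresholds are assumptions; the endpoint would report failure while the pod is saturated):
readinessProbe:
  httpGet:
    path: /healthz   # hypothetical endpoint that fails while the pod is saturated
    port: 8080
  periodSeconds: 5
  failureThreshold: 2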

It is worth noting here that her pods expose the open_connections metric as an endpoint for Prometheus to monitor. Barbara uses the prometheus-to-sd sidecar to make those metrics available in Stackdriver. To do that, she adds the following YAML to her frontend deployment config. You can read more about different ways to export metrics and use them for autoscaling here.

containers:
  ...
  - name: prometheus-to-sd
    image: gcr.io/google-containers/prometheus-to-sd:v0.2.6
    command:
    - /monitor
    - --source=:http://localhost:8080
    - --stackdriver-prefix=custom.googleapis.com
    - --pod-id=$(POD_ID)
    - --namespace-id=$(POD_NAMESPACE)
    env:
    - name: POD_ID
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: metadata.uid
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
Recently, Barbara’s company introduced a new feature: streaming live videos. This introduces a new bottleneck to the serving frontend. It now needs to transcode some streams in real time, which consumes a lot of CPU and decreases the number of connections that a single replica can handle.
A high-level diagram of Barbara’s application showing the new bottleneck due to CPU intensive live transcoding.
To deal with that, Barbara uses an existing feature of the Horizontal Pod Autoscaler to scale based on multiple metrics at the same time—in this case both the number of persistent connections and CPU consumption. HPA computes a desired replica count for each metric and uses the larger of the two to trigger autoscaling:

apiVersion: autoscaling/v2beta1                                                 
kind: HorizontalPodAutoscaler                                                   
metadata:                                                                       
  name: frontend                                                                    
  namespace: video                                                            
spec:                                                                           
  minReplicas: 4                                                                
  maxReplicas: 40                                                                
  metrics:  
  - type: Pods
    pods:
      metricName: open_connections
      targetAverageValue: 100
  - type: Resource
    resource:
      name: cpu
      targetAverageUtilization: 60                                                                                                                        
  scaleTargetRef:                                                               
    apiVersion: apps/v1                                              
    kind: Deployment                                                            
    name: frontend
These are just some of the scenarios that HPA on Kubernetes can help you with.

Take it for a spin

Try Kubernetes Engine today with our generous 12-month free trial of $300 credits. Spin up a cluster (or a dozen) and experience the difference of running Kubernetes on Google Cloud, the cloud built for containers. And watch this space for future posts about how to use Cluster Autoscaler and Horizontal Pod Autoscaler together to make the most out of Kubernetes Engine.

Get higher availability with Regional Persistent Disks on Google Kubernetes Engine



Building highly available stateful applications on Kubernetes has been a challenge for a long time. As such, many enterprises have to write complex application logic on top of Kubernetes APIs for running applications such as databases and distributed file systems.

Today, we’re excited to announce the beta launch of Regional Persistent Disks (Regional PD) for Kubernetes Engine, making it easier for organizations of all sizes to build and run highly available stateful applications in the cloud. While Persistent Disks (PD) are zonal resources, and applications built on top of PDs can become unavailable in the event of a zonal failure, Regional PDs provide network-attached block storage with synchronous replication of data between two zones in a region. This approach maximizes application availability without sacrificing consistency. Regional PD automatically handles transient storage unavailability in a zone, and provides an API to facilitate cross-zone failover (learn more about this in the documentation).

Regional PD has native integration with the Kubernetes master that manages health monitoring and failover to the secondary zone in case of an outage in the primary zone. With a Regional PD, you also take advantage of replication at the storage layer, rather than worrying about application-level replication. This offers a convenient building block for implementing highly available solutions on Kubernetes Engine, and can provide cross-zone replication to existing legacy services. Internally, for instance, we used Regional PD to implement high availability in Google Cloud SQL for Postgres.

To understand the power of Regional PD, imagine you want to deploy WordPress with MySQL in Kubernetes. Before Regional PD, if you wanted to build an HA configuration, you needed to write complex application logic, typically using custom resources, or use a commercial replication solution. Now, simply use Regional PD as the storage backend for WordPress and the MySQL database. Because of the block-level replication, the data is always present in another zone, so you don’t need to worry if there is a zonal outage.

With Regional PD, you can build a two-zone HA solution by simply changing the storage class definition in the dynamic provisioning specification—no complex Kubernetes controller management required!

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: repd
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
  replication-type: regional-pd
  zones: us-central1-a, us-central1-b
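A PersistentVolumeClaim can then request storage from this class; here is a minimal sketch (the claim name and size are placeholders):
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mysql-data
spec:
  storageClassName: repd
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 200Gi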
Alternatively, you can manually provision a Regional PD using the gcloud command line tool:
gcloud beta compute disks create gce-disk-1 \
    --region europe-west1 \
    --replica-zones europe-west1-b,europe-west1-c
Regional PDs are now available in Kubernetes Engine clusters. To learn more, check out the documentation, Kubernetes guide and sample solution using Regional PD.

Introducing Shared VPC for Google Kubernetes Engine



Containers have come a long way, from the darling of cloud-native startups to the de-facto way to deploy production workloads in large enterprises. But as containers grow into this new role, networking continues to pose challenges in terms of management, deployment and scaling.

We are excited to introduce Shared Virtual Private Cloud (VPC) to help Google Kubernetes Engine tackle the scale and reliability needs of container networking for enterprise workloads.

Shared VPC for better control of enterprise network resources
In large enterprises, you often need to place different departments into different projects, for purposes of budgeting, access control, accounting, etc. And while isolation and network segmentation are recommended best practices, network segmentation can pose a challenge for sharing resources. In Compute Engine environments, Shared VPC networks let enterprise administrators give multiple projects permission to communicate via a single, shared virtual network without giving up control of critical resources such as firewalls. Now, with the general availability of Kubernetes Engine 1.10, we are extending the Shared VPC model to Kubernetes Engine clusters running version 1.8 and above, connecting clusters from multiple projects to a common VPC network.

Before Shared VPC, it was possible to achieve this setup in a cumbersome, less secure way, by bridging projects, each in its own VPC, with Cloud VPNs. The problem with this approach was that it required N*(N-1)/2 connections to obtain full connectivity between all projects. An additional challenge was that the network for each cluster wasn’t fully configurable, making it difficult for one project to communicate with another without a NAT gateway in between. Security was another concern, since the organization administrator had no control over firewalls in the other projects.

Now, with Shared VPC, you can overcome these challenges and compartmentalize Kubernetes Engine clusters into separate projects, for the following benefits to your enterprise:
  • Sharing of common resources - Shared VPC makes it easy to use network resources that must be shared across various teams, such as a set of RFC 1918 IP ranges shared across multiple subnets and Kubernetes Engine clusters.
  • Security - Organizations want to leverage more granular IAM roles to separate access to sensitive projects and data. By restricting what individual users can do with network resources, network and security administrators can better protect enterprise assets from inadvertent or deliberate acts that can compromise the network. While a network administrator can set firewalls for every team, a cluster administrator’s responsibilities might be to manage workloads within a project. Shared VPC provides you with centralized control of critical network resources under the organization administrator, while still giving flexibility to the various organization admins to manage clusters in their own projects.
  • Billing - Teams can use projects and separate Kubernetes Engine clusters to isolate their resource usage, which helps with accounting and budgeting needs by letting you view billing for each team separately.
  • Isolation and support for multi-tenant workloads - You can break up your deployment into projects and assign them to the teams working on them, lowering the chances that one team’s actions will inadvertently affect another team’s projects.

Below is an example of how you can map your organizational structure to your network resources using Shared VPC:

Here is what Spotify, one of the many enterprises using Kubernetes Engine, has to say:
“Kubernetes is our preferred orchestration solution for thousands of our backend services because of its capabilities for improved resiliency, features such as autoscaling, and the vibrant open-source community. Shared VPC in Kubernetes Engine is essential for us to be able to use Kubernetes Engine with our many GCP projects."
- Matt Brown, Software Engineer, Spotify

How does Shared VPC work?
The host project contains one or more shared network resources, while the service project(s) map to the different teams or departments in your organization. After setting up the correct IAM permissions for service accounts in both the host and service projects, the cluster admin can instantiate a number of Compute Engine resources in any of the service projects. This way, critical resources like firewalls are centrally managed by the network or security admin, while cluster admins are able to create clusters in the respective service projects.

Shared VPC is built on top of Alias IP. Kubernetes Engine clusters in service projects will need to be configured with a primary CIDR range (from which to draw Node IP addresses), and two secondary CIDR ranges (from which to draw Kubernetes Pod and Service IP addresses). The following diagram illustrates a subnet with the three CIDR ranges from which the clusters in the Shared VPC are carved out.

The Shared VPC IAM permissions model
To get started with Shared VPC, the first step is to set up the right IAM permissions on service accounts. For the cluster admin to be able to create Kubernetes Engine clusters in the service projects, the host project administrator needs to grant the compute.networkUser and container.hostServiceAgentUser roles in the host project, allowing the service project's service accounts to use specific subnetworks and to perform networking administrative actions to manage Kubernetes Engine clusters. For more detailed information on the IAM permissions model, follow along here.
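As a rough sketch, granting those roles with gcloud might look like the following (the project IDs and service account are placeholders; in practice you may prefer to grant compute.networkUser on specific subnets rather than the whole host project):
# Let the service project's GKE service agent use the shared network
gcloud projects add-iam-policy-binding HOST_PROJECT_ID \
    --member serviceAccount:service-SERVICE_PROJECT_NUMBER@container-engine-robot.iam.gserviceaccount.com \
    --role roles/compute.networkUser

gcloud projects add-iam-policy-binding HOST_PROJECT_ID \
    --member serviceAccount:service-SERVICE_PROJECT_NUMBER@container-engine-robot.iam.gserviceaccount.com \
    --role roles/container.hostServiceAgentUser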

Try it out today!
Create a Shared VPC cluster in Kubernetes Engine and get the ease of access and scale for your enterprise workloads. Don’t forget to sign up for the upcoming webinar, 3 reasons why you should run your enterprise workloads on Kubernetes Engine.

Google Kubernetes Engine 1.10 is generally available and ready for the enterprise



Today, we’re excited to announce the general availability of Google Kubernetes Engine 1.10, which lays the foundation for new features to enable greater enterprise usage. Here on the Kubernetes Engine team, we’ve been thinking about challenges such as security, networking, logging, and monitoring that are critical to the enterprise for a long time. Now, in parallel to the GA of Kubernetes Engine 1.10, we are introducing a train of new features to support enterprise use cases. These include:
  • Shared Virtual Private Cloud (VPC) for better control of your network resources
  • Regional Persistent Disks and Regional Clusters for higher availability and stronger SLAs
  • Node Auto-Repair GA, and Custom Horizontal Pod Autoscaler for greater automation
Better yet, these all come with the robust security that Kubernetes Engine provides by default.
Let’s take a look at some of the new features that we will add to Kubernetes Engine 1.10 in the coming weeks.
Networking: global hyperscale network for applications with Shared VPC
Large organizations with several teams prefer to share physical resources while maintaining logical separation of resources between departments. Now, you can deploy your workloads in Google’s global Virtual Private Cloud (VPC) in a Shared VPC model, giving you the flexibility to manage access to shared network resources using IAM permissions while still isolating your departments. Shared VPC lets your organization administrators delegate administrative responsibilities, such as creating and managing instances and clusters, to service project admins while maintaining centralized control over network resources like subnets, routes, and firewalls. Stay tuned for more on Shared VPC support this week, where we’ll demonstrate how enterprise users can separate resources owned by projects while allowing them to communicate with each other over a common internal network.

Storage: high availability with Regional Persistent Disks
To make it easier to build highly available solutions, Kubernetes Engine will provide support for the new Regional Persistent Disk (Regional PD). Regional PD, available in the coming days, provides durable network-attached block storage with synchronous replication of data between two zones in a region. With Regional PDs, you don’t have to worry about application-level replication and can take advantage of replication at the storage layer. This replication offers a convenient building block for implementing highly available solutions on Kubernetes Engine.
Reliability: improved uptime with Regional Clusters, node auto-repair
Regional clusters, to be generally available soon, allow you to create a Kubernetes Engine cluster with a multi-master, highly available control plane that spreads your masters across three zones in a region—an important feature for clusters with higher uptime requirements. Regional clusters also offer a zero-downtime upgrade experience when upgrading Kubernetes Engine masters. In addition to Regional Clusters, the node auto-repair feature is now generally available. Node auto-repair monitors the health of the nodes in your cluster and repairs any that are unhealthy.
Auto-scaling: Horizontal Pod Autoscaling with custom metrics
Our users have long asked for the ability to scale horizontally any way they like. In Kubernetes Engine 1.10, Horizontal Pod Autoscaler supports three different custom metrics types in beta: External (e.g., for scaling based on Cloud Pub/Sub queue length - one of the most requested use cases), Pods (e.g., for scaling based on the average number of open connections per pod) and Object (e.g., for scaling based on Kafka running in your cluster).

Kubernetes Engine enterprise adoption

Since we launched it in 2014, Kubernetes has taken off like a rocket. It is becoming “the Linux of the cloud,” according to Jim Zemlin, Executive Director of the Linux Foundation. Analysts estimate that 54 percent of Fortune 100 companies use Kubernetes across a spectrum of industries including finance, manufacturing, media, and others.
Kubernetes Engine, the first production-grade managed Kubernetes service, has been generally available since 2015. Core-hours for the service have ballooned: in 2017, Kubernetes Engine core-hours grew 9X year over year, supporting a wide variety of applications. Usage of stateful workloads (e.g., databases and key-value stores) has grown since the initial launch in 2016, to over 40 percent of production Kubernetes Engine clusters.
Here is what a few of the enterprises who are using Kubernetes Engine have to say.
Alpha Vertex, a financial services company that delivers advanced analytical capabilities to the financial community, built a Kubernetes cluster of 150 64-core Intel Skylake processors in just 15 minutes and trains 20,000 machine learning models concurrently using Kubernetes Engine.
“Google Kubernetes Engine is like magic for us. It’s the best container environment there is. Without it, we couldn’t provide the advanced financial analytics we offer today. Scaling would be difficult and prohibitively expensive.”
- Michael Bishop, CTO and Co-Founder, Alpha Vertex
Philips Lighting builds lighting products, systems, and services. Philips uses Kubernetes Engine to handle 200 million transactions every day, including 25 million remote lighting commands.
“Google Kubernetes Engine delivers a high-performing, flexible infrastructure that lets us independently scale components for maximum efficiency.”
- George Yianni, Head of Technology, Home Systems, Philips Lighting
Spotify, the digital music service that hosts more than 2 billion playlists and gives consumers access to more than 30 million songs, uses Kubernetes Engine for thousands of backend services.
“Kubernetes is our preferred orchestration solution for thousands of our backend services because of its capabilities for improved resiliency, features such as autoscaling, and the vibrant open source community. Shared VPC in Kubernetes Engine is essential for us to be able to use Kubernetes Engine with our many GCP projects.”
- Matt Brown, Software Engineer, Spotify
Get started today and let Google Cloud manage your enterprise applications on Kubernetes Engine. To learn more about Kubernetes Engine, join us for a deep dive into the Kubernetes 1.10 enterprise features in Kubernetes Engine by signing up for our upcoming webinar, 3 reasons why you should run your enterprise workloads on Kubernetes Engine.

Kubernetes best practices: terminating with grace



Editor’s note: Today is the second installment in a seven-part video and blog series from Google Developer Advocate Sandeep Dinesh on how to get the most out of your Kubernetes environment.

When it comes to distributed systems, handling failure is key. Kubernetes helps with this by utilizing controllers that can watch the state of your system and restart services that have stopped performing. On the other hand, Kubernetes can often forcibly terminate your application as part of the normal operation of the system.

In this episode of “Kubernetes Best Practices,” let’s take a look at how you can help Kubernetes do its job more efficiently and reduce the downtime your applications experience.


In the pre-container world, most applications ran on VMs or physical machines. If an application crashed, it took quite a while to boot up a replacement. If you only had one or two machines to run the application, this kind of time-to-recovery was unacceptable.

Instead, it became common to use process-level monitoring to restart applications when they crashed. If the application crashed, the monitoring process could capture the exit code and instantly restart the application.

With the advent of systems like Kubernetes, process monitoring systems are no longer necessary, as Kubernetes handles restarting crashed applications itself. Kubernetes uses an event loop to make sure that resources such as containers and nodes are healthy. This means you no longer need to manually run these monitoring processes. If a resource fails a health check, Kubernetes automatically spins up a replacement.

Check out this episode to see how you can set up custom health checks for your services.

The Kubernetes termination lifecycle

Kubernetes does a lot more than monitor your application for crashes. It can create more copies of your application to run on multiple machines, update your application, and even run multiple versions of your application at the same time!

This means there are many reasons why Kubernetes might terminate a perfectly healthy container. If you update your deployment with a rolling update, Kubernetes slowly terminates old pods while spinning up new ones. If you drain a node, Kubernetes terminates all pods on that node. If a node runs out of resources, Kubernetes terminates pods to free those resources (check out this previous post to learn more about resources).

It’s important that your application handle termination gracefully so that there is minimal impact on the end user and the time-to-recovery is as fast as possible!

In practice, this means your application needs to handle the SIGTERM message and begin shutting down when it receives it. This means saving all data that needs to be saved, closing down network connections, finishing any work that is left, and other similar tasks.

Once Kubernetes has decided to terminate your pod, a series of events takes place. Let’s look at each step of the Kubernetes termination lifecycle.

1 - Pod is set to the “Terminating” State and removed from the endpoints list of all Services

At this point, the pod stops getting new traffic. Containers running in the pod will not be affected.

2 - preStop Hook is executed

The preStop Hook is a special command or HTTP request that is sent to the containers in the pod.

If your application doesn’t gracefully shut down when receiving a SIGTERM you can use this hook to trigger a graceful shutdown. Most programs gracefully shut down when receiving a SIGTERM, but if you are using third-party code or are managing a system you don’t have control over, the preStop hook is a great way to trigger a graceful shutdown without modifying the application.
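A sketch of a preStop hook that gives a hypothetical app a chance to drain before it is shut down (the container name, image, and command are placeholders):
containers:
- name: my-app
  image: my-app:latest   # placeholder image
  lifecycle:
    preStop:
      exec:
        command: ["/bin/sh", "-c", "/app/drain-connections.sh"]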

3 - SIGTERM signal is sent to the pod

At this point, Kubernetes will send a SIGTERM signal to the containers in the pod. This signal lets the containers know that they are going to be shut down soon.

Your code should listen for this event and start shutting down cleanly at this point. This may include stopping any long-lived connections (like a database connection or WebSocket stream), saving the current state, or anything like that.

Even if you are using the preStop hook, it is important that you test what happens to your application if you send it a SIGTERM signal, so you are not surprised in production!

4 - Kubernetes waits for a grace period

At this point, Kubernetes waits for a specified time called the termination grace period. By default, this is 30 seconds. It’s important to note that this happens in parallel to the preStop hook and the SIGTERM signal. Kubernetes does not wait for the preStop hook to finish.

If your app finishes shutting down and exits before the terminationGracePeriod is done, Kubernetes moves to the next step immediately.

If your pod usually takes longer than 30 seconds to shut down, make sure you increase the grace period. You can do that by setting the terminationGracePeriodSeconds option in the Pod YAML. For example, to change it to 60 seconds:
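A minimal Pod spec sketch with the grace period raised to 60 seconds (the pod and container names are placeholders):
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  terminationGracePeriodSeconds: 60
  containers:
  - name: my-app
    image: my-app:latest   # placeholder image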


5 - SIGKILL signal is sent to pod, and the pod is removed


If the containers are still running after the grace period, they are sent the SIGKILL signal and forcibly removed. At this point, all Kubernetes objects are cleaned up as well.

Conclusion

Kubernetes can terminate pods for a variety of reasons, and making sure your application handles these terminations gracefully is core to creating a stable system and providing a great user experience.

Improving application availability with Alias IPs, now with hot standby



High availability and redundancy are essential features for a cloud deployment. On Google Cloud Platform (GCP), Alias IPs allow you to configure secondary IPs or IP ranges on your virtual machine (VM) instances, for a secure and highly scalable way to deliver traffic to your applications. Today, we’re excited to announce that you can now dynamically add and remove alias IP ranges for existing, running VMs, so that you can migrate your applications from one VM to another in the event of a software or machine failure.

In the past, you could deploy highly available applications on GCP by using static routes that point to a hosted Virtual IP (VIP), plus adjusting the next-hop VM of that VIP destination based on availability of the application hosting the VM. Now, Alias IP ranges support hot standby availability deployments, including multiple standbys for a single VM (one-to-many), as well as multiple standbys for multiple VMs (many-to-many).
With this native support, you can now rely on GCP’s IP address management capabilities to carve out flexible IP ranges for your VMs. This delivers the following benefits over high-availability solutions that use static routes:
  • Improved security: Deployments that use Alias IP allow GCP to apply anti-spoofing checks that validate the source and destination IPs of your traffic. In contrast, static routes require that you disable anti-spoof protection for a VM, which allows traffic with any source or destination to be forwarded.
  • Connectivity through VPN / Google Cloud Interconnect: Highly available application VIPs implemented as Alias IP addresses can be announced by Cloud Router via BGP to an on-premises network connected via VPN or Cloud Interconnect. This is important if you are accessing the highly available application from your on-premises data center.
  • Native access to Google services like Google Cloud Storage, BigQuery and any other managed services from googleapis.com. By using Alias IP, highly available applications get native access to these services, avoiding bottlenecks created by an external NAT proxy.
Let’s take a look at how you can configure floating Alias IPs.

Imagine you need to configure a highly available application that requires machine state to be constantly synced, for example between a database master and slave running on VMs in your network. Using Internal Load Balancing doesn’t help here, since the traffic needs to be sent to only one server. With Alias IPs, you can configure your database to run using a secondary IP on the VM's primary network interface. In the event of a failure, you can dynamically remove this IP from the failed VM and attach it to the new server.

This approach is also useful if an application in your virtual network needs to be accessed across regions, since Internal Load Balancing currently supports only in-region access.

You can use Alias IP from the gcloud command line interface.

To migrate 10.10.0.5 from VM-A to VM-B
a) Remove the IP from VM-A

gcloud compute instances network-interfaces update \
     virtual-machine-a --zone us-central1-a --aliases ""
b) Add the IP to VM-B

gcloud compute instances network-interfaces update \
     virtual-machine-b --zone us-central1-a \
     --aliases "range1:10.10.0.5"
In addition to adding and removing alias IPs from running VMs, you can create up to 10 Alias IP ranges per network interface, and a VM can have up to seven secondary network interfaces attached to other networks.
You can also use Alias IP with applications running within containers and being managed by container orchestration systems such as Kubernetes or Mesos. Click here to learn more about how Kubernetes uses Alias IPs.

Being able to migrate your workloads while they are running goes a long way toward ensuring high availability for your applications. Drop us a line about how you use Alias IPs, and other networking features you’d like to see on GCP.

Exploring container security: Isolation at different layers of the Kubernetes stack



Editor’s note: This is the seventh in a series of blog posts on container security at Google.

To conclude our blog series on container security, today’s post covers isolation, and when containers are appropriate for actually, well... containing. While containers bring great benefits to your development pipeline and provide some resource separation, they were not designed to provide a strong security boundary.

The fact is, there are some cases where you might want to run untrusted code in your environment. Untrusted code lives on a spectrum. On one end you have known bad malware that an engineer is trying to examine and reverse-engineer; and on the other end you might just have a third-party application or tool that you haven't audited yourself. Maybe it’s a project that historically had vulnerabilities and you aren’t quite ready to let it loose in your environment yet. In each of these cases, you don’t want the code to affect the security of your own workloads.

With that said, let’s take a look at what kind of security isolation containers do provide, and, in the event that it’s not enough, where to look for stronger isolation.

Hypervisors provide a security boundary for VMs
Traditionally, you might have put this untrusted code in its own virtual machine, relying on the security of the hypervisor to prevent processes from escaping or affecting other processes. A hypervisor provides a relatively strong security boundary—that is, we don’t expect code to be able to easily cross it by breaking out of a VM. At Google, we use the KVM hypervisor, and put significant effort into ensuring its security.

The level of trust you require for your code is all relative. The more sensitive the data you process, the more you need to be able to trust the software that accesses it. You don’t treat code that doesn’t access user data (or other critical data) the same way you treat code that does—or that’s in the serving path of active user sessions, or that has root cluster access. In a perfect world, you access your most critical data with code you wrote, reviewed for security issues, and ran some security checks against (such as fuzzing). You then verify that all these checks passed before you deploy it. Of course, you may loosen these requirements based on where the code runs, or what it does—the same open-source tool might be insufficiently trusted in a hospital system to examine critical patient information, but sufficiently trusted in a test environment for a game app you’re developing in your spare time.

A ‘trust boundary’ is the point at which your code changes its level of trust (and hence its security requirements), and a ‘security boundary’ is how you enforce these trust boundaries. A security boundary is a set of controls, managed together across all surfaces, to prevent a process at one trust level from elevating its trust level, affecting more trusted processes, or accessing other users’ data. A container is one such security boundary, albeit not a very strong one. This is because, compared to a hypervisor, a native OS container is a larger, more complex security boundary, with more potential vulnerabilities. On the other hand, containers are meant to be run on a minimal OS, which limits the potential surface area for an attack at the OS level. At Google, we aim to protect all trust boundaries with at least two different security boundaries that each need to fail in order to cross a trust boundary.

Layers of isolation in Kubernetes
Kubernetes has several nested layers, each of which provides some level of isolation and security. Building on the container, Kubernetes layers provide progressively stronger isolation— you can start small and upgrade as needed. Starting from the smallest unit and moving outwards, here are the layers of a Kubernetes environment:

  • Container (not specific to Kubernetes): A container provides basic management of resources, but does not isolate identity or the network, and can suffer from a noisy neighbor on the node for resources that are not isolated by cgroups. It provides some security isolation, but only provides a single layer, compared to our desired double layer.
  • Pod: A pod is a collection of containers. A pod isolates a few more resources than a container, including the network. It does so with micro-segmentation using Kubernetes Network Policy, which dictates which pods can speak to one another. At the moment, a pod does not have a unique identity, but the Kubernetes community has made proposals to provide this. A pod still suffers from noisy neighbors on the same host.
  • Node: This is a machine, either physical or virtual. A node includes a collection of pods, and has a superset of the privileges of those pods. A node leverages a hypervisor or hardware for isolation, including for its resources. Modern Kubernetes nodes run with distinct identities, and are authorized only to access the resources required by pods that are scheduled to the node. There can still be attacks at this level, such as convincing the scheduler to assign sensitive workloads to the node. You can use firewall rules to restrict network traffic to the node.
  • Cluster: A cluster is a collection of nodes and a control plane. This is a management layer for your containers. Clusters offer stronger network isolation with per-cluster DNS.
  • Project: A GCP project is a collection of resources, including Kubernetes Engine clusters. A project provides all of the above, plus some additional controls that are GCP-specific, like project-level IAM for Kubernetes Engine and org policies. Resource names, and other resource metadata, are visible up to this layer.
There’s also the Kubernetes Namespace, the fundamental unit for authorization in Kubernetes. A namespace can contain multiple pods. Namespaces provide some control in terms of authorization, via namespace-level RBAC, but don’t try to control resource quota, network, or policies. Namespaces allow you to easily allocate resources to certain processes; these are meant to help you manage how you use your resources, not necessarily prevent a malicious process from escaping and accessing another process’ resources.

Diagram 1: Isolation provided by each layer of Kubernetes

Recently, Google also announced the open-source gVisor project, which provides stronger isolation at the pod level.

Sample scenario: Multi-tenant SaaS workload
In practice, it can be hard to decide what isolation requirements you should have for your workload, and how to enforce them—there isn’t a one-size-fits-all solution. Time to do a little threat modeling.

A common scenario we hear is a developer building a multi-tenant SaaS application running in Kubernetes Engine, in order to help manage and scale their application as needed to meet demand. In this scenario, let’s say we have a SaaS application running its front-end and back-end on Kubernetes Engine, with a back-end database for transaction data, and a back-end database for payment data; plus some open-source code for critical functions such as DNS and secret management.

You might be worried about a noisy (or nosy!) neighbor—that someone else is monopolizing resources you need, and you’re unable to serve your app. Cryptomining is a trendy attack vector these days, and being able to stay up and running even if one part of your infrastructure is affected is important to you. In cases like these, you might want to isolate certain critical workloads at the node layer.

You might be worried about information leaking between your applications. Of course Spacely Sprockets knows that you have other customers, but it shouldn’t be able to find out that Cogswell’s Cogs is also using your application—they’re competitors. In this case, you might want to be careful with your naming, and take care to block access to unauthenticated node ports (with NetworkPolicy), or isolate at the cluster level.
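For example, a default-deny ingress policy in each tenant's namespace is one way to enforce this; a minimal sketch (the namespace name is hypothetical):
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: default-deny-ingress
  namespace: tenant-a   # hypothetical tenant namespace
spec:
  podSelector: {}
  policyTypes:
  - Ingress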

You might also be concerned that critical data, like customer payment data, is sufficiently segmented from access by less trusted workloads. Customer payment data should require different trust levels to access than user-submitted jobs. In this case, you might want to isolate at the cluster level, or run these each in their own sandbox.

So all together, you might have your entire application running in a single project, with different clusters for each environment, and place any highly trusted workload in its own cluster. In addition, you’ll need to make careful resource sharing decisions at the node and pod layers to isolate different customers.

Another common multi-tenant scenario we hear is one where you’re running entirely untrusted code. For example, your users may give you arbitrary code that you run on their behalf. In this case, for a multi-tenant cluster you'll want to investigate sandboxing solutions.

In the end
If you’ve learned one thing from this blog post, it’s that there’s no one right way to configure a Kubernetes environment—the right security isolation settings depend on what you are running, where, who is accessing the data, and how. We hope you enjoyed this series on container security! And while this is the last installment, you can look forward to more information about security best practices, as we continue to make Google Cloud, including Kubernetes Engine, the best place to run containers.

Kubernetes best practices: Resource requests and limits



Editor’s note: Today is the fourth installment in a seven-part video and blog series from Google Developer Advocate Sandeep Dinesh on how to get the most out of your Kubernetes environment.

When Kubernetes schedules a Pod, it’s important that the containers have enough resources to actually run. If you schedule a large application on a node with limited resources, it is possible for the node to run out of memory or CPU resources and for things to stop working!

It’s also possible for applications to take up more resources than they should. The cause could range from a team spinning up more replicas than they need to artificially decrease latency (hey, it’s easier to spin up more copies than to make your code more efficient!) to a bad configuration change that causes a program to go out of control and use 100% of the available CPU. Regardless of whether the issue is caused by a bad developer, bad code, or bad luck, what’s important is that you stay in control.

In this episode of Kubernetes best practices, let’s take a look at how you can solve these problems using resource requests and limits.

Requests and Limits

Requests and limits are the mechanisms Kubernetes uses to control resources such as CPU and memory. Requests are what the container is guaranteed to get. If a container requests a resource, Kubernetes will only schedule it on a node that can give it that resource. Limits, on the other hand, make sure a container never goes above a certain value. The container is only allowed to go up to the limit, and then it is restricted.

It is important to remember that the limit can never be lower than the request. If you try this, Kubernetes will throw an error and won’t let you run the container.

Requests and limits are on a per-container basis. While Pods usually contain a single container, it’s common to see Pods with multiple containers as well. Each container in the Pod gets its own individual limit and request, but because Pods are always scheduled as a group, you need to add the limits and requests for each container together to get an aggregate value for the Pod.

To control what requests and limits a container can have, you can set quotas at the Container level and at the Namespace level. If you want to learn more about Namespaces, see this previous installment from our blog series!

Let’s see how these work.

Container settings

There are two types of resources: CPU and Memory. The Kubernetes scheduler uses these to figure out where to run your pods.

Here are the docs for these resources.

If you are running in Google Kubernetes Engine, the default Namespace already has some requests and limits set up for you.

These default settings are okay for “Hello World”, but it is important to change them to fit your app.

A typical Pod spec for resources might look something like this. This pod has two containers:
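The container names, images, and exact values below are illustrative; they’re chosen so the totals match the numbers discussed next:
kind: Pod
apiVersion: v1
metadata:
  name: frontend               # placeholder name
spec:
  containers:
  - name: app                  # placeholder container
    image: images.example/app:v1
    resources:
      requests:
        cpu: 250m              # ¼ of a core
        memory: 64Mi
      limits:
        cpu: 500m
        memory: 128Mi
  - name: logging-sidecar      # placeholder container
    image: images.example/fluentd:v1
    resources:
      requests:
        cpu: 250m
        memory: 64Mi
      limits:
        cpu: 500m
        memory: 128Mi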

Each container in the Pod can set its own requests and limits, and these are all additive. So in the above example, the Pod has a total request of 500 mCPU and 128 MiB of memory, and a total limit of 1 CPU and 256 MiB of memory.

CPU

CPU resources are defined in millicores. If your container needs two full cores to run, you would put the value “2000m”. If your container only needs ¼ of a core, you would put a value of “250m”.

One thing to keep in mind about CPU requests is that if you put in a value larger than the core count of your biggest node, your pod will never be scheduled. Let’s say you have a pod that needs four cores, but your Kubernetes cluster is made up of dual-core VMs—your pod will never be scheduled!

Unless your app is specifically designed to take advantage of multiple cores (scientific computing and some databases come to mind), it is usually a best practice to keep the CPU request at ‘1’ or below, and run more replicas to scale it out. This gives the system more flexibility and reliability.

It’s when it comes to CPU limits that things get interesting. CPU is considered a “compressible” resource. If your app starts hitting your CPU limits, Kubernetes starts throttling your container. This means the CPU will be artificially restricted, giving your app potentially worse performance! However, it won’t be terminated or evicted. You can use a liveness health check to make sure performance has not been impacted.
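For example, a simple HTTP liveness probe like this (the endpoint, port, and timings are placeholders) catches a container that has become too slow to answer its health check:
kind: Pod
apiVersion: v1
metadata:
  name: app-with-liveness      # placeholder name
spec:
  containers:
  - name: app
    image: images.example/app:v1
    resources:
      limits:
        cpu: 500m              # the limit that throttling enforces
    livenessProbe:
      httpGet:
        path: /healthz         # placeholder health endpoint
        port: 8080
      periodSeconds: 10
      timeoutSeconds: 1        # a heavily throttled container may miss this deadline
If enough consecutive probes fail, the kubelet restarts the container, which at least surfaces the problem.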

Memory

Memory resources are defined in bytes. Normally, you give a mebibyte value for memory (this is basically the same thing as a megabyte), but you can give anything from bytes to petabytes.

Just like CPU, if you put in a memory request that is larger than the amount of memory on your nodes, the pod will never be scheduled.

Unlike CPU resources, memory cannot be compressed. Because there is no way to throttle memory usage, if a container goes past its memory limit it will be terminated. If your pod is managed by a Deployment, StatefulSet, DaemonSet, or another type of controller, then the controller spins up a replacement.

Nodes

It is important to remember that you cannot set requests that are larger than resources provided by your nodes. For example, if you have a cluster of dual-core machines, a Pod with a request of 2.5 cores will never be scheduled! You can find the total resources for Kubernetes Engine VMs here.

Namespace settings

In an ideal world, Kubernetes’ Container settings would be good enough to take care of everything, but the world is a dark and terrible place. People can easily forget to set the resources, or a rogue team can set the requests and limits very high and take up more than their fair share of the cluster.

To prevent these scenarios, you can set up ResourceQuotas and LimitRanges at the Namespace level.

ResourceQuotas

After creating Namespaces, you can lock them down using ResourceQuotas. ResourceQuotas are very powerful, but let’s just look at how you can use them to restrict CPU and Memory resource usage.

A Quota for resources might look something like this:
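The names and the limit values below are illustrative; the request values match the numbers discussed next:
kind: ResourceQuota
apiVersion: v1
metadata:
  name: demo-quota             # placeholder name
  namespace: development       # placeholder namespace
spec:
  hard:
    requests.cpu: 500m
    requests.memory: 100Mi
    limits.cpu: "1"            # illustrative value
    limits.memory: 200Mi       # illustrative value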

Looking at this example, you can see there are four sections. Configuring each of these sections is optional.

requests.cpu is the maximum combined CPU requests in millicores for all the containers in the Namespace. In the above example, you can have 50 containers with 10m requests, five containers with 100m requests, or even one container with a 500m request, as long as the total requested CPU in the Namespace doesn’t exceed 500m!

requests.memory is the maximum combined Memory requests for all the containers in the Namespace. In the above example, you can have 50 containers with 2MiB requests, five containers with 20MiB requests, or even a single container with a 100MiB request, as long as the total requested Memory in the Namespace doesn’t exceed 100MiB!

limits.cpu is the maximum combined CPU limits for all the containers in the Namespace. It’s just like requests.cpu but for the limit.

limits.memory is the maximum combined Memory limits for all containers in the Namespace. It’s just like requests.memory but for the limit.

If you are using a production and development Namespace (in contrast to a Namespace per team or service), a common pattern is to put no quota on the production Namespace and strict quotas on the development Namespace. This allows production to take all the resources it needs in case of a spike in traffic.

LimitRange

You can also create a LimitRange in your Namespace. Unlike a Quota, which looks at the Namespace as a whole, a LimitRange applies to an individual container. This can help prevent people from creating super tiny or super large containers inside the Namespace.

A LimitRange might look something like this:
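The names and values below are illustrative; the four sections match the descriptions that follow:
kind: LimitRange
apiVersion: v1
metadata:
  name: demo-limits            # placeholder name
  namespace: development       # placeholder namespace
spec:
  limits:
  - type: Container
    default:                   # default limits
      cpu: 500m
      memory: 256Mi
    defaultRequest:            # default requests
      cpu: 250m
      memory: 128Mi
    max:                       # highest limit a container may set
      cpu: "1"
      memory: 512Mi
    min:                       # lowest request a container may set
      cpu: 100m
      memory: 64Mi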

Looking at this example, you can see there are four sections. Again, setting each of these sections is optional.

The default section sets up the default limits for a container in a pod. If you set these values in the limitRange, any containers that don’t explicitly set these themselves will get assigned the default values.

The defaultRequest section sets up the default requests for a container in a pod. If you set these values in the limitRange, any containers that don’t explicitly set these themselves will get assigned the default values.

The max section sets up the maximum limits that a container in a Pod can set. The default section cannot be higher than this value. Likewise, limits set on a container cannot be higher than this value. It is important to note that if this value is set and the default section is not, any containers that don’t explicitly set these values themselves will get assigned the max values as the limit.

The min section sets up the minimum Requests that a container in a Pod can set. The defaultRequest section cannot be lower than this value. Likewise, requests set on a container cannot be lower than this value either. It is important to note that if this value is set and the defaultRequest section is not, the min value becomes the defaultRequest value too.

The lifecycle of a Kubernetes Pod

At the end of the day, these resources requests are used by the Kubernetes scheduler to run your workloads. It is important to understand how this works so you can tune your containers correctly.

Let’s say you want to run a Pod on your Cluster. Assuming the Pod specifications are valid, the Kubernetes scheduler will use round-robin load balancing to pick a Node to run your workload.

Note: The exception to this is if you use a nodeSelector or similar mechanism to force Kubernetes to schedule your Pod in a specific place. The resource checks still occur when you use a nodeSelector, but Kubernetes will only check nodes that have the required label.
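For example, a nodeSelector like this (the label key and value are placeholders) tells the scheduler to consider only nodes carrying that label; the usual resource checks still happen on those nodes:
kind: Pod
apiVersion: v1
metadata:
  name: gpu-job                # placeholder name
spec:
  nodeSelector:
    accelerator: gpu           # placeholder node label
  containers:
  - name: trainer
    image: images.example/trainer:v1
    resources:
      requests:
        cpu: "1"
        memory: 1Gi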

Kubernetes then checks to see if the Node has enough resources to fulfill the resources requests on the Pod’s containers. If it doesn’t, it moves on to the next node.

If none of the Nodes in the system have resources left to fill the requests, then Pods go into a “pending” state. If you use Kubernetes Engine features such as the Node Autoscaler, Kubernetes Engine can detect this state and automatically create more Nodes. If there is excess capacity, the autoscaler can also scale down and remove Nodes to save you money!

But what about limits? As you know, limits can be higher than the requests. What if you have a Node where the sum of all the container Limits is actually higher than the resources available on the machine?

At this point, Kubernetes goes into something called an “overcommitted state.” Here is where things get interesting. Because CPU can be compressed, Kubernetes will make sure your containers get the CPU they requested and will throttle the rest. Memory cannot be compressed, so Kubernetes needs to start making decisions on what containers to terminate if the Node runs out of memory.

Let’s imagine a scenario where we have a machine that is running out of memory. What will Kubernetes do?

Note: The following is true for Kubernetes 1.9 and above. In previous versions, it uses a slightly different process. See this doc for an in-depth explanation.

Kubernetes looks for Pods that are using more resources than they requested. If your Pod’s containers have no requests, then by default they are using more than they requested, so these are prime candidates for termination. Other prime candidates are containers that have gone over their request but are still under their limit.

If Kubernetes finds multiple pods that have gone over their requests, it will then rank these by the Pod’s priority, and terminate the lowest priority pods first. If all the Pods have the same priority, Kubernetes terminates the Pod that’s the most over its request.
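Pod priority comes from a PriorityClass that the Pod spec references; in this sketch the names and the priority value are illustrative, and the API version shown is the one used by newer clusters:
kind: PriorityClass
apiVersion: scheduling.k8s.io/v1   # older clusters exposed this under alpha/beta versions
metadata:
  name: high-priority              # placeholder name
value: 1000000                     # higher numbers mean higher priority
globalDefault: false
description: "For customer-facing workloads"
---
kind: Pod
apiVersion: v1
metadata:
  name: checkout                   # placeholder name
spec:
  priorityClassName: high-priority
  containers:
  - name: app
    image: images.example/checkout:v1
Pods that don’t reference a PriorityClass get the default priority of zero, unless a class is marked globalDefault.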

In very rare scenarios, Kubernetes might be forced to terminate Pods that are still within their requests. This can happen when critical system components, like the kubelet or docker, start taking more resources than were reserved for them.

Conclusion

While your Kubernetes cluster might work fine without setting resource requests and limits, you will start running into stability issues as your teams and projects grow. Adding requests and limits to your Pods and Namespaces only takes a little extra effort, and can save you from running into many headaches down the line!