Precious cargo: Securing containers with Kubernetes Engine 1.8



With every new release of Kubernetes and Google Kubernetes Engine, we add new security features, strengthen existing security controls and move to stronger default configurations. We strive to improve Kubernetes security in general, and to make Kubernetes Engine more secure by default so that you don’t have to apply these configurations yourself.

With the speed of development in Kubernetes, there are often new features and security configurations for you to know about. This post will guide you through implementing our current guidance for hardening your Kubernetes Engine cluster. If you’re feeling adventurous, we’ll also discuss new security features that you can test on alpha clusters (which are not recommended for production use).

Security best practices for your Kubernetes cluster

When running a Kubernetes cluster, there are several best practices we recommend you follow:
  •  Use least privilege service accounts on your nodes
  •  Disable the Kubernetes Web UI
  •  Disable legacy authorization (now disabled by default for new clusters in Kubernetes 1.8)

Before you can do any of that, you’ll need to set a few environment variables that the commands below rely on:
#Your project ID
PROJECT_ID=
#Your Zone. E.g. us-west1-c
ZONE=
#New service account we will create. Can be any string that isn't an existing service account. E.g. min-priv-sa
SA_NAME=
#Name for your cluster we will create or modify. E.g. example-secure-cluster
CLUSTER_NAME=
#Name for a node-pool we will create. Can be any string that isn't an existing node-pool. E.g. example-node-pool
NODE_POOL=

Use least privilege service accounts on your nodes


The principle of least privilege helps to reduce the "blast radius" of a potential compromise, by granting each component only the minimum permissions required to perform its function. Should one component become compromised, least privilege makes it much more difficult to chain attacks together and escalate permissions.

Each Kubernetes Engine node has a Service Account associated with it. You’ll see the Service Account user listed in the IAM section of the Cloud Console as “Compute Engine default service account.” This account has broad access by default, making it useful to a wide variety of applications, but it has more permissions than you need to run your Kubernetes Engine cluster.

We recommend you create and use a minimally privileged service account to run your Kubernetes Engine Cluster instead of the Compute Engine default service account.

Kubernetes Engine requires, at a minimum, that the service account have the monitoring.viewer, monitoring.metricWriter, and logging.logWriter roles.

The following commands will create a GCP service account for you with the minimum permissions required to operate Kubernetes Engine:

gcloud iam service-accounts create "${SA_NAME}" \
  --display-name="${SA_NAME}"

gcloud projects add-iam-policy-binding "${PROJECT_ID}" \
  --member "serviceAccount:${SA_NAME}@${PROJECT_ID}.iam.gserviceaccount.com" \
  --role roles/logging.logWriter

gcloud projects add-iam-policy-binding "${PROJECT_ID}" \
  --member "serviceAccount:${SA_NAME}@${PROJECT_ID}.iam.gserviceaccount.com" \
  --role roles/monitoring.metricWriter

gcloud projects add-iam-policy-binding "${PROJECT_ID}" \
  --member "serviceAccount:${SA_NAME}@${PROJECT_ID}.iam.gserviceaccount.com" \
  --role roles/monitoring.viewer

# If your cluster already exists, you can create a new node pool that uses this new service account.
gcloud container node-pools create "${NODE_POOL}" \
  --service-account="${SA_NAME}@${PROJECT_ID}.iam.gserviceaccount.com" \
  --cluster="${CLUSTER_NAME}"

If you need your Kubernetes Engine cluster to have access to other Google Cloud services, we recommend that you create an additional, purpose-specific service account, grant it only the roles your workload needs, and provision its credentials to workloads via Kubernetes Secrets, rather than reuse the node’s service account.
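
As a hypothetical sketch of that pattern (the names app-sa and app-sa-key are placeholders, not anything created above), you might mint a key for the additional service account and store it in a Kubernetes Secret that your workload mounts, pointing GOOGLE_APPLICATION_CREDENTIALS at the mounted file:

# Create a key for the separate, purpose-specific service account.
gcloud iam service-accounts keys create app-sa-key.json \
  --iam-account="app-sa@${PROJECT_ID}.iam.gserviceaccount.com"

# Store the key in a Secret that the workload can mount as a volume.
kubectl create secret generic app-sa-key \
  --from-file=key.json=app-sa-key.json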

Note: We’re currently designing a system that will make obtaining GCP credentials in your Kubernetes cluster much easier and will completely replace this workflow. Join the Kubernetes Container Identity Working Group to participate.

Disable the Kubernetes Web UI

We recommend you disable the Kubernetes Web UI when running on Kubernetes Engine. The Kubernetes Web UI (aka KubernetesDashboard) is backed by a highly privileged Kubernetes Service Account. The Cloud Console provides much of the same functionality, so you don't need these permissions if you're running on Kubernetes Engine.

The following command disables the Kubernetes Web UI:
gcloud container clusters update "${CLUSTER_NAME}" \
  --update-addons=KubernetesDashboard=DISABLED
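
To confirm the dashboard is gone (it can take a few minutes for the change to roll out), check that its Deployment, typically named kubernetes-dashboard, no longer appears in the kube-system namespace:

# kubernetes-dashboard should no longer be listed once the add-on is disabled.
kubectl get deployments --namespace=kube-system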

Disable legacy authorization

Starting with Kubernetes 1.8, Attribute-Based Access Control (ABAC) is disabled by default in Kubernetes Engine. Role-Based Access Control (RBAC) was released as beta in Kubernetes 1.6, and ABAC was kept enabled until 1.8 to give users time to migrate. RBAC has significant security advantages and is now stable, so it’s time to disable ABAC. If you're still relying on ABAC, review the Prerequisites for using RBAC before continuing. If you upgraded your cluster from an older version and are using ABAC, you should update your access controls configuration:
gcloud container clusters update "${CLUSTER_NAME}" \
  --no-enable-legacy-authorization
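
With legacy authorization disabled, access is governed entirely by RBAC. As a minimal, hypothetical sketch (the role name, user, and namespace are placeholders), you could grant a user read-only access to Pods in a single namespace rather than relying on broad ABAC policies:

# A namespaced Role that can only read Pods...
kubectl create role pod-reader \
  --verb=get --verb=list --verb=watch \
  --resource=pods \
  --namespace=default

# ...bound to a single user in that namespace.
kubectl create rolebinding pod-reader-binding \
  --role=pod-reader \
  --user=jane@example.com \
  --namespace=default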

To create a new cluster with all of the above recommendations, run:
gcloud container clusters create "${CLUSTER_NAME}" \
  --service-account="${SA_NAME}@${PROJECT_ID}.iam.gserviceaccount.com" \
  --no-enable-legacy-authorization \
  --disable-addons=KubernetesDashboard


Create a cluster network policy


In addition to the aforementioned best practices, we recommend you create network policies to control the communication between your cluster's Pods and Services. Kubernetes Engine's Network Policy enforcement, currently in beta, makes it much more difficult for attackers to propagate inside your cluster. You can also use the Kubernetes Network Policy API to create Pod-level firewall rules in Kubernetes Engine. These firewall rules determine which Pods and Services can access one another inside your cluster.

To enable network policy enforcement when creating a new cluster, specify the --enable-network-policy flag using gcloud beta:

gcloud beta container clusters create "${CLUSTER_NAME}" \
  --project="${PROJECT_ID}" \
  --zone="${ZONE}" \
  --enable-network-policy
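
If your cluster already exists, enforcement can be enabled in place. At the time of writing, the documented flow was roughly a two-step update (check the Kubernetes Engine documentation for the current procedure); note that it triggers a rolling recreation of your nodes:

# First provision the NetworkPolicy add-on, then enable enforcement on the nodes.
gcloud beta container clusters update "${CLUSTER_NAME}" \
  --zone="${ZONE}" \
  --update-addons=NetworkPolicy=ENABLED

gcloud beta container clusters update "${CLUSTER_NAME}" \
  --zone="${ZONE}" \
  --enable-network-policy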

Once Network Policy has been enabled, you still have to define the policies themselves. Because these are specific to your cluster’s topology, we can’t provide a detailed one-size-fits-all walkthrough. The Kubernetes documentation, however, has an excellent overview and walkthrough for a simple nginx deployment.
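
To make that concrete, here’s a minimal sketch of a common starting point: a default-deny ingress policy for the default namespace (the policy name and namespace are just examples), which blocks all inbound traffic to Pods in that namespace until you add more specific allow rules:

kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: default
spec:
  # An empty podSelector matches every Pod in the namespace.
  podSelector: {}
  policyTypes:
  - Ingress
EOF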

Note: Alpha and beta features such as Kubernetes Engine’s Network Policy enforcement represent meaningful security improvements, but they are not covered by any SLA or deprecation policy and may be subject to breaking changes in future releases. We don’t recommend using them for production clusters.

Closing thoughts


Many of the same lessons we learned from traditional information security apply to containers, Kubernetes, and Kubernetes Engine; we just have new ways to apply them. Adhere to least privilege, minimize your attack surface by disabling legacy or unnecessary functionality, and follow the most traditional advice of all: write good firewall policies. To learn more, visit the Kubernetes Engine webpage and documentation. If you’re just getting started with containers and Google Cloud Platform (GCP), be sure to sign up for a free trial.