Editor’s note: This is the fifth in a series of blog posts on container security at Google.
It’s only been a few months since we last spoke about securing Google Kubernetes Engine, but a lot has changed since then. Our security team has been working to further harden Kubernetes Engine, so that you can deploy sensitive containerized applications on the platform with confidence. Today we’ll walk through the latest best practices for hardening your Kubernetes Engine cluster, with updates for new features in Kubernetes Engine versions 1.9 and 1.10.
1. Follow the steps in the previous hardening guide
This new hardening guide assumes you’ve already completed the previous one. So go ahead and run through that guide real quick, and head on back over here.
2. Service Accounts and Access Scopes
Next, you’ll need to think about service accounts and access control. We strive to set up Kubernetes Engine with usable but protected defaults. In Kubernetes Engine 1.7, we disabled the Kubernetes Dashboard (the web UI) by default, because it uses a highly privileged service account; and in 1.8, we disabled Attribute-Based Access Control (ABAC) by default, since Role-Based Access Control (RBAC) provides more granular permission management. Now, in Kubernetes Engine 1.10, new clusters will no longer have the compute-rw scope on node service accounts enabled by default, which reduces the blast radius of a potential node compromise. If a node were exploited, an attacker would not be able to use the service account to create new compute resources or read node metadata directly, which could be a path for privilege escalation.
If you’ve created a Kubernetes Engine cluster recently, you may have seen the following warning:
This means that if you have a special requirement to use the node’s service account to access storage or manipulate compute resources, you’ll need to explicitly include the required scopes when creating new clusters:
```
gcloud container clusters create example-cluster \
  --scopes=compute-rw,gke-default
```
If you’re like most people and don’t use these scopes, your new clusters are automatically created with the gke-default scopes.
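If you’re not sure what an existing cluster was created with, you can inspect its node configuration. A quick sketch, assuming a cluster named example-cluster in zone us-central1-a (substitute your own names):

```
# Lists the OAuth scopes attached to the cluster's node service account.
# If compute-rw is absent, workloads cannot use the node's credentials
# to manipulate Compute Engine resources.
gcloud container clusters describe example-cluster \
  --zone us-central1-a \
  --format="value(nodeConfig.oauthScopes)"
```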
3. Create good RBAC roles
In the Kubernetes Engine 1.8 hardening blog post, we made sure node service accounts were running with the minimum required permissions, but what about the accounts used by DevOps teams, cluster administrators, or security teams? They all need different levels of access to clusters, which should be kept as restricted as possible.
While Cloud IAM provides great user access management at the Google Cloud Platform (GCP) Project level, RBAC roles control access within each Kubernetes cluster. They work in concert to help you enforce strong access control.
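For example, you might grant a teammate read-only visibility at the project level with the predefined Container Viewer role, and then narrow what they can see inside each cluster using RBAC. A sketch, where the member email is a placeholder:

```
# Project-level grant: read-only access to Kubernetes Engine resources.
# RBAC rules inside the cluster (as shown next) can restrict this further.
gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member="user:teammate@example.com" \
  --role="roles/container.viewer"
```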
A good RBAC role should give a user exactly the permissions they need, and no more. Here is how to create and grant a user permission to view pods only, for example:
```
PROJECT_ID=$(gcloud config get-value project)
PRIMARY_ACCOUNT=$(gcloud config get-value account)

# Specify your cluster name.
CLUSTER=cluster-1

# You may have to grant yourself permission to manage roles.
kubectl create clusterrolebinding cluster-admin-binding \
  --clusterrole cluster-admin --user $PRIMARY_ACCOUNT

# Create an IAM service account, "gke-pod-reader", which we will
# allow to read pods.
gcloud iam service-accounts create gke-pod-reader \
  --display-name "GKE Pod Reader"
USER_EMAIL=gke-pod-reader@$PROJECT_ID.iam.gserviceaccount.com

# ClusterRoles are cluster-scoped, so no namespace is needed.
cat > pod-reader-clusterrole.yaml <<EOF
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: pod-reader
rules:
- apiGroups: [""] # "" indicates the core API group
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
EOF
kubectl create -f pod-reader-clusterrole.yaml

cat > pod-reader-clusterrolebinding.yaml <<EOF
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: pod-reader-global
subjects:
- kind: User
  name: $USER_EMAIL
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
EOF
kubectl create -f pod-reader-clusterrolebinding.yaml

# Check the permissions of our Pod Reader user.
gcloud iam service-accounts keys create pod-reader-key.json \
  --iam-account $USER_EMAIL
gcloud container clusters get-credentials $CLUSTER
gcloud auth activate-service-account $USER_EMAIL \
  --key-file=pod-reader-key.json

# Our user can get/list all pods in the cluster.
kubectl get pods --all-namespaces

# But they can't see deployments, services, or nodes.
kubectl get deployments --all-namespaces
kubectl get services --all-namespaces
kubectl get nodes

# Reset gcloud and kubectl to your main user.
gcloud config set account $PRIMARY_ACCOUNT
gcloud container clusters get-credentials $CLUSTER
```
Check out the GCP documentation for more information about how to configure RBAC.
4. Consider custom IAM roles
For most people, the predefined IAM roles available on Kubernetes Engine work great. If they meet your organization's needs, then you’re good to go. If you need more fine-grained control, though, we also have the tools you need.
Custom IAM Roles let you define new roles, alongside the predefined ones, with the exact permissions your users require and no more.
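As an illustrative sketch, a custom role that grants only pod-read permissions might be defined in a YAML file and created with gcloud. The role ID, file name, and permission list below are examples only; consult the IAM documentation for the full permission catalog:

```
# Define a custom role with only pod-read permissions (illustrative).
cat > pod-viewer-role.yaml <<EOF
title: GKE Pod Viewer
description: Can view pods in Kubernetes Engine clusters
stage: GA
includedPermissions:
- container.pods.get
- container.pods.list
EOF

# Create the role in your project; it can then be granted to users or
# service accounts just like a predefined role.
gcloud iam roles create gkePodViewer \
  --project $PROJECT_ID \
  --file pod-viewer-role.yaml
```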
5. Explore the cutting edge
We’ve launched a few new features to beta that we recommend turning on, at least in a test environment, to prepare for their general availability.
In order to use these beta features, you’ll need to configure gcloud to use the v1beta1 API by running this command:
```
gcloud config set container/use_v1_api false
```
Conceal your host VM’s metadata server [Beta]
Starting with the release of Kubernetes 1.9.3, Kubernetes Engine can conceal the Compute Engine metadata server from your running workloads, to prevent your workload from impersonating the node. Many practical attacks against Kubernetes rely on access to the node’s metadata server to extract the node’s identity document and token.
Constraining access to the underlying service account, by using least privilege service accounts as we did in the previous guide, is a good idea; preventing workloads from impersonating the node is even better. Note that containers running in your pods will still be able to access the non-sensitive data from the metadata server.
Follow these instructions to enable Metadata Concealment.
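At the time of writing, metadata concealment is enabled per node pool with a beta gcloud flag. A sketch, where the node pool and cluster names are placeholders and the flag may change while the feature is in beta:

```
# Create a node pool whose workloads cannot reach sensitive node
# metadata, such as the kubelet's identity credentials.
gcloud beta container node-pools create concealed-pool \
  --cluster=example-cluster \
  --workload-metadata-from-node=SECURE
```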
Enable and define a Pod Security Policy [Beta]
Kubernetes offers many controls to restrict your workloads at the pod spec level to execute with only their minimum required capabilities. Pod Security Policy allows you to set smart defaults for your pods, and enforce controls you want to enable across your fleet. The policies you define should be specific to the needs of your application. If you’re not sure where to start, we recommend the restricted-psp.yaml in the kubernetes.io documentation for example policies. It’s pretty restrictive, but it’s a good place to start, and you can loosen the restrictions later as appropriate.
Follow these instructions to get started with Pod Security Policies.
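For illustration, a restrictive policy in the spirit of restricted-psp.yaml might look like the following. This is a sketch only; tailor the allowed volume types and user rules to your workloads:

```
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted-example
spec:
  privileged: false                # no privileged containers
  allowPrivilegeEscalation: false  # block setuid escalation
  requiredDropCapabilities:
  - ALL
  runAsUser:
    rule: MustRunAsNonRoot         # containers must not run as root
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  hostNetwork: false
  hostPID: false
  volumes:                         # only these volume types are allowed
  - configMap
  - secret
  - emptyDir
  - projected
  - downwardAPI
  - persistentVolumeClaim
```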
6. Where to look for practical advice
If you’ve been following our blog series so far, hopefully you’ve already learned a lot about container security. For Kubernetes Engine, we’ve put together a new Overview of Kubernetes Engine Security, now published in our documentation, to guide you as you think through your security model. This page can act as a starting point to get a brief overview of the various security features and configurations that you can use to help ensure your clusters are following best practices. From that page, you can find links to more detailed guidance for each of the features and recommendations.
We’re working hard on many more Kubernetes Engine security features. To stay in the know, keep an eye on this blog for more security posts, and have a look at the Kubernetes Engine hardening guide for prescriptive guidance on how to bolster the security of your clusters.