How Kubernetes takes container workload portability to the next level


Developers love application containers and the Docker and rkt package formats because of the package-once, run-anywhere experience that simplifies their jobs. But even the easiest-to-use technologies can spiral out of control and become victims of their own success. Google knows this all too well. With our own internal systems, we realized long ago that the most efficient way to share compute resources was containers, and that the only way to run containers at scale is with automation and orchestration. And so we developed cgroups, which we contributed to the Linux kernel, helping lay the groundwork for today’s container ecosystem, and what we affectionately call Borg, our cluster management system.

Fast forward to the recent rise of containers, and it occurred to us that developers at large could benefit from the service discovery, configuration, and orchestration that Borg provides to simplify building and running multi-node, container-based applications. Thus Kubernetes was born: an open-source descendant of Borg that anyone can use to manage their container environments.

Earlier this year, we transferred the Kubernetes IP to the Cloud Native Computing Foundation. Under the auspices of the CNCF, members such as IBM, Docker, CoreOS, Mesosphere, Red Hat and VMware work alongside Google to ensure that Kubernetes works not just in Google environments, but in whatever public or private cloud an organization may choose.

What does that mean for container-centric shops? Kubernetes builds on the workload portability that containers provide by helping organizations avoid getting locked into any one cloud provider. Today, you may be running on Google Container Engine, but there may come a time when you wish you could take advantage of IBM’s middleware. Or you may be a longtime AWS shop, but would love to use Google Cloud Platform’s advanced big data and machine learning services. Or you’re on Microsoft Azure today for its ability to run .NET applications, but would like to take advantage of existing in-house resources running OpenStack. By providing an application-centric API on top of compute resources, Kubernetes helps realize the promise of these multi-cloud scenarios.
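To make “application-centric” concrete, here’s a minimal sketch of a Kubernetes manifest for a hypothetical web app (the names and the nginx image are illustrative, not from any real deployment). Notice that nothing in it references a particular provider’s VMs, networks, or zones, which is exactly what lets the same description run on any conformant cluster:

```yaml
# deployment.yaml: a hypothetical app described in cloud-neutral terms
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-web               # illustrative name
spec:
  replicas: 3                   # Kubernetes keeps three pods running, wherever the cluster lives
  selector:
    matchLabels:
      app: hello-web
  template:
    metadata:
      labels:
        app: hello-web
    spec:
      containers:
      - name: web
        image: nginx:1.25       # any container image works here
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: hello-web
spec:
  type: LoadBalancer            # each provider maps this to its own load balancer
  selector:
    app: hello-web
  ports:
  - port: 80
    targetPort: 80
```

The `type: LoadBalancer` line is the telling detail: on Container Engine it provisions a Google Cloud load balancer, on AWS an Elastic Load Balancer, and so on, with no change to the manifest.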

If running across more than one cloud is in your future, choosing Kubernetes as the basis of your container orchestration strategy makes sense. Today, most of the major public cloud providers offer container orchestration and scheduling as a service. Our offering, Google Container Engine (GKE), is based on Kubernetes, and by placing Kubernetes in the hands of the CNCF, our goal is to ensure that your applications will run on any Kubernetes implementation, whether it’s one a cloud provider offers or one you run yourself.

Even today, it’s possible to run Kubernetes on any cloud environment of your choosing. Don’t believe us? Just look at CoreOS Tectonic, which runs on AWS, or Kubernetes for Microsoft Azure.
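As a sketch of what that looks like day to day (the context names here are hypothetical; kubectl records one per cluster you’ve configured credentials for), the same manifest from above can be applied to each cluster in turn:

```sh
kubectl config get-contexts              # list the clusters kubectl knows about
kubectl config use-context gke-prod      # point kubectl at a Container Engine cluster
kubectl apply -f deployment.yaml
kubectl config use-context aws-tectonic  # now point at a Tectonic cluster on AWS
kubectl apply -f deployment.yaml         # the same manifest, unchanged
```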

Stay tuned for a tutorial on setting up Kubernetes to run multi-cloud applications, or get started right away with a free trial.