Posted by David Aronchick, Product Manager, Google Container Engine
Today, we’re bringing the latest Kubernetes 1.5 release to Google Cloud Platform (GCP) customers. In addition to the full slate of features available in Kubernetes, Google Container Engine brings a simplified user experience for cross-cloud federation, support for running stateful applications and automated maintenance of your clusters.
Highlights of this Container Engine release include:
Auto-upgrade and auto-repair for nodes simplify ongoing management of your clusters
Simplified cross-cloud federation with support for the new "kubefed" tool (see the sketch after this list)
Automated scaling for key cluster add-ons, ensuring improved uptime for critical cluster services
StatefulSets (originally called PetSets) in beta, enabling you to run stateful workloads on Container Engine
HIPAA compliance allowing you to run HIPAA regulated workloads in containers (after agreement to Google Cloud’s standard Business Associate Agreement).
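To give a flavor of the kubefed workflow, here is a minimal sketch based on the Kubernetes 1.5 federation documentation; the federation name, cluster context and DNS zone below are illustrative placeholders, not values from this release:

# Stand up a federation control plane on an existing host cluster
$ kubefed init myfed --host-cluster-context=<HOST_CLUSTER_CONTEXT> --dns-zone-name="example.com."
# Join another cluster (for example, one running in a different cloud)
$ kubefed join my-other-cluster --host-cluster-context=<HOST_CLUSTER_CONTEXT>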
The adoption of Kubernetes and the growth of its community have propelled it to become one of the fastest-growing and most active open source projects, and that growth is mirrored in the accelerating usage of Container Engine. By using a fully managed service, companies can focus on delivering value for their customers rather than on maintaining their infrastructure. Some recent customer highlights include:
GroupBy uses Container Engine to support continuous delivery of new commerce application capabilities for their customers, including retailers such as The Container Store, Urban Outfitters and CVS Health.
“Google Container Engine provides us with the openness, stability and scalability we need to manage and orchestrate our Docker containers. This year, our customers flourished during Black Friday and Cyber Monday with zero outages, downtime or interruptions in service thanks, in part, to Google Container Engine.” - Will Warren, Chief Technology Officer at GroupBy.
MightyTV ported their workloads to Container Engine to power their video recommendation engine, reducing their cost by 33% compared to running on traditional virtual machines. They were also able to drop a third-party monitoring and logging service and stop maintaining Kubernetes on their own.
If you’d like to help shape the future of Kubernetes — the core technology Container Engine is built on — join the open Kubernetes community and participate via the kubernetes-users mailing list or chat with us on the kubernetes-users Slack channel.
Finally, if you’d like to try Kubernetes or GCP, it’s easy to get started with one-click Kubernetes cluster creation with Container Engine. Sign up for a free trial here.
Today Red Hat is releasing the general availability of their OpenShift Dedicated service running on Google Cloud Platform (GCP). This combination helps speed the adoption of Kubernetes, containers and cloud-native application patterns.
We often hear from customers that they need open source tools that enable their applications across both their own data centers and multiple cloud providers. Our collaboration with Red Hat around Kubernetes and OpenShift is a great example of how we're committed to working with partners on open hybrid solutions.
OpenShift Dedicated on GCP offers a new option to enterprise IT organizations that want to use Red Hat container technology to deploy, manage and support their OpenShift instances. With OpenShift Dedicated, developers maintain control over the build and isolation process for their applications. Red Hat acts as the service provider, managing OpenShift Dedicated and offering support, helping customers focus more heavily on application development and business velocity. We'll also be working with Red Hat to make it easy for customers to augment their OpenShift applications with GCP’s broad and growing portfolio of services.
OpenShift and Kubernetes
As the second largest contributor to the project, Red Hat is a key collaborator helping to evolve and mature Kubernetes. Red Hat also uses Kubernetes as a foundation for Red Hat OpenShift Container Platform, which adds a service catalog, build automation, deployment automation and application lifecycle management to meet the needs of its enterprise customers.
OpenShift Dedicated is underpinned by Red Hat Enterprise Linux, and marries Red Hat’s enterprise-grade container application platform with Google’s 12+ years of operational expertise around containers (and the resulting optimization of our infrastructure for container-based workloads).
Enterprise developers who want to complement their on-premises infrastructure with cloud services and a global footprint, but who still want stable, more secure, open source solutions, should try out OpenShift Dedicated on Google Cloud Platform, either as a complement to an on-premises OpenShift deployment or as a standalone offering. You can sign up for the service here. We welcome your feedback on how to make the service even better.
Example application: analyzing a Tweet stream using OpenShift and Google BigQuery
We’re also working with Red Hat to make it easy for you to augment your OpenShift-based applications wherever they run. Below is an early example of using BigQuery, Google's managed data warehouse, and Google Cloud Pub/Sub, its real-time messaging service, with Red Hat OpenShift Dedicated. This can be the starting point to incorporate social insights into your own services.
Step 1: First, set up a service account. A service account is a way to interact with your GCP resources using an identity other than your primary login, and is generally intended for server-to-server interaction. From the GCP Navigation Menu, click on "Permissions."
Once there, click on "Service accounts."
Click on "Create service account," which will prompt you to enter a service account name. Name your project and click on "Furnish a new private key." Select the default "JSON" Key type.
Step 2: Once you click "Create," a service account credential file (".json") will be downloaded to your browser’s downloads location.
Important: Like any credential, this represents an access mechanism to authenticate and use resources in your GCP account — KEEP IT SAFE! Never place this file in a publicly accessible source repo (e.g., public GitHub).
Step 3: We’ll be using the JSON credential via a Kubernetes secret deployed to your OpenShift cluster. To do so, first perform a base64 encoding of your JSON credential file:
$ base64 -i ~/path/to/downloads/credentials.json
Keep the output (a very long string) ready for use in the next step, where you’ll replace ‘BASE64_CREDENTIAL_STRING’ in the secret example (below) with the output of the base64 encoding.
Important: Note that base64 is encoded (not encrypted) and can be readily reversed, so this file (with the base64 string) should be treated with the same high degree of care as the credential file mentioned above.
Step 4: Create the Kubernetes secret inside your OpenShift cluster. A secret is the proper place to make sensitive information (like passwords or the credentials downloaded in the previous step) available to pods running in your cluster. This is what your secret definition will look like (e.g., google-secret.yaml):
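Here is a minimal sketch of that file; the secret name and key used here are illustrative, so adjust them to fit your project:

apiVersion: v1
kind: Secret
metadata:
  name: google-services-secret
type: Opaque
data:
  # Must be the base64-encoded JSON credential from the previous step
  google-services.json: BASE64_CREDENTIAL_STRING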
You’ll want to add this file to your source-control system (minus the credentials).
Replace ‘BASE64_CREDENTIAL_STRING’ with the base64 output from the prior step.
Step 5: Deploy the secret to the cluster:
$ oc create -f google-secret.yaml
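To confirm the secret landed in the cluster, you can list secrets with a standard OpenShift command:

$ oc get secrets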
Step 6: Now you can use Google APIs from your OpenShift cluster. To take your GCP-enabled cluster for a spin, try going through the steps detailed in Real-Time Data Analysis with Kubernetes, Cloud Pub/Sub and BigQuery, a solutions document. You’ll need to make two minor tweaks for the solution to work on your OpenShift cluster:
For any pod that needs to access Google APIs, modify it to reference the secret, including exporting the "GOOGLE_APPLICATION_CREDENTIALS" environment variable to the pod (here’s more information on application default credentials); see the sketch after this list.
In the Pub/Sub-to-BigQuery solution, that means you’ll modify two pod definitions: pubsub/bigquery-controller.yaml and pubsub/twitter-stream.yaml
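As a hedged sketch, the modification inside one of those pod specs might look like the following; the volume name and mount path are illustrative, and the secret name matches the Step 4 sketch above:

spec:
  containers:
  - name: bigquery-controller
    # ... image and other fields unchanged ...
    env:
    - name: GOOGLE_APPLICATION_CREDENTIALS
      value: /etc/google/google-services.json
    volumeMounts:
    - name: google-services
      mountPath: /etc/google
      readOnly: true
  volumes:
  - name: google-services
    secret:
      secretName: google-services-secret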
Step 7: Finally, anywhere the solution instructs you to use "kubectl," replace that with the equivalent OpenShift command "oc."
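For example, using one of the pod definitions named above:

# Where the solution says:
$ kubectl create -f pubsub/bigquery-controller.yaml
# Run instead:
$ oc create -f pubsub/bigquery-controller.yaml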
That’s it! If you follow along with the rest of the steps in the solution, you’ll soon be able to query (and see) tweets showing up in your BigQuery table — arriving via Cloud Pub/Sub. Going forward with your own deployments, all you need to do is follow the above steps of attaching the credential secret to any pod where you use Google Cloud SDKs and/or access Google APIs.
One of our goals here on the Google Cloud Platform team is to support the broadest possible array of platforms and operating systems. That’s why we’re so excited about ASP.NET Core, the next generation of the open source ASP.NET web framework, built on .NET Core. With it, .NET developers can run their apps cross-platform on Windows, Mac and Linux.
One thing that ASP.NET Core does is allow .NET applications to run in Docker containers. All of a sudden, we’ve gone from Windows-only web apps to lean cross-platform web apps running in containers. This has been great to see!
ASP.NET Core supports running apps across a variety of operating system platforms
Containers can provide a stable runtime environment for apps, but they aren’t always easy to manage. You still need to worry about how to automate deployment of containers, how to scale up and down and how to upgrade or downgrade app versions reliably. In short, you need a container management platform that you can rely on in production.
That’s where the open-source Kubernetes platform comes in. Kubernetes provides high-level building blocks such as pods, labels, controllers and services that collectively make it easier to deploy and maintain containerized apps. Google Container Engine provides a hosted version of Kubernetes, which greatly simplifies creating and managing Kubernetes clusters.
My colleague Ivan Naranjo recently published a blog post that shows you how to take an ASP.NET Core app, containerize it with Docker and run it on Google App Engine. In this post, we’ll take a containerized ASP.NET Core app and manage it with Kubernetes and Google Container Engine. You'll be surprised how easy it is, especially considering that running an ASP.NET app on a non-Windows platform was unthinkable until recently.
Prerequisites
I am assuming a Windows development environment, but the instructions are similar on Mac or Linux.
.NET Core comes with the .NET Core Command Line Tools, which make it easy to create apps from the command line. Let’s create a HelloWorld folder and create a web app in it using the dotnet command:
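A sketch of those commands, assuming the 1.0-era tooling this post targets (newer SDKs use "dotnet new web" instead of the -t flag):

$ mkdir HelloWorld
$ cd HelloWorld
$ dotnet new -t web     # scaffold an ASP.NET Core web app
$ dotnet restore        # pull down the app's dependencies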
Next, let’s pack the application and all of its dependencies into a folder to get it ready to publish.
$ dotnet publish -c Release
Once the app is published, we can test the resulting dll using the following:
$ cd bin/Release/netcoreapp1.0/publish/
$ dotnet HelloWorld.dll
Containerize the ASP.NET Core app with Docker
Let’s now take our HelloWorld app and containerize it with Docker. Create a Dockerfile in the root of our app folder:
FROM microsoft/dotnet:1.0.1-core
COPY . /app
WORKDIR /app
EXPOSE 8080/tcp
ENV ASPNETCORE_URLS http://*:8080
ENTRYPOINT ["dotnet", "HelloWorld.dll"]
This is the recipe for the Docker image that we'll create shortly. In a nutshell, we're creating an image based on the microsoft/dotnet:1.0.1-core image, copying the current directory to the /app directory in the container, exposing port 8080, telling ASP.NET Core to listen on that port and setting the app as the container's entrypoint.
Now we’re ready to build our Docker image and tag it with our Google Cloud project ID:
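A sketch of the build command; replace <PROJECT_ID> with your own project ID (the image name and tag here match the deployment step below):

$ docker build -t gcr.io/<PROJECT_ID>/hello-dotnet:v1 .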
Now, let’s push our image to Google Container Registry using gcloud, so we can later refer to this image when we deploy and run our Kubernetes cluster. In the Google Cloud SDK Shell, type:
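A sketch of the push; on newer SDK versions the command is "gcloud docker -- push":

$ gcloud docker push gcr.io/<PROJECT_ID>/hello-dotnet:v1

Then create a Kubernetes cluster to run the image. The cluster name matches the output below, while the zone and node count are assumptions to adjust for your own project:

$ gcloud container clusters create hello-dotnet-cluster \
    --num-nodes 2 \
    --zone us-central1-b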
Creating the cluster will take a little while, but when it’s ready, you should see something like this:
Creating cluster hello-dotnet-cluster...done.
Deploy and run the app in Container Engine
At this point, we have our image hosted on Google Container Registry and we have our Kubernetes cluster ready in Google Container Engine. There’s only one thing left to do: run our image in our Kubernetes cluster. To do that, we can use the kubectl command line tool.
Create a deployment from our image in Kubernetes:
$ kubectl run hello-dotnet --image=gcr.io/<PROJECT_ID>/hello-dotnet:v1 \
    --port=8080
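The deployment alone isn’t reachable from outside the cluster, so here’s a sketch of exposing it behind a load balancer (the service name follows the deployment created above):

$ kubectl expose deployment hello-dotnet --type="LoadBalancer" --port=8080
$ kubectl get services    # wait for an EXTERNAL-IP to appear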
Finally, if you visit the external IP address on port 8080, you should see the default ASP.NET Core app managed by Kubernetes!
It’s fantastic to see the ASP.NET and Linux worlds are coming together. With Kubernetes, ASP.NET Core apps can benefit from automated deployments, scaling, reliable upgrades and much more. It’s a great time to be a .NET developer, for sure!