Category Archives: Google Cloud Platform Blog

Product updates, customer stories, and tips and tricks on Google Cloud Platform

Managing containerized ASP.NET Core apps with Kubernetes



One of our goals here on the Google Cloud Platform team is to support the broadest possible array of platforms and operating systems. That's why we're so excited about ASP.NET Core, the next generation of the open source ASP.NET web framework, built on .NET Core. With it, .NET developers can run their apps cross-platform on Windows, Mac and Linux.

One thing that ASP.NET Core does is allow .NET applications to run in Docker containers. All of a sudden, we’ve gone from Windows-only web apps to lean cross-platform web apps running in containers. This has been great to see!
ASP.NET Core supports running apps across a variety of operating system platforms
Containers can provide a stable runtime environment for apps, but they aren’t always easy to manage. You still need to worry about how to automate deployment of containers, how to scale up and down and how to upgrade or downgrade app versions reliably. In short, you need a container management platform that you can rely on in production.

That's where the open-source Kubernetes platform comes in. Kubernetes provides high-level building blocks such as pods, labels, controllers and services that collectively simplify the maintenance of containerized apps. Google Container Engine provides a hosted version of Kubernetes, which greatly simplifies creating and managing Kubernetes clusters.

My colleague Ivan Naranjo recently published a blog post that shows you how to take an ASP.NET Core app, containerize it with Docker and run it on Google App Engine. In this post, we'll take a containerized ASP.NET Core app and manage it with Kubernetes and Google Container Engine. You'll be surprised how easy it is, especially considering that running an ASP.NET app on a non-Windows platform was unthinkable until recently.

Prerequisites

I am assuming a Windows development environment, but the instructions are similar on Mac or Linux.

First, we need to install .NET Core, install Docker and install the Google Cloud SDK for Windows. Then, we need to create a Google Cloud Platform project. We'll use this project later on to host our Kubernetes cluster on Container Engine.

Create a HelloWorld ASP.NET Core app

.NET Core comes with the .NET Core Command Line Tools, which make it easy to create apps from the command line. Let's create a HelloWorld folder and create a web app using the dotnet command:

$ mkdir HelloWorld
$ cd HelloWorld
$ dotnet new -t web

Restore the dependencies and run the app locally:

$ dotnet restore
$ dotnet run

You can then visit http://localhost:5000 to see the default ASP.NET Core page.

Get the app ready for publishing

Next, let’s pack the application and all of its dependencies into a folder to get it ready to publish.

$ dotnet publish -c Release

Once the app is published, we can test the resulting dll using the following:

$ cd bin/Release/netcoreapp1.0/publish/
$ dotnet HelloWorld.dll

Containerize the ASP.NET Core app with Docker

Let's now take our HelloWorld app and containerize it with Docker. Create a Dockerfile in the publish folder (bin/Release/netcoreapp1.0/publish), since that folder contains everything the app needs to run:


FROM microsoft/dotnet:1.0.1-core
COPY . /app
WORKDIR /app
EXPOSE 8080/tcp
ENV ASPNETCORE_URLS http://*:8080
ENTRYPOINT ["dotnet", "HelloWorld.dll"]


This is the recipe for the Docker image that we'll create shortly. In a nutshell, we're creating an image based on the microsoft/dotnet:1.0.1-core image, copying the published app into the /app directory in the container, making sure port 8080 is exposed and that ASP.NET Core listens on that port, and setting the entry point to run the app's DLL.

Now we’re ready to build our Docker image and tag it with our Google Cloud project id:

$ docker build -t gcr.io/<PROJECT_ID>/hello-dotnet:v1 .

To make sure that our image is good, let’s run it locally in Docker:


$ docker run -d -p 8080:8080 -t gcr.io/<PROJECT_ID>/hello-dotnet:v1


Now when you visit http://localhost:8080, you'll see the same default ASP.NET Core page, this time running inside a Docker container.
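
Since we started the container in detached mode with -d, it keeps running in the background after you close the browser. If you want to clean it up once you're done testing, a quick way (the container ID below is a placeholder printed by docker ps) is:

$ docker ps
$ docker stop <CONTAINER_ID>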

Create a Kubernetes cluster in Container Engine

We're ready to create our Kubernetes cluster. First, let's push our image to Google Container Registry using gcloud, so we can refer to it when we deploy to the cluster. In the Google Cloud SDK Shell, type:

$ gcloud docker push gcr.io/<PROJECT_ID>/hello-dotnet:v1

Next, create a Kubernetes cluster with two nodes in Container Engine:

$ gcloud container clusters create hello-dotnet-cluster --num-nodes 2 --machine-type n1-standard-1

This will take a little while, but when the cluster is ready, you should see something like this:

Creating cluster hello-dotnet-cluster...done.

To manage the cluster, we'll use the kubectl command line tool. Install it from the Google Cloud SDK Shell:

$ gcloud components install kubectl

Then configure kubectl command line access to the new cluster:

$ gcloud container clusters get-credentials hello-dotnet-cluster \
   --zone europe-west1-b --project <PROJECT_ID>

Deploy and run the app in Container Engine

At this point, we have our image hosted on Google Container Registry and we have our Kubernetes cluster ready in Google Container Engine. There’s only one thing left to do: run our image in our Kubernetes cluster. To do that, we can use the kubectl command line tool.

Create a deployment from our image in Kubernetes:


$ kubectl run hello-dotnet --image=gcr.io/<PROJECT_ID>/hello-dotnet:v1 \
 --port=8080
deployment "hello-dotnet" created
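
Under the hood, kubectl run creates a Kubernetes Deployment object for us. If you prefer declarative configuration, a roughly equivalent manifest looks like the following (a minimal sketch: the label and container name simply mirror what kubectl run generates for this image and port), and can be applied with kubectl apply -f hello-dotnet.yaml:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hello-dotnet
spec:
  replicas: 1
  template:
    metadata:
      labels:
        run: hello-dotnet
    spec:
      containers:
      - name: hello-dotnet
        image: gcr.io/<PROJECT_ID>/hello-dotnet:v1
        ports:
        - containerPort: 8080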

Make sure the deployment and pod are running:


$ kubectl get deployments
NAME           DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
hello-dotnet   1         1         1            0           28s

$ kubectl get pods
NAME                            READY     STATUS    RESTARTS   AGE
hello-dotnet-3797665162-gu99e   1/1       Running   0          1m

And expose our deployment to the outside world:



$ kubectl expose deployment hello-dotnet --type="LoadBalancer"
service "hello-dotnet" exposed

Once the service is ready, we can see the external IP address:


$ kubectl get services
NAME           CLUSTER-IP     EXTERNAL-IP      PORT(S)    AGE
hello-dotnet   XX.X.XXX.XXX   XXX.XXX.XX.XXX   8080/TCP   1m

Finally, if you visit the external IP address on port 8080, you should see the default ASP.NET Core app managed by Kubernetes!
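
From here, day-to-day operations are just more kubectl commands. For example, assuming you've built and pushed a new version of the image tagged v2 (that tag is hypothetical here), you could scale out and roll out the update like this:

$ kubectl scale deployment hello-dotnet --replicas=3
$ kubectl set image deployment/hello-dotnet hello-dotnet=gcr.io/<PROJECT_ID>/hello-dotnet:v2
$ kubectl rollout status deployment/hello-dotnet

Kubernetes replaces pods incrementally during the rollout, so the service stays available while the new version is deployed.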

It’s fantastic to see the ASP.NET and Linux worlds are coming together. With Kubernetes, ASP.NET Core apps can benefit from automated deployments, scaling, reliable upgrades and much more. It’s a great time to be a .NET developer, for sure!

New undersea cable expands capacity for Google APAC customers and users



Google’s mission is to connect people to the world’s information by providing fast and reliable infrastructure. From data centers to cables under the sea, we’re dedicated to building infrastructure that reaches more people than ever before.

Today, we announced that we will work with Facebook, Pacific Light Data Communication and TE Subcom to build the first ultra-high-capacity direct submarine cable system between Los Angeles and Hong Kong.

The Pacific Light Cable Network (PLCN) will have 12,800 km of fiber and an estimated cable capacity of 120 Tbps, making it the highest-capacity trans-Pacific route, a record currently held by another Google-backed cable system, FASTER. In other words, PLCN will provide enough capacity for Hong Kong to have 80 million concurrent HD video conference calls with Los Angeles, an example of Google Cloud Platform having the largest network backbone of any public cloud provider.

This is the sixth submarine cable in which Google has an ownership stake, joining the ranks of the Unity, SJC, FASTER, MONET and Tannat projects. We anticipate that PLCN will be operational in 2018.
From the get-go, PLCN is designed to accommodate evolving infrastructure technology, allowing us to independently choose network equipment and refresh optical technology as it advances. Most importantly, PLCN will bring lower latency, more security and greater bandwidth to Google users in the APAC region. In addition to our existing investments in APAC cloud regions and the FASTER cable system, PLCN expands our ability to serve people in Asia, including Google Cloud and G Suite customers.

Nei Hou, Hong Kong! We can’t wait to link up with you!

Transform your business; become a Google Certified Professional Data Engineer



Increasingly, data drives decisions and there’s a huge demand for technical professionals who can support decision makers by helping them to gain new insights from their data, in the right context and at the right time.

Today, IT professionals program data pipelines, tune databases, create reports and carry out statistical analysis on the current generation of IT technologies. Google Cloud Platform (GCP), especially fully managed services like Google BigQuery, Google Cloud Dataflow and Google Cloud Machine Learning, offers a full set of tools for ingesting, querying, transforming and deriving insight from data.

Last week, we announced our Google Cloud Certification Program, which will connect people with these technical skills to the companies looking to transform their businesses with data. By becoming a Google Certified Professional - Data Engineer, you signal to companies that you're skilled in our data technologies, and can help them tackle their challenging data problems.

Google Certified Professional - Data Engineer

We envision the Data Engineer as a key role in organizations of the future, helping to modernize the way they use data and infrastructure together to enable decision-making and business transformation. Data Engineers will build, maintain and troubleshoot data processing systems that are secure, reliable and scalable, and this certification establishes a trusted standard of proficiency for the role.

The Google Certified Professional - Data Engineer can:
  • shape business outcomes by analyzing data
  • build statistical models that support smart decision-making
  • create machine learning models that automate and simplify business processes
  • design, build, maintain and troubleshoot data processing systems that are secure, reliable, fault-tolerant, scalable and efficient

How do I become a Google Certified Professional - Data Engineer?

First off, real-world, hands-on experience is the best preparation, so roll up your sleeves and start working with the GCP technologies. We also offer a number of educational resources and courses to help you on your journey.

To help you know where you may still need experience, we’ve developed the Google Data Engineer Certification Exam Guide. This lists all of the skills that we expect a Data Engineer to have.

Are you ready to show what you know? Sign up here and we’ll let you know when the beta exam launches for the Google Data Engineer Certification.

At Google Cloud, we're seeing incredible growth as more businesses move to the cloud, coupled with increasing demand for people who fit the Data Engineer profile. Our customers are looking to solve existing and new problems with data using some of the same technologies, tools and techniques that Google has used in solving data problems. A qualified Data Engineer will make all the difference.

Introducing Google Customer Reliability Engineering



In the 25 years that I’ve been in technology nearly everything has changed. Computers have moved out of the labs and into our pockets. They’re connected together 24/7 and the things we can do with them are starting to rival our most optimistic science fiction.

Almost nothing looks the same as it did back then, except customer support. Support is (basically) still people in call centers wearing headsets. In this new world, that old model just isn't enough.

We want to change that.

Last week, we announced a brand new profession at Google: Customer Reliability Engineering, or CRE. The mission of this new role is to create shared operational fate between Google and our Google Cloud Platform customers, to give you more control over the critical applications you're entrusting to us and to share a fate greater than money.

Reducing customer anxiety

When you look out at organizations adopting cloud, you can’t help but notice high levels of anxiety.

It took me a while to figure out a reasonable explanation, but here’s where I finally landed:

Humans are evolutionarily disposed to want to control our environment, which is a really valuable survival attribute. As a result, we don’t react well when we feel like we’re losing that control. The higher the stakes, the more forcefully we react to the perceived loss.

Now think about the basic public cloud business model. It boils down to:

Give up control of your physical infrastructure and (to some extent) your data. In exchange for that uncertainty, the cloud will give you greater innovation, lower costs, better security and more stability.

It’s a completely rational exchange, but it also pushes against one of our strongest evolutionary impulses. No wonder people are anxious.

The last several years have taught me that many customers will not eat their anxieties in exchange for lower prices, at least not for long. This is especially true in cloud because of the stakes involved for most companies. There have already been a small number of high-profile companies going back on-prem because the industry hasn't done enough to recognize this reality.

Cloud providers ignore this risk at their own peril and addressing this anxiety will be a central requirement to unlock the overwhelming majority of businesses not yet in the cloud.

The support mission


The support function in organizations used to be pretty straightforward: answer questions and fix problems quickly and efficiently. Over time, much of the entire IT support function has been boiled down to FAQs, help centers, checklists and procedures.

In the era of cloud technology, however, this is completely wrong.

Anxious customers need empathy, compassion and humanity. You need to know that you're not alone and that we take you seriously. You are, after all, betting your businesses on our platforms and tools.

There's only one true and proper mission of a support function in this day and age:

              Drive Customer Anxiety -> 0

People who aren’t feeling anxious don’t spend the time and effort to think seriously about leaving a platform that’s working for them. The decision to churn starts with an unresolved anxiety.

Anxiety = 1 / Reliability


It seems obvious to say that the biggest driver of customer anxiety is reliability.

Here’s the non-obvious part, though.

As a cloud customer, you don't really care about the reliability of your cloud provider; you care about the reliability of your production application. You only indirectly care about the reliability of the cloud in which it runs.

The reliability of an application is the product of two things:
  1. The reliability of the cloud provider
  2. The reliability inherent in the design, code and operations of your application

Item (1) is a pretty well understood problem in the industry. There are thousands of engineers employed at the major cloud vendors that focus exclusively on it.

Here at Google we pioneered a whole profession around it: Site Reliability Engineering (SRE).


We even wrote a book!

What about item (2)? Who’s worried about the reliability inherent in the design, implementation and operation of your production application?

So far, just you.

The standard answer in the industry is:
Here are some white papers, best practices and consultants. Don’t do silly things and your app will be mostly fine.

As an industry, we’re asking you to bet your livelihoods on our platforms, to let us be your business partner and to give up big chunks of control. And in exchange for that we’re giving you . . . whitepapers.

No wonder you’re anxious. You should be!

No matter how much innovation, speed or scale your cloud provider gives you, this arrangement will always feel unbalanced, especially at 3am when something goes wrong.

Perhaps you think I’m overstating the case?

Just a few months ago, Dropbox announced that it was leaving its public cloud provider to go back on-prem. They've spoken at length about their decision-making process around this choice and have expressed a strong desire to more fully “control their own destiny.” The cumulative weight of their loss of control just got to be too much. So they left.

SRE 101

The idea behind Google CRE comes from the decade-long journey of Google SRE. I realize you might not be familiar with the history of SRE, so let me spend a couple paragraphs to catch you up . . .
. . . there were two warring kingdoms: developers and operations.

The developers were interested in building and shipping interesting and useful features to users. The faster the innovation, the better. In the developer tribe’s perfect world there would never be a break in the development and deployment of new and awesome products.

The operations kingdom, on the other hand, was concerned with the reliability of the systems being shipped, because they were the ones getting paged at 3am when something went down. Once the system became stable they’d rather never ship anything new again since 100% of new bugs come from new code.

For decades these kingdoms warred and much blood was spilled. (OK. Not actual blood, but the emails could get pretty testy . . . )

Then, one day this guy had an idea.
Benjamin Treynor-Sloss, VP, 24x7, Father of SRE

He realized that the underlying assumptions of this age-old conflict were wrong and recast the problem into an entirely new notion: the error budget.

No system you’re likely to build (except maybe a pacemaker) needs to be available 100% of the time. Users have lots of interruptions they never notice because they’re too busy living their lives.

It therefore follows that for nearly all systems there's a very small (but nonzero) acceptable quantity of unavailability. That downtime can be thought of as a budget. As long as a system is down less than its budget it is considered healthy.

For example, let’s say you need a system to be available 99.9% of the time (three nines). That means it’s OK for the system to be unavailable 0.1% of the time (for any given 30-day month, that’s 43 minutes).
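
To spell out the arithmetic behind that number:

Error budget = (1 - availability target) x period
             = (1 - 0.999) x (30 days x 24 hours x 60 minutes)
             = 0.001 x 43,200 minutes
             ≈ 43 minutes per 30-day month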

As long as you don’t do anything that causes the system to be down more than 43 minutes you can develop and deploy to your heart’s content. Once you blow your budget, however, you need to spend 100% of your engineering time writing code that fixes the problem and generally makes your system more stable. The more stable you make things, the less likely you are to blow your error budget next month and the more new features you can build and deploy.

In short, the error budget aligns the interests of the developer and operations tribes and creates a virtuous circle.

From this, a new profession was born: Site Reliability Engineering (SRE).

At Google, there's a basic agreement between SREs and developers.

The SREs will accept the responsibility for the uptime and healthy operation of a system if:
  1. The system (as developed) can pass a strict inspection process — known as a Production Readiness Review (PRR)
  2. The development team who built the system agrees to maintain critical support systems (like monitoring) and be active participants in key events like periodic reviews and postmortems
  3. The system does not routinely blow its error budget

If the developers don’t maintain their responsibilities in the relationship then the SREs “offboard” the system. (And hand back the pagers!)

This basic relationship has helped create a culture of cooperation that has led to both incredible reliability and super fast innovation.

The Customer Reliability Engineering mission

At Google, we’ve decided we need a similar approach with our customers.

CRE is what you get when you take the principles and lessons of SRE and apply them towards customers.

The CRE team deeply inspects the key elements of a customer's critical production application: code, design, implementation and operational procedures. We take what we find and put the application (and associated teams) through a strict PRR.

At the end of that process we'll tell you: “here are the reliability gaps in your system. Here is your error budget. If you want more nines here are the changes you should make.”

We'll also build common system monitoring so that we can have mutually agreed upon telemetry for paging and tickets.

It’ll be a lot of hard work on your part to get past our PRR, but in exchange for the effort you can expect the following:
  1. Shared paging. When your pagers go off, so will ours.
  2. Auto-creation and escalation of Priority 1 tickets
  3. CRE participation in customer war rooms (because despite everyone’s best efforts, bad things will inevitably happen)
  4. A Google-reviewed design and production system

Additional Cost: $0

Wait . . . that’s a lot of value. Why aren’t we charging money for it?

The most important lever SREs have in Google is the ability to hand back the pagers. It’s the same thing with CREs. When a customer fails to keep up their end of the work with timely bug fixes, participation in joint postmortems, good operational hygiene etc., we'll “hand back the pagers” too.

Please note, however, that $0 is not the same as “free.” Achieving Google-class operational rigor requires a sustained commitment on your part. It takes time and effort. We’ll be there on the journey, but you still need to walk the path. If you want some idea of what you’re signing up to, get a copy of the Site Reliability Engineering book and ask yourself how willing you are to do the things it outlines.

It’s fashionable for companies to tell their customers that “we’re in this together,” but they don’t usually act the part.

People who are truly “in it together” are accountable to one another and have mutual responsibilities. They work together as a team for a common goal and share a fate greater than the dollars that pass between them.

This program won't be for everyone. In fact, we expect that the overwhelming majority of customers won't participate because of the effort involved. We think big enterprises betting multi-billion-dollar businesses on the cloud, however, would be foolish to pass this up. Think of it as a de-risking exercise with a price tag any CFO will love.

Lowering the anxiety with a new social contract

Over the last few weeks we’ve been quietly talking to customers to gauge their interest in the CRE profession and our plans for it. Every time we do, there’s a visible sigh, a relaxing of the shoulders and the unmistakable expression of relief on people's faces.

Just the idea that Google would invest in this way is lowering our customers’ anxiety.

This isn’t altruism, of course. It’s just good business. These principles and practices are a strong incentive for a customer to stay with Google. It’s an affinity built on human relations instead of technical lock-in.

By driving inherent reliability into your critical applications we also increase the practical reliability of our platform. That, in turn, lets us innovate faster (a thing we really like to do).

If you’re a cloud customer, this is the new social contract we think you deserve.

If you’re a service provider looking to expand and innovate your cloud practice, we’d like to work with you to bring these practices to scale.

If you’re another cloud provider, we hope you’ll join us in growing this new profession. It’s what all our customers truly need.

How to authenticate users on Google App Engine using Firebase



Google App Engine offers a variety of user sign-in options, but what if you need a full stack solution for authentication, including verifying access tokens on the server? Meet Firebase Authentication, a complete sign-in experience that provides a drop-in UI, powerful SDKs, and, yes, backend services for token verification.

Firebase, Google's backend-as-a-service offering for creating mobile and web applications, has Node.js and Java server SDKs for integrating your own servers with Firebase Authentication. The Firebase Java server SDK offers a built-in method for verifying and decoding tokens that you can use for authentication in Java App Engine apps, but no such SDK currently exists for the rest of the backend languages in App Engine.

Because using third-party JSON Web Token (JWT) libraries to manage authentication for other languages can be difficult, we just published a solution for Firebase Authentication on App Engine with Python, the first in a series for App Engine languages.

The tutorial walks you through a simple note-taking application called Firenotes that stores users’ notes in their own personal notebooks, which are identified by a unique user ID generated by Firebase. The application stores credentials in Google Cloud Datastore, but you can store them in any database, and even integrate your existing credentialing system with the user-ID-based method that Firebase uses.
As the diagram above demonstrates, Firebase both mints the access tokens and provides public key certificates to verify them, so you need only to implement the verification code, which we have written for you.

We hope this solution helps you handle authentication quickly, so you can get back to writing the meat of the next great app!

How to use Docker to run ASP.NET Core apps on Google App Engine



Ever wish you could run an ASP.NET Core app on Google App Engine? Now you can, by packaging it in a Docker container.

Google App Engine is a platform for building scalable web applications and mobile backends, and provides the built-in services and APIs common to most applications. Up until recently this infrastructure was only accessible from a handful of languages (Java, Python, Go and PHP), but that changed with the introduction of App Engine Flexible Environment, previously known as Managed VMs. App Engine Flexible allows you to use a Docker container of your choice as the backend for your app. And since it's possible to wrap an ASP.NET Core app in a Docker image, this allows us to run ASP.NET Core apps on App Engine Flexible.

Step 1: Run your ASP.NET Core app locally

There have been NuGet packages for Google Cloud APIs in .NET for a long time, but starting with version 1.15.0.560, these NuGet packages have started targeting the .NET Core runtime. This allows you to write ASP.NET Core apps that use the Google Cloud APIs, to take advantage of services like Google Cloud Storage, Google Cloud Pub/Sub, or perhaps the newer machine learning APIs.

To show you how to deploy an ASP.NET Core app to App Engine Flexible, we’re going to deploy a very simple app (see the documentation for information about how to build an ASP.NET app that uses the GCP APIs). Of course to use ASP.NET Core, you'll first need to install the .NET Core runtime and Visual Studio tooling, as well as Bower, which our ASP.NET Core app project uses to set up client-side dependencies.

Let’s start by creating a new ASP.NET Core app from Visual Studio. Open up Visual Studio and select “File > New Project…”. In the dialog, select the “Web” category and the “ASP.NET Core Web Application (.NET Core)” template:
Name the app “DemoFlexApp” and save it in the default “Projects” directory for Visual Studio. In the next dialog, select “Web Application” and press “OK”:
This will generate the app for you. Try it locally by pressing F5, which will build and run the app and open it in a browser window. Once you're done, stop the app by closing the browser and stopping the debugging session.

Step 2: Package it as a Docker container

Now let's prepare our app to run on App Engine Flexible. The first step is to define the container and its contents. Don't worry, you won't need to install Docker; App Engine Flexible can build Docker images remotely as part of the deployment process.

For this section, we'll work from the command line. Open up a new command line window by using Win+R and typing cmd.exe in the dialog.

Now we need to navigate to the directory that contains the project you just created. You can get the path to the project by right clicking on the project in Visual Studio’s Solution Explorer and using the “Open Folder in File Explorer” option:
You can then copy the path from the File Explorer window and paste it into the command line window.

We'll start by creating the contents of the Docker image for our app, including all its packages, pages, and client side scripts in a single directory. The dotnet CLI creates this directory by “publishing” your app to it, with the following command:

dotnet publish -c Release

Your app is now published to the default publish directory in the Release configuration. During the publishing process, the dotnet CLI resolves all of your app's dependencies and gathers them, together with all other files, into the output directory. This directory is what Microsoft calls a .NET Core Portable App; it contains all of the files that compose your app and can be used to run your app on any platform that .NET Core supports. You can run your app from this directory with this command:

cd bin\Release\netcoreapp1.0\publish
dotnet DemoFlexApp.dll

Be sure to be in the published directory when you run this command so that all of the resources can be found.

The next step is to configure the app that we'll deploy to App Engine Flexible. This requires two pieces:

  • The Dockerfile that describes how to package the app files into a Docker container
  • The app.yaml file that tells the Google Cloud SDK tools how to deploy the app

We will deploy the app from the “published” directory that you created above.

Take the following lines and copy them to a new file called “Dockerfile” under the “published” directory:

FROM microsoft/dotnet:1.0.1-core
COPY . /app
WORKDIR /app
EXPOSE 8080
ENV ASPNETCORE_URLS=http://*:8080
ENTRYPOINT ["dotnet", "DemoFlexApp.dll"]

A Dockerfile describes the content of a Docker image: it starts from an existing image and adds files and other changes to it. Our Dockerfile starts from the official Microsoft image, which is already configured to run .NET Core apps, and adds the app files and everything else necessary to run the app from that directory.

One important configuration included in our Dockerfile is the port on which the app listens for incoming traffic: port 8080, per App Engine Flexible requirements. This is accomplished by setting the ASPNETCORE_URLS environment variable, which ASP.NET Core apps use to determine the port to listen on.

The app.yaml file describes how to deploy the app to App Engine, in this case, the App Engine Flexible environment. Here's the minimum configuration file required to run on App Engine Flexible, specifying a custom runtime and the Flexible environment. Copy its contents and paste them into a new file called “app.yaml” under the “published” directory:

runtime: custom
vm: true

Step 3: Deploy to App Engine Flexible

Once you’ve saved the Dockerfile and app.yaml files to the published directory, you're ready to deploy your app to App Engine Flexible. We’re going to use the Google Cloud SDK to do this. Follow these steps to get the SDK fully set up on your box. You'll also need a Google Cloud Platform project with billing enabled.
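
If you're setting up the SDK for the first time, the configuration boils down to a couple of commands (a minimal sketch; <PROJECT_ID> is the ID of the billing-enabled project you created):

gcloud init
gcloud config set project <PROJECT_ID>

gcloud init walks you through signing in and choosing a default project; gcloud config set project lets you switch projects later without rerunning the whole flow.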

Once you've fully configured the app and selected a project to deploy it to, you can finally deploy to App Engine Flexible. To do that run this command:

gcloud app deploy app.yaml

The command will take some time to complete, especially the first time since it has to perform all the setup. Once done, open a browser to the newly deployed app:

gcloud app browse

There! You've created an ASP.NET Core app, packaged it as a Docker container and deployed it to Google App Engine Flexible. We look forward to seeing more ASP.NET apps running on Google App Engine.


Powering geospatial analysis: public geo datasets now on Google Cloud



With dozens of public satellites in orbit and many more scheduled over the next decade, the size and complexity of geospatial imagery continues to grow. It has become increasingly difficult to manage this flood of data and use it to gain valuable insights. That's why we're excited to announce that we're bringing two of the most important collections of public, cost-free satellite imagery to Google Cloud: Landsat and Sentinel-2.

The Landsat mission, developed under a joint program of the USGS and NASA, is the longest continuous space-based record of Earth's land in existence, dating back to 1972 with the Landsat 1 satellite. Landsat imagery sets the standard for Earth observation data due to the length of the mission and the rich data provided by its multispectral sensors. Landsat data has proven invaluable to agriculture, geology, forestry, regional planning, education, mapping, global change and disaster response. This collection includes the complete USGS archive of the Landsat 4, 5, 7 and 8 satellites, and it is updated daily as new data arrives from Landsat 7 and 8. The collection contains a total of 4 million scenes and 1.3 petabytes of data covering 1984 to the present: more than 30 years of imagery of our Earth, ready for immediate analysis.

Sentinel-2, part of the European Union’s ambitious Copernicus Earth observation program, raised the bar for Earth observation data, with a Multi-Spectral Instrument (MSI) that produces images of the Earth with a resolution of up to 10 meters per pixel, far sharper than that of Landsat. Sentinel-2 data is especially useful for agriculture, forestry and other land management applications. For example, it can be used to study leaf area and chlorophyll and water content, to map forest cover and soils, and to monitor inland waterways and coastal areas. Images of natural disasters such as floods and volcanic eruptions can also be used for disaster mapping and humanitarian relief efforts. The collection currently contains 970,000 images and over 430 terabytes of data, updated daily.
Brisbane, Australia, as viewed by Sentinel-2 (ESA)
Here at Google, we have years of experience working with the Landsat and Sentinel-2 satellite imagery collections. Our Google Earth Engine product, a cloud-based platform for doing petapixel-scale analysis of geospatial data, was created to help make analyzing these datasets quick and easy. Earth Engine’s vast catalog of data, with petabytes of public data, combined with an easy to use scripting interface and the power of Google infrastructure, has helped to revolutionize Earth observation. Now, by bringing the two most important datasets from Earth Engine into Google Cloud, we're also enabling customer workflows using Google Compute Engine, Google Cloud Machine Learning and any other Google Cloud services.

One customer that has taken advantage of the powerful combination of Google Cloud and these datasets is Descartes Labs. Descartes Labs is focused on combining machine learning and geospatial data to forecast global crop production. “For an early stage technology startup, satellite imagery can be impossibly expensive,” said Descartes Labs CEO Mark Johnson. “To make accurate machine learning models of major crops, we needed decades of satellite imagery from the entire globe. Thanks to Google Earth Engine hosting the entire Landsat archive publicly on Google Cloud, we can focus on algorithms instead of worrying about collecting petabytes of data. Earth observation will continue to improve with every new satellite launch and so will our ability to forecast global food supply. We’re excited that Google sees the potential in hosting open geospatial data on Google Cloud, since it will enable companies like ours to better understand the planet we live on.”
Humboldt, Iowa (Landsat 8, USGS)
Agricultural field edge boundaries and field segmentation from July 2016 of Humboldt, Iowa, generated using machine learning and Landsat data on Google Cloud.
Spaceknow is another company using Google Cloud to mine Landsat data for unique insights. Spaceknow brings transparency to the global economy by tracking global economic trends from space. Spaceknow's Urban Growth Index analyzes massive amounts of multispectral imagery in China and elsewhere. Using a TensorFlow-based deep learning framework capable of predicting semantic labels for multi-channel satellite imagery, Spaceknow determines the percentage of land categorized as urban-type for a specified geographic region. Furthermore, its China Satellite Manufacturing Index uses proprietary algorithms to analyze Landsat 7 and 8 imagery of over 6,000 industrial facilities across China, measuring levels of Chinese manufacturing activity. Using 2.2 billion satellite observations, this index covers over 500,000 square kilometers, and it can be quickly updated when new images arrive from the satellites. According to Pavel Machalek, the CEO of Spaceknow: "Google Cloud provides us with the unique capability to develop, train and deploy neural networks at unprecedented scale. Our customers depend on the information we provide for critical, day-to-day decision making."
Fuzhou, China, 2000 (Landsat 7, USGS) and 2016 (Landsat 8, USGS)
With over a petabyte of the world’s leading public satellite imagery data available at your fingertips, you can avoid the cost of storing the data and the time and cost required to download these large datasets and focus on what matters most: building products and services for your customers and users. Whether you're using Google Cloud’s leading machine learning and compute services or Earth Engine for simple and powerful analysis, we can help you turn pixels into knowledge to help your organization make better decisions.
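
If you'd like to explore the imagery directly, both collections are exposed as public Google Cloud Storage buckets that you can browse with gsutil. The bucket names below are our assumption of the canonical paths; see the public datasets documentation linked below for the authoritative locations:

gsutil ls gs://gcp-public-data-landsat/
gsutil ls gs://gcp-public-data-sentinel-2/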

Learn more about these new geo imagery datasets at http://cloud.google.com/storage/docs/public-datasets/ and about the full range of public datasets at http://cloud.google.com/public-datasets/.

Google Container Engine now on Kubernetes 1.4



Today, Kubernetes 1.4 is available to all Google Container Engine customers. In addition to all the new features in Kubernetes 1.4 — including multi-cluster federation, simplified setup and one-command install for popular workloads like MySQL, MariaDB and Jenkins — we’ve also taken big steps to make Google Cloud Platform (GCP) the best place to run your Kubernetes workloads.

Container Engine has continued its rapid growth, doubling in usage every 90 days, while still providing a fully managed Kubernetes service with 99.5% uptime for applications large and small. We’ve also made a number of improvements to the platform to make it even easier to manage and more powerful to use:

  • One-click alpha clusters can be spun up as easily as a regular cluster, so testing Kubernetes’ alpha features like persistent application support is a one-click operation (see the command line example after this list).
  • Support for AppArmor in the base image gives applications deployed to Container Engine multiple layers of defense-in-depth.
  • Integration with Kubernetes Cluster Federation allows you to add a Container Engine cluster to your existing federation, greatly simplifying cloud bursting and multi-cloud deployments.
  • Rich support for Google Cloud Identity & Access Management allows you to manage GKE clusters with the same multi-faceted roles you use across your GCP projects.
  • A new Google Container-VM Image makes upgrading a breeze and allows new upgrades to be automatically installed with a simple reboot.
  • Monitoring of all cluster add-ons ensures that all key functions for your cluster are available and ready to use — one less thing to think about when running a large distributed application.
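
For example, the one-click alpha clusters mentioned above can also be created from the command line. This is a minimal sketch (the cluster name is a placeholder, and alpha clusters are intended for short-lived testing rather than production workloads):

gcloud container clusters create my-alpha-cluster --enable-kubernetes-alpha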

From new startups to the largest organizations, we've seen tremendous adoption of Container Engine. Here are a few highlights:

  • Niantic, creator of the global phenomenon Pokémon GO, relies on Container Engine to power its extraordinary growth.
  • Philips' smart connected lighting system, Hue, receives 200 million transactions a day, easily handled by Container Engine.
  • Google Cloud ML, the new Cloud Machine Learning service from GCP, also runs fully on Container Engine.
  • And many more companies, from Box to Pearson, are choosing Kubernetes to manage their production workloads.

As always, if you'd like to help shape the future of Kubernetes, please participate in the Kubernetes community; we'd love to have you! Join the google-containers mailing list or the kubernetes-users and google-containers Slack channels.

Finally, if you’ve never tried GCP before, getting started is easy. Sign up for your free trial here.

Thank you for your support!