Tag Archives: Compute

Google Cloud Platform for data center professionals: what you need to know



At Google Cloud, we love seeing customers migrate to our platform. Companies move to us for a variety of reasons, from low costs to our machine learning offerings. Some of our customers, like Spotify and Evernote, have described the various reasons that motivated them to migrate to Google Cloud.

However, we recognize that a migration of any size can be a challenging project, so today we're happy to announce the first part of a new resource to help our customers as they migrate. Google Cloud Platform for Data Center Professionals is a guide for customers who are looking to move to Google Cloud Platform (GCP) and are coming from non-cloud environments. We cover the basics of running IT infrastructure: Compute, Networking, Storage, and Management. We've tried to write this from the point of view of someone with minimal cloud experience, so we hope you find this guide a useful starting point.

This is the first part of an ongoing series. We'll add more content over time, to help describe the differences in various aspects of running your company's IT infrastructure.

We hope you find this useful in learning about GCP. Please tell us what you think and what else you'd like us to cover, and be sure to sign up for a free trial to follow along!

Build highly available services with general availability of Regional Managed Instance Groups



Businesses choose to build applications on Google Cloud Platform (GCP) for our low-latency and reliable global network. As customers build applications that are increasingly business-critical, designing for high-availability is no longer optional. That’s why we’re pleased to announce the general availability of Regional Managed Instance Groups in Google Compute Engine.

With virtually no effort on the part of customers, this release offers a fully managed service for creating highly available applications: simply specify the region in which to run your application, and Compute Engine automatically balances your machines across independent zones within the region. Combined with load balancing and autoscaling of your machine instances, your applications scale up and down gracefully based on policies fully within your control.
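
To give a concrete sense of the workflow, here's a minimal sketch of creating a regional managed instance group and enabling autoscaling from the gcloud command line (the group and template names are placeholders; substitute your own region and sizing policy):

# Create a regional managed instance group that spreads 6 instances
# across zones in us-central1 (template name is hypothetical).
$ gcloud compute instance-groups managed create my-app-mig \
    --region us-central1 \
    --template my-app-template \
    --size 6

# Optionally autoscale the same group based on CPU utilization.
$ gcloud compute instance-groups managed set-autoscaling my-app-mig \
    --region us-central1 \
    --max-num-replicas 20 \
    --target-cpu-utilization 0.6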

Distributing your application instances across multiple zones is a best practice that protects against adverse events such as a bad application build, networking problems or a zonal outage. Together with overprovisioning the size of your managed instance group, these practices ensure high availability for your applications in the regions where you serve your users.

Customers ranging from major consumer-facing brands like Snap Inc. and Waze to popular services like BreezoMeter, The Carousel and InShorts vetted regional managed instance groups during our alpha and beta periods.
It’s easy to get started with regional managed instance groups. Or let us know if we can assist with architecting your most important applications with the reliability users expect from today’s best cloud apps.


Making every (leap) second count with our new public NTP servers



As if 2016 wasn’t long enough, this year, a leap second will cause the last day of December to be one second longer than normal. But don’t worry, we’ve built support for the leap second into the time servers that regulate all Google services.

Even better, our Network Time Protocol (NTP) servers are now publicly available to anyone who needs to keep local clocks in sync with VM instances running on Google Compute Engine, to match the time used by Google APIs, or for those who just need a reliable time service. As you would expect, our public NTP service is backed by Google’s load balancers and atomic clocks in data centers around the world.

Here’s how we plan to handle the leap second and keep things running smoothly here at Google. It’s based on what we learned during the leap seconds in 2008, 2012 and 2015.

Leap seconds compensate for small and unpredictable changes in the Earth's rotation, as determined by the International Earth Rotation and Reference Systems Service (IERS). The IERS typically announces them six months in advance but the need for leap seconds is very irregular. This year, the leap second will happen at 23:59:60 UTC on December 31, or 3:59:60 pm PST.

No commonly used operating system is able to handle a minute with 61 seconds, and trying to special-case the leap second has caused many problems in the past. Instead of adding a single extra second to the end of the day, we'll run the clocks 0.0014% slower across the ten hours before and ten hours after the leap second (one extra second spread over that 72,000-second window works out to roughly 0.0014%), and “smear” the extra second across these twenty hours. For timekeeping purposes, December 31 will seem like any other day.

All Google services, including all APIs, will be synchronized on smeared time, as described above. You’ll also get smeared time for virtual machines on Compute Engine if you follow our recommended settings. You can use non-Google NTP servers if you don’t want your instances to use the leap smear, but don’t mix smearing and non-smearing time servers.
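
If you'd like your own machines to follow the same smeared time, a minimal sketch of an ntpd configuration that syncs exclusively against Google Public NTP (served from time.google.com) looks like this; remember not to list non-smearing servers alongside it:

# /etc/ntp.conf (sketch): use only Google Public NTP, which serves
# smeared time around the leap second.
server time.google.com iburst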

If you need any assistance, please visit our Getting Help page.

Happy New Year, and let the good times roll.

Announcing GPUs for Google Cloud Platform


CPU-based machines in the cloud are terrific for general purpose computing, but certain tasks such as rendering or large-scale simulations are much faster on specialized processors. Graphics Processing Units (GPUs) contain hundreds of times as many computational cores as CPUs and are great at accelerating risk analysis, studying molecular binding or optimizing the shape of a turbine blade. If your CPU-based instance feels like a Formula One race car but you’re in need of a rocket, you’re going to love our new cloud GPUs.

Early in 2017, Google Cloud Platform will offer GPUs worldwide for Google Compute Engine and Google Cloud Machine Learning users. Complex medical analysis, financial calculations, seismic/subsurface exploration, machine learning, video rendering, transcoding and scientific simulations are just some of the applications that can benefit from the highly parallel compute power of GPUs. GPUs in Google Cloud give you the freedom to focus on solving challenging computational problems while accessing GPU-equipped machines from anywhere. Whether you need GPUs for a few hours or several weeks, we’ve got you covered.

Google Cloud will offer the AMD FirePro S9300 x2, which supports powerful, GPU-based remote workstations. We'll also offer NVIDIA® Tesla® P100 and K80 GPUs for deep learning, AI and HPC applications that require powerful computation and analysis. GPUs are offered in passthrough mode to provide bare-metal performance. Up to 8 GPU dies can be attached per VM instance, including custom machine types.
Google Cloud GPUs give you the flexibility to mix and match infrastructure. You’ll be able to attach up to 8 GPU dies to any non-shared-core machine, whether you’re using an n1-highmem-8 instance with 3 TB of super-fast Local SSD or a custom 28 vCPU virtual machine with 180 GB of RAM. Like our VMs, GPUs will be priced per minute and GPU instances can be up and running within minutes from Google Cloud Console or from the gcloud command line. Whether you need one or dozens of instances, you only pay for what you use.
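
While the final provisioning details will be published when GPUs launch, a rough sketch of what attaching GPUs to an instance could look like from the gcloud command line is below; treat the accelerator type name, count and zone as illustrative assumptions rather than final syntax:

# Hypothetical sketch: a VM with 8 K80 GPU dies attached in passthrough mode.
$ gcloud compute instances create gpu-worker \
    --machine-type n1-highmem-8 \
    --accelerator type=nvidia-tesla-k80,count=8 \
    --maintenance-policy TERMINATE \
    --zone us-east1-c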

During an early access program, customers have been running machine learning training, seismic analysis, simulations and visualization on GPU instances. Startup MapD gets excellent results with a GPU-accelerated database.

"These new instances of GPUs in the Google Cloud offer extraordinary performance advantages over comparable CPU-based systems and underscore the inflection point we are seeing in computing today. Using standard analytical queries on the 1.2 billion row NYC taxi dataset, we found that a single Google n1-highmem-32 instance with 8 attached K80 dies is on average 85 times faster than Impala running on a cluster of 6 nodes each with 32 vCPUs. Further, the innovative SSD storage configuration via NVME further reduced cold load times by a factor of five. This performance offers tremendous flexibility for enterprises interested in millisecond speed at over billions of rows."

- Todd Mostak, MapD Founder and CEO

The Foundry, a visual effects software provider for the entertainment industry, has been experimenting with workstations in the cloud.

"At The Foundry, we're really excited about VFX in the cloud, and with the arrival of GPUs on Google Cloud Platform, we'll have access to the cutting edge of visualization technology, available on-demand and charged by the minute. The ramifications for our industry are enormous."

- Simon Pickles, Lead Engineer, Pipeline-in-the-Cloud

Tell us about your GPU computing requirements and sign up to be notified about GPU-related announcements using this survey. Additional information is available on our webpage.

Google Cloud, HEPCloud and probing the nature of Nature



Understanding the nature of the universe isn't a game for the resource-constrained. Today, we probe the very structure of matter using multi-billion dollar experimental machinery, hundreds of thousands of computing cores and exabytes of data storage. Together, the European Organization for Nuclear Research (CERN) and partners such as Fermilab built the Large Hadron Collider (LHC), the world's largest particle collider, to recreate and observe the first moments of the universe.

Today, we're excited to announce that Google Cloud Platform (GCP) is now a supported provider for HEPCloud, a project launched in June 2015 by Fermilab’s Scientific Computing Division to develop a virtual facility providing a common interface to local clusters, grids, high-performance computers and community and commercial clouds. Following the recommendations from a 2014 report by the Particle Physics Project Prioritization Panel to the national funding agencies, the HEPCloud project demonstrates the value of the elastic provisioning model using commercial clouds.

The high-energy physics (HEP) community's need for compute resources is not constant. It follows cycles of peaks and valleys driven by experiment schedules and other constraints. However, the conventional method of building data centers is to provide all the capacity needed to meet peak loads, which can lead to overprovisioned resources. To help mitigate this, Grid federations such as the Open Science Grid offer opportunistic access to compute resources across a number of partner facilities. With the appetite for compute power expected to grow more than 100-fold over the next decade, the need for an “elastic” model of dynamically provisioned resources that improves cost efficiency will only grow with it.

With Virtual Machines (VMs) that boot within seconds and per-minute billing, Google Compute Engine lets HEPCloud pay for only the compute it uses. Because the simulations that Fermilab needs to perform are fully independent and parallelizable, this workload is appropriate for Preemptible Virtual Machines. Without the need for bidding, Preemptible VMs can be up to 80% cheaper compared to regular VMs. Combined with Custom Machine Types, Fermilab is able to double the computing power of the Compact Muon Solenoid (CMS) experiment by adding 160,000 virtual cores and 320 TB of memory in a single region, for about $1400 per hour.
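
For a sense of what a single HEPCloud worker looks like at the infrastructure level, here's a sketch of provisioning one preemptible VM with a custom shape from the gcloud command line (the name, shape and zone are illustrative, not Fermilab's actual configuration):

# Sketch: one preemptible worker with a custom 4 vCPU / 8 GB shape.
$ gcloud compute instances create hepcloud-worker-001 \
    --preemptible \
    --custom-cpu 4 \
    --custom-memory 8GB \
    --zone us-central1-b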

At SC16 this week, Google and Fermilab will demonstrate how high-energy physics workflows can benefit from the elastic Google Cloud infrastructure. The demonstration involves computations that simulate billions of particles from the CMS detector (see fig. 1) at the LHC. Using Fermilab’s HEPCloud facility, the goal is to burst CMS workflows to Compute Engine instances for one week.
Fig. 1: The CMS detector before closure (credit: 2008 CERN, photo: Maximilien Brice, Michael Hoch, Joseph Gobin)

The demonstration also leverages HTCondor, a specialized workload management system for compute-intensive jobs, to manage resource provisioning and job scheduling. HTCondor manages VMs natively using the Compute Engine API. In conjunction with the HEPCloud Decision Engine component, it enables the use of the remote resources at scale at an affordable rate (fig. 2). With half a petabyte of input data in Google Cloud Storage, each task reads from the bucket via gcsfuse, performs its computation on Preemptible VMs, then transports the resulting output back to Fermilab through the US Department of Energy Office of Science's Energy Sciences Network (ESNet), a high-performance, unclassified network built to support scientific research.
Fig. 2: The flow of data from the CMS detector to scientific results through the CMS, HEPCloud and Google Cloud layers. Image of CMS event display © CERN by McCauley, Thomas; Taylor, Lucas; the CMS Collaboration is licensed under CC BY-SA 4.0.
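
The gcsfuse step is conceptually simple: each worker mounts the input bucket as a local filesystem and reads the data like ordinary files. A sketch, with a hypothetical bucket name and mount point:

# Sketch: mount a Cloud Storage bucket (hypothetical name) on a worker node
# so jobs can read input data as regular files.
$ mkdir -p /mnt/cms-input
$ gcsfuse cms-input-data /mnt/cms-input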

The demonstration shows that HTCondor, HEPCloud and GCP all work together to enable real HEP science to be conducted in a cost-effective burst mode at a scale that effectively doubles the current capability. The Fermilab project plans to transition the HEPCloud facility into production use by the HEP community in 2018.

“Every year we have to plan to provision computing resources for our High-Energy Physics experiments based on their overall computing needs for performing their science. Unfortunately, the computing utilization patterns of these experiments typically exhibit peaks and valleys during the year, which makes cost-effective provisioning difficult. To achieve this cost effectiveness we need our computing facility to be able to add and remove resources to track the demand of the experiments as a function of time. Our collaboration with commercial clouds is an important component of our strategy for achieving this elasticity of resources, as we aim to demonstrate with Google Cloud for the CMS experiment via the HEPCloud facility at SC16.”

- Panagiotis Spentzouris, Head of the Scientific Computing Division at Fermilab

If you're at SC16, stop by the Google booth and speak with experts on scalable high performance computing or spin up your own HTCondor cluster on Google Cloud Platform for your workloads.

Now shipping: Windows Server 2016 images on Google Compute Engine



The Google Cloud Platform (GCP) team is working hard to make GCP the best environment to run enterprise Windows workloads. To that end, we're happy to announce support for Windows Server 2016 Datacenter Edition, the latest version of Microsoft’s server operating system, on Google Compute Engine. Starting this week, you can launch Google Compute Engine instances from VM images with Microsoft Windows Server 2016 preinstalled. In addition, we now also support images for Microsoft SQL Server 2016 with Windows Server 2016. Specifically, the following versions are now generally available:

  • Windows Server 2016 Datacenter Edition
  • SQL Server Standard 2016 with Windows Server 2016
  • SQL Server Web 2016 with Windows Server 2016
  • SQL Server Express 2016 with Windows Server 2016
  • SQL Server Standard (2012, 2014, 2016) with Windows Server 2012 R2
  • SQL Server Web (2012, 2014, 2016) with Windows Server 2012 R2
  • SQL Server Express (2012, 2014, 2016) with Windows Server 2012 R2
  • and coming soon, SQL Server Enterprise (2012, 2014, 2016) with Windows Server (2012, 2016)

Enterprise customers can leverage Windows Server 2016’s advanced multi-layer security, powerful storage and management capabilities and support for Windows containers. Windows runs on Google’s world-class infrastructure, with dramatic price-to-performance advantages, customizable VM sizes, and state-of-the-art networking and security capabilities. In addition, pricing for Windows Server 2016 and SQL Server 2016 remains the same as previous versions of both products.


Getting started

Sign up for a free trial to deploy your Windows applications and receive a $300 credit. Use this credit toward spinning up instances with pre-configured images for Windows Server, Microsoft SQL Server and your .NET applications. You can create instances directly from the Cloud Console or launch a solution for Windows Server from Cloud Launcher. Here's the detailed documentation on how to create Microsoft Windows Server and SQL Server instances on GCP.
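
For example, launching a Windows Server 2016 VM from the command line is a one-liner once the Cloud SDK is installed (a sketch; the instance name, machine type and zone are placeholders, and the image family shown assumes the public windows-cloud image project):

# Sketch: create a VM from the public Windows Server 2016 image family.
$ gcloud compute instances create my-win-2016 \
    --image-project windows-cloud \
    --image-family windows-2016 \
    --machine-type n1-standard-2 \
    --zone us-central1-a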

The team is continuing the momentum for Windows on GCP since we announced comprehensive .NET developer solutions back in August, including a .NET client library for all Cloud Platform APIs available through NuGet. The Cloud Platform team has hand-authored libraries for Cloud Platform APIs, available as open source projects on GitHub, to which the community continues to contribute features. Learn how to build ASP.NET applications on GCP, or check out more resources on Windows Server and Microsoft SQL Server on GCP at cloud.google.com/windows and cloud.google.com/sql-server. If you need help migrating your Windows workloads, please contact the GCP team. We're eager to hear your feedback!


Managing containerized ASP.NET Core apps with Kubernetes



One of our goals here on the Google Cloud Platform team is to support the broadest possible array of platforms and operating systems. That’s why we’re so excited about ASP.NET Core, the next generation of the open source ASP.NET web framework built on .NET Core. With it, .NET developers can run their apps cross-platform on Windows, Mac and Linux.

One thing that ASP.NET Core does is allow .NET applications to run in Docker containers. All of a sudden, we’ve gone from Windows-only web apps to lean cross-platform web apps running in containers. This has been great to see!
ASP.NET Core supports running apps across a variety of operating system platforms
Containers can provide a stable runtime environment for apps, but they aren’t always easy to manage. You still need to worry about how to automate deployment of containers, how to scale up and down and how to upgrade or downgrade app versions reliably. In short, you need a container management platform that you can rely on in production.

That’s where the open-source Kubernetes platform comes in. Kubernetes provides high-level building blocks such as pods, labels, controllers and services that collectively help you deploy and maintain containerized apps. Google Container Engine provides a hosted version of Kubernetes, which greatly simplifies creating and managing Kubernetes clusters.

My colleague Ivan Naranjo recently published a blog post that shows you how to take an ASP.NET Core app, containerize it with Docker and run it on Google App Engine. In this post, we’ll take a containerized ASP.NET Core app and manage it with Kubernetes and Google Container Engine. You'll be surprised how easy it is, especially considering that running an ASP.NET app on a non-Windows platform was unthinkable until recently.

Prerequisites

I am assuming a Windows development environment, but the instructions are similar on Mac or Linux.

First, we need to install .NET Core, install Docker and install Google Cloud SDK for Windows. Then, we need to create a Google Cloud Platform project. We'll use this project later on to host our Kubernetes cluster on Container Engine.

Create a HelloWorld ASP.NET Core app

.NET Core comes with the .NET Core Command Line Tools, which make it really easy to create apps from the command line. Let’s create a HelloWorld folder and create a web app using the dotnet command:

$ mkdir HelloWorld
$ cd HelloWorld
$ dotnet new -t web

Restore the dependencies and run the app locally:

$ dotnet restore
$ dotnet run

You can then visit http://localhost:5000 to see the default ASP.NET Core page.

Get the app ready for publishing

Next, let’s pack the application and all of its dependencies into a folder to get it ready to publish.

$ dotnet publish -c Release

Once the app is published, we can test the resulting dll using the following:

$ cd bin/Release/netcoreapp1.0/publish/
$ dotnet HelloWorld.dll

Containerize the ASP.NET Core app with Docker

Let’s now take our HelloWorld app and containerize it with Docker. Create a Dockerfile in the root of our app folder:


FROM microsoft/dotnet:1.0.1-core
COPY . /app
WORKDIR /app
EXPOSE 8080/tcp
ENV ASPNETCORE_URLS http://*:8080
ENTRYPOINT ["dotnet", "HelloWorld.dll"]


This is the recipe for the Docker image that we'll create shortly. In a nutshell, we're creating an image based on the microsoft/dotnet:1.0.1-core image, copying the current directory to the /app directory in the container, setting the entrypoint that runs the app, making sure port 8080 is exposed and that ASP.NET Core uses that port.

Now we’re ready to build our Docker image and tag it with our Google Cloud project id:

$ docker build -t gcr.io/<PROJECT_ID>/hello-dotnet:v1 .

To make sure that our image is good, let’s run it locally in Docker:


$ docker run -d -p 8080:8080 -t gcr.io/<PROJECT_ID>/hello-dotnet:v1


Now when you visit http://localhost:8080, you'll see the same default ASP.NET Core page, but this time it's running inside a Docker container.

Create a Kubernetes cluster in Container Engine

We're ready to create our Kubernetes cluster, but first let's install kubectl. In the Google Cloud SDK Shell:

$ gcloud components install kubectl

Next, push our image to Google Container Registry using gcloud, so we can refer to it when we deploy to our Kubernetes cluster:

$ gcloud docker push gcr.io/<PROJECT_ID>/hello-dotnet:v1

Create a Kubernetes cluster with two nodes in Container Engine:

$ gcloud container clusters create hello-dotnet-cluster --num-nodes 2 --machine-type n1-standard-1 --zone europe-west1-b

This will take a little while, but when the cluster is ready, you should see something like this:

Creating cluster hello-dotnet-cluster...done.

Finally, configure kubectl command line access to the new cluster:

$ gcloud container clusters get-credentials hello-dotnet-cluster \
   --zone europe-west1-b --project <PROJECT_ID>

Deploy and run the app in Container Engine

At this point, we have our image hosted on Google Container Registry and we have our Kubernetes cluster ready in Google Container Engine. There’s only one thing left to do: run our image in our Kubernetes cluster. To do that, we can use the kubectl command line tool.

Create a deployment from our image in Kubernetes:


$ kubectl run hello-dotnet --image=gcr.io/<PROJECT_ID>/hello-dotnet:v1 \
 --port=8080
deployment "hello-dotnet" created

Make sure the deployment and pod are running:


$ kubectl get deployments
NAME           DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
hello-dotnet   1         1         1            0           28s

$ kubectl get pods
NAME                            READY     STATUS    RESTARTS   AGE
hello-dotnet-3797665162-gu99e   1/1       Running   0          1m

And expose our deployment to the outside world:



$ kubectl expose deployment hello-dotnet --type="LoadBalancer"
service "hello-dotnet" exposed

Once the service is ready, we can see the external IP address:


$ kubectl get services
NAME           CLUSTER-IP     EXTERNAL-IP      PORT(S)    AGE
hello-dotnet   XX.X.XXX.XXX   XXX.XXX.XX.XXX   8080/TCP   1m

Finally, if you visit the external IP address on port 8080, you should see the default ASP.NET Core app managed by Kubernetes!

It’s fantastic to see the ASP.NET and Linux worlds are coming together. With Kubernetes, ASP.NET Core apps can benefit from automated deployments, scaling, reliable upgrades and much more. It’s a great time to be a .NET developer, for sure!
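
As a quick taste of those benefits, scaling out the deployment or rolling out a new image version is a single kubectl command each (the v2 tag below is hypothetical):

# Scale the deployment from 1 replica to 3.
$ kubectl scale deployment hello-dotnet --replicas=3

# Roll out a new image version (tag is hypothetical) with a rolling update.
$ kubectl set image deployment/hello-dotnet hello-dotnet=gcr.io/<PROJECT_ID>/hello-dotnet:v2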


Compute Engine now with 3 TB of high-speed Local SSD and 64 TB of Persistent Disk per VM

To help your business grow, we're significantly increasing the size limits of all Google Compute Engine block storage products, including Local SSD and both types of Persistent Disk.

You can now attach up to 64 TB of Persistent Disk per VM for most machine types, for both Standard and SSD-backed Persistent Disk. The maximum volume size has also increased to 64 TB, eliminating the need to stripe disks to create larger volumes.
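
Creating and attaching a single large volume takes two gcloud commands (a sketch; the disk, instance and zone names are placeholders):

# Sketch: create a 64 TB SSD Persistent Disk and attach it to an existing VM.
$ gcloud compute disks create big-data-disk --size 65536GB --type pd-ssd --zone us-central1-f
$ gcloud compute instances attach-disk my-instance --disk big-data-disk --zone us-central1-f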

Persistent Disk provides fantastic price-performance and excellent usability for workloads that rely on durable block storage. Persistent Disk SSD delivers 30 IOPS per GB provisioned, up to 15,000 IOPS per instance (so a single 500 GB SSD volume can reach the full instance limit). Persistent Disk Standard is a great value at $0.04 per GB per month and provides 0.75 read IOPS and 1.5 write IOPS per GB. Performance limits are set at the instance level and can be achieved with just a single Persistent Disk.

We have also increased the amount of Local SSD that can be attached to a single virtual machine to 3 TB. Available in Beta today, this lets you attach twice as many Local SSD partitions to Google Compute Engine instances: up to eight 375 GB partitions, or 3 TB of high-IOPS SSD, can now be attached to any machine with at least one virtual CPU.
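
Here's a sketch of requesting the full 3 TB on a new instance by repeating the --local-ssd flag once per 375 GB partition (the instance name, machine type and zone are placeholders):

# Sketch: a VM with eight Local SSD partitions (8 x 375 GB = 3 TB).
$ gcloud compute instances create ssd-heavy-instance \
    --machine-type n1-highmem-8 \
    --local-ssd interface=NVME --local-ssd interface=NVME \
    --local-ssd interface=NVME --local-ssd interface=NVME \
    --local-ssd interface=NVME --local-ssd interface=NVME \
    --local-ssd interface=NVME --local-ssd interface=NVME \
    --zone us-central1-f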

We talked with Aaron Raddon, Founder and CTO at Lytics, who tested our larger Local SSDs. He found they improved Cassandra performance by 50% and provided provisioning flexibility that can lead to additional savings.

The new, larger SSD has the same incredible IOPS performance we announced in January, topping out at 680,000 random 4K read IOPS and 360,000 random 4K write IOPS. With Local SSD you can achieve multiple millions of operations per second for key-value stores, and a million writes per second on NoSQL databases using as few as 50 servers.

Local SSD retains the competitive pricing of $0.218 per GB/month while continuing to support extraordinary IOPS performance. As always, data stored in Local SSD is encrypted and our live migration technology means no downtime during maintenance. Local SSD also retains the flexibility of attaching to any instance type.

Siddharth Choudhuri, Principal Engineer at Levyx, stated that doubling capacity on Local SSDs with the same high IOPS is a game changer for businesses seeking low latency and high throughput on large datasets. It enables them to index billions of objects on a single, denser node in real time on Google Cloud Platform when paired with Levyx’s Helium data store.

To get started, head over to the Compute Engine console or read about Persistent Disk and Local SSD in the product documentation.

- Posted by John Barrus, Senior Product Manager, Google Cloud Platform

Improved Compute Engine Quota experience

As part of our constant improvements to the Google Cloud Platform console, we’ve recently updated our Google Compute Engine quotas page. Now you can easily see quota consumption levels and sort to find your most-used resources. This gives you a head start on determining and procuring any additional capacity you need, so you hit fewer speed bumps on your road to growth and success.
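
If you prefer the command line, the same quota numbers are visible there too (a sketch; the region name is a placeholder):

# Sketch: list project-wide and per-region Compute Engine quotas.
$ gcloud compute project-info describe --format="yaml(quotas)"
$ gcloud compute regions describe us-central1 --format="yaml(quotas)"
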
We’ve also improved the process of requesting more quota, which can be initiated directly from the quotas page by clicking on the “Request increase” button. We’ve added additional checks to the request form that help speed up our response processing time; now most requests are completed in minutes. With these changes, we’re making it even easier to do more with Cloud Platform.

You can access your console at https://console.cloud.google.com and learn more about how GCP can help you build better applications faster on the https://cloud.google.com web page.

Posted by Roy Peterkofsky, Product Manager