Tag Archives: Compute

With Google Kubernetes Engine regional clusters, master nodes are now highly available



We introduced highly available masters for Google Kubernetes Engine earlier this fall with our alpha launch of regional clusters. Today, regional clusters are in beta and ready to use at scale in Kubernetes Engine.

Regional clusters allow you to create a Kubernetes Engine cluster with a multi-master, highly available control plane that helps ensure higher cluster uptime. With regional clusters in Kubernetes Engine, you gain:
  • Resilience from single zone failure - Because your masters and nodes are available across a region rather than a single zone, your Kubernetes cluster is still fully functional if a zone goes down.
  • No downtime during master upgrades - Kubernetes Engine minimizes downtime during all Kubernetes master upgrades, but with a single master, some downtime is inevitable. By using regional clusters, the control plane remains online and available, even during upgrades.

How regional clusters work


When you create a regional cluster, Kubernetes Engine spreads your masters and nodes across three zones in a region, ensuring that you can experience a zonal failure and still remain online.

By default, Kubernetes Engine creates three nodes in each zone (giving you nine total nodes), but you can change the number of nodes in your cluster with the --num-nodes flag.
Creating a Kubernetes Engine regional cluster is simple. Let’s create a regional cluster with two nodes in each zone.

$ gcloud beta container clusters create my-regional-cluster --region=us-central1 --num-nodes=2
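If you have kubectl installed, a quick way to confirm the spread is to list the nodes together with the zone each one runs in (a sketch, not an official walkthrough; the zone label name below is the one Kubernetes used at the time of writing):

```shell
# Fetch credentials for the new regional cluster (beta command at the time of writing)
gcloud beta container clusters get-credentials my-regional-cluster --region=us-central1

# List nodes along with the zone each one runs in
kubectl get nodes -L failure-domain.beta.kubernetes.io/zone
```

With --num-nodes=2, you should see two nodes in each of the three zones.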

Or you can use Cloud Console to create a regional cluster.
For a more detailed explanation of the regional clusters feature along with additional flags you can use, check out the documentation.

Kubernetes Engine regional clusters are offered at no additional charge during the beta period. We will announce pricing as part of general availability. Until then, please send any feedback to [email protected].


Meet the Kubernetes Engine team at #KubeCon


This week the Kubernetes community gathers in Austin for the annual #KubeCon conference. The Google Cloud team will host various activities throughout the week. Join us for parties, workshops, and more than a dozen talks by experts. More info and ways to RSVP at g.co/kubecon.

Manage Google Kubernetes Engine from Cloud Console dashboard, now generally available



There are two main ways to manage Google Kubernetes Engine: the kubectl command line interface and Cloud Console, a web-based dashboard. Cloud Console for Kubernetes Engine is now generally available, and includes several new and exciting features to help you understand the state of your app, troubleshoot it and perform fixes.

Troubleshoot your app


To walk you through these new features, we’d like to introduce you to Alice, a DevOps admin running her environment on Kubernetes Engine. Alice logs into Cloud Console to see the status of her apps. She starts by looking at the unified Workloads view where she can inspect all her apps, no matter which cluster they run on. This is especially handy for Alice, as her team has different clusters for different environments. In this example she spots an issue with one of the frontends – its status is showing up red.

By clicking on the name of the workload, Alice sees a detailed view where she can start debugging. Here she sees graphs for CPU, memory and disk utilization and spots a sudden spike in the resource usage.

Before she starts to investigate the root cause, Alice decides to turn on Horizontal Pod Autoscaling to mitigate the application outage. The autoscale action is available from the menu at the top of the Cloud Console page. She increases the maximum number of replicas to 15 and enables autoscaling.
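The same mitigation can be scripted. Assuming the frontend is a Deployment named frontend (an illustrative name, not taken from the example), the kubectl equivalent would be roughly:

```shell
# Enable Horizontal Pod Autoscaling: scale between 3 and 15 replicas,
# targeting 80% average CPU utilization
kubectl autoscale deployment frontend --min=3 --max=15 --cpu-percent=80
```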

Now that the service is scaling up and can handle user traffic again, Alice decides to investigate the root cause of the increased CPU usage. She starts by investigating one of the pods and sees that it has high CPU usage. To look into this further, she opens the Logs tab to browse the recent logs for the offending pod.
The logs indicate that the problem is with the frontend’s http server. With this insight, Alice decides to connect to the running pod to debug it further. She opens Cloud Shell directly from Cloud Console and attaches to the selected pod. Alice does not need to worry about remembering the exact commands, finding the right credentials and setting kubectl context—the correct command is fully populated when Cloud Shell loads.

By running the Linux "top" command, Alice can see that the http server process is the culprit behind the spiking CPU. She can now investigate the code, find the bug and fix it using her favorite tools. Once the new code is ready, Alice comes back to the UI to perform a rolling update. Again, she finds the rolling update action at the top of the UI, and updates the image version. Cloud Console then performs the rolling update, displays its progress and highlights any problems that might have occurred during the update.
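For readers who prefer the command line, the same rolling update can be sketched with kubectl (the deployment, container and image names here are illustrative):

```shell
# Roll out the fixed image and watch the rollout progress
kubectl set image deployment/frontend frontend=gcr.io/my-project/frontend:v2
kubectl rollout status deployment/frontend
```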
Alice now inspects resource usage charts, status and logs for the frontend deployment to verify that it is working correctly. She can also perform the same rolling update action on a similar frontend deployment on a different cluster, without having to context-switch and provide new credentials.

Kubernetes Engine Cloud Console comes with other features to assist Kubernetes administrators with their daily routines. For example, it includes a YAML editor for modifying Kubernetes objects, and service visualizations that aggregate a service's related resources, such as pods and load balancers. You can learn more about these features in the Kubernetes Engine dashboards documentation.

Manage Kubernetes Engine clusters


Kubernetes Engine’s new Cloud Console experience also offers improvements for cluster administrators. Bob works in the same company as Alice and is responsible for administering the cluster where the frontend app lives.

While investigating the list of nodes in the cluster, Bob notices that all the nodes are running close to full utilization and that there's not enough capacity left in the cluster to schedule other workloads. He clicks on one of the nodes to investigate what’s happening with the pods scheduled there. He quickly realizes that because Alice turned on the Horizontal Pod Autoscaler, there are now multiple replicas of the frontend pods taking up all the space in the cluster.

Bob decides to edit the cluster right from Cloud Console and turn on cluster autoscaling. After a couple of minutes, the cluster scales up and everything starts working again.
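Cluster autoscaling can also be enabled from the command line. A sketch with illustrative names and limits (the exact command may require the beta component depending on your gcloud version):

```shell
# Let the default node pool grow from 3 to 10 nodes as pods demand capacity
gcloud container clusters update my-cluster \
    --zone=us-central1-a \
    --node-pool=default-pool \
    --enable-autoscaling --min-nodes=3 --max-nodes=10
```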

These are just some of the things that you can do from the Kubernetes Engine Cloud Console dashboard. To get started, simply log in to Cloud Console and click on the Kubernetes Engine tab. Let us know how you like it by clicking on the feedback button in the upper right-hand corner of the UI.


Introducing an easy way to deploy containers on Google Compute Engine virtual machines



Containers are a popular way to deploy software thanks to their lightweight size and resource requirements, dependency isolation and portability. Today, we’re introducing an easy way to deploy and run containers on Google Compute Engine virtual machines and managed instance groups. This feature, which is currently in beta, allows you to take advantage of container deployment consistency while staying in your familiar IaaS environment.

Now you can easily deploy containers wherever you may need them on Google Cloud: Google Kubernetes Engine for multi-workload, microservice friendly container orchestration, Google App Engine flexible environment, a fully managed application platform, and now Compute Engine for VM-level container deployment.

Running containers on Compute Engine instances is handy in a number of scenarios: when you need to optimize a CI/CD pipeline for applications running on VMs, fine-tune the VM shape and infrastructure configuration for a specialized workload, integrate a containerized application into your existing IaaS infrastructure, or launch a one-off instance of an application.

To run your container on a VM instance, or a managed instance group, simply provide an image name and specify your container runtime options when creating a VM or an instance template. Compute Engine takes care of the rest including supplying an up-to-date Container-Optimized OS image with Docker and starting the container upon VM boot with your runtime options.

You can now easily use containers without having to write startup scripts or learn about container orchestration tools, and can migrate to full container orchestration with Kubernetes Engine when you’re ready. Better yet, standard Compute Engine pricing applies: VM instances running containers cost the same as regular VMs.

How to deploy a container to a VM


To see the new container deployment method in action, let’s deploy an NGINX HTTP server to a virtual machine. To do this, you only need to configure three settings when creating a new instance:
  • Check Deploy a container image to this VM instance.
  • Provide Container image name. 
  • Check Allow HTTP traffic so that the VM instance can receive HTTP requests on port 80. 
Here's how the flow looks in Google Cloud Console.

Run a container from the gcloud command line

You can run a container on a VM instance with just one gcloud command:

gcloud beta compute instances create-with-container nginx-vm \
  --container-image gcr.io/cloud-marketplace/google/nginx1:1.12 \
  --tags http-server

Then, create a firewall rule to allow HTTP traffic to the VM instance so that you can see the NGINX welcome page:

gcloud compute firewall-rules create allow-http \
  --allow=tcp:80 --target-tags=http-server

Updating the container is just as easy:

gcloud beta compute instances update-container nginx-vm \
  --container-image gcr.io/cloud-marketplace/google/nginx1:1.13

Run a container on a managed instance group

With managed instance groups, you can take advantage of VM-level features like autoscaling, automatic recreation of unhealthy virtual machines, rolling updates, multi-zone deployments and load balancing. Running containers on managed instance groups is just as easy as on individual VMs and takes only two steps: (1) create an instance template and (2) create a group.

Let’s deploy the same NGINX server to a managed instance group of three virtual machines.

Step 1: Create an instance template with a container.

gcloud beta compute instance-templates create-with-container nginx-it \
  --container-image gcr.io/cloud-marketplace/google/nginx1:1.12 \
  --tags http-server

The http-server tag allows HTTP connections on port 80 of the VMs created from the instance template. Make sure to keep the firewall rule from the previous example.

Step 2: Create a managed instance group.

gcloud compute instance-groups managed create nginx-mig \
  --template nginx-it \
  --size 3

The group will have three VM instances, each running the NGINX container.
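You can verify the result by listing the instances the group created:

```shell
# Show the three VMs managed by the group and their current status
gcloud compute instance-groups managed list-instances nginx-mig
```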

Get started!

Interested in deploying containers on Compute Engine VM instances or managed instance groups? Take a look at the detailed step-by-step instructions and learn how to configure a range of container runtime options including environment variables, entrypoint command with parameters and volume mounts. Then, help us help you make using containers on Compute Engine even easier! Send your feedback, questions or requests to [email protected].

Sign up for Google Cloud today and get $300 in credits to try out running containers directly on Compute Engine instances.

New lower prices for GPUs and preemptible Local SSDs



We’ve been seeing customers (like Shazam and Schlumberger) harnessing the scale of Google Cloud, and the power of NVIDIA Tesla GPUs to innovate, accelerate and save money. Today we’re extending the benefits of GPUs by cutting the price of NVIDIA Tesla GPUs attached to on-demand Google Compute Engine virtual machines by up to 36 percent. In US regions, each K80 GPU attached to a VM is priced at $0.45 per hour while each P100 costs $1.46 per hour.

Lower-priced GPUs, together with custom VM shapes and Sustained Use Discounts, which provide up to an additional 30 percent off instance pricing, allow you to run highly parallelized compute tasks on GPUs with strong performance, all at a great price.

Our GPU virtual machines let you create a VM configuration with the exact performance and cost characteristics your workload needs. Specifically, we enable you to create VM shapes with the right number of vCPUs, GPUs and memory for your specific application. Optionally, if you need fast disk performance with your GPUs, you can attach up to 3TB of Local SSD to any GPU-enabled VM. In addition, to help ensure our Cloud GPU customers receive bare-metal performance, the hardware is passed through directly to the virtual machine.
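As a sketch of what such a custom shape might look like (the instance name, zone and machine type are illustrative; at the time of writing the --accelerator flag required the beta command group, and GPU instances must be created with --maintenance-policy TERMINATE):

```shell
# Create a VM with one K80 GPU and a Local SSD attached
gcloud beta compute instances create my-gpu-vm \
    --zone=us-central1-b \
    --machine-type=n1-standard-8 \
    --accelerator=type=nvidia-tesla-k80,count=1 \
    --local-ssd=interface=SCSI \
    --maintenance-policy=TERMINATE
```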

Scientists, artists and engineers need access to massively parallel computational power. Deep learning, physical simulation and molecular modeling can take hours instead of days on NVIDIA Tesla GPUs.

Regardless of the size of your workload, GCP can provide the right amount of computational power to help you get the job done.

As an added bonus, we’re also lowering the price of preemptible Local SSDs by almost 40 percent compared to on-demand Local SSDs. In the US this means $0.048 per GB-month.

We hope that the price reduction on NVIDIA Tesla GPUs and preemptible Local SSDs unlocks new opportunities and helps you solve more interesting business, engineering and scientific problems.

For more details, check out our documentation for GPUs. For more pricing information, take a look at the Compute Engine GPU pricing page or try out our pricing calculator. If you have questions or feedback, go to the Getting Help page.

Getting started with the power of GPU-enabled instances is easy—just start one up in the Google Cloud Platform Console. If you don’t have a GCP account yet, sign up today and get $300 in credits.

Introducing VRay GPU rendering and CaraVR support on Zync Render



Rendering visual effects is a great way to use state-of-the-art, pay-as-you-go cloud resources. Case in point, our Zync Render solution now leverages the recent release of NVIDIA GPUs on Google Cloud Platform with support for VRay GPU for Autodesk Maya and 3ds Max. Using Zync’s turnkey system and the hybrid render support in VRay 3.6, you can now spin up hundreds of GPUs on-demand to quickly render scenes optimized for VRay GPU workloads.

We now also support Foundry’s VR toolset CaraVR on Zync Render, so CaraVR users can leverage the massive scalability of Google Compute Engine to stitch together their large virtual reality datasets.

Rendering on GCP can be cost-effective, too. We recently moved to per-second billing with a one-minute minimum (down from 10 minutes), making Zync economical even for the smallest jobs.

Google Cloud’s suite of media and entertainment offerings is expansive – from content ingestion and creation to graphics rendering to distribution. Combined with our online video platform Anvato, core infrastructure offerings around compute, GPU and storage, cutting-edge machine learning and Hollywood studio-specific security engagements, Google Cloud provides comprehensive and end-to-end solutions for creative professionals to build media solutions of their choosing.

To learn more about Google Cloud in the media and entertainment field, visit our Google Cloud Media Solutions page. And to experience the power of GCP for yourself, sign up for a free trial.

Skylake processors now available in seven regions



Earlier this year, the Intel Xeon server processor (codenamed Skylake) became generally available on Google Compute Engine, providing you with the most powerful and technically advanced processors in the cloud. On Compute Engine, Skylake comes with finer-grained controls over your VMs, the ability to select your host CPU platform for any of our predefined and custom machine types, and new machine types that extend to 96 vCPUs and 1.4TB of memory per instance.

And now, we offer Skylake in our South Carolina, Mumbai and Singapore regions, joining Iowa, Oregon, Belgium and Taiwan, and bringing the total number of GCP regions with Skylake to seven globally. We’re also lowering the cost of Skylake VMs by 6-10 percent, depending on your specific machine configuration. With this price drop, we’re making it easier for you to choose the best platform for your applications. Just select the number of cores and amount of RAM you need and get all the computational power that Skylake on Compute Engine makes possible.
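Selecting Skylake for a VM is a single flag. For example (the instance name and zone are illustrative, and the flag was in the beta command group at the time of writing):

```shell
# Pin the VM to Skylake hosts with --min-cpu-platform
gcloud beta compute instances create my-skylake-vm \
    --zone=us-central1-a \
    --machine-type=n1-standard-96 \
    --min-cpu-platform="Intel Skylake"
```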

Already, what you’ve done with this additional computational power has been amazing! In the last six months, thousands of Compute Engine customers have used Skylake to run their applications faster, to achieve better performance, and to utilize new instruction sets like AVX512 to optimize their applications. Here’s what a few customers have to say about taking advantage of Compute Engine’s Skylake processors.

Alpha Vertex develops cognitive systems that provide advanced analytical capabilities to the financial community. Using Compute Engine 64-core machine types with Skylake for their ML systems allowed them to cut training times for their machine learning models.
“Using Google Cloud Platform, our Machine Learning (ML) training times have improved by 15 percent. We were able to build a Kubernetes cluster of 150 64-core Skylake processors in 15 minutes.” 
Michael Bishop, CTO, Alpha Vertex Inc

Milk VFX runs thousands of preemptible cores using one of the larger machine types (n1-highcpu-96) to create innovative and complex sequences for high-end television and feature films. At this scale, better performance means decreasing runtimes by days. With 96-vCPU instances and preemptible machines, they were able to reduce the number of nodes they needed and decrease their costs.
“By using Skylake with larger core machines we've been able to process more data faster, enabling our artists to be more productive, creative and cost effective. With preemptible machines we've cut the cost even more, so much so that we're already seeing savings made in such a short timeframe. More importantly, for the past 12 weeks since we started rendering all 3D on the GCP, we have met our deadlines without any late nights or weekend work and everyone is really happy.” 
Dave Goodbourn, Head of Systems, Milk Visual Effects

QuantConnect breaks down barriers to algorithmic trading by providing market data and a cluster computer so any engineer can quickly design an algorithmic trading system. They're constantly seeking the latest infrastructure innovations on Compute Engine.
“Our work at QuantConnect is constantly pushing the boundaries of cloud computing. When we learned of the addition of Skylake processors to the Google compute platform, we quickly joined as one of the early beta testers and converted infrastructure to harness it. The Skylake vCPUs improved our web-compiler speeds by 10 to 15 percent, making a measurable improvement to our user coding experience and increasing user satisfaction overall.” 
Jared Broad, Founder, QuantConnect

We're committed to making all our infrastructure innovations accessible to all Compute Engine customers. To start using Skylake processors in Compute Engine today, sign up for a new GCP account and get $300 in free trial credits to use on Skylake-powered VMs.

Intel Performance Libraries and Python Distribution enhance performance and scaling of Intel® Xeon® Scalable (‘Skylake’) processors on GCP



Google was pleased to be the first cloud vendor to offer the latest-generation Intel® Xeon® Scalable (‘Skylake’) processors in February 2017. With their higher core counts, improved on-chip interconnect with the new Intel® Mesh Architecture, enhanced memory subsystems and Intel® Advanced Vector Extensions-512 (AVX-512) functional units, these processors are a great fit for demanding HPC applications that need high floating-point operation rates (FLOPS) and the operand bandwidth to feed the processing pipelines.
New Intel® Mesh Architecture for Xeon Scalable Processors

Skylake raises the performance bar significantly, but a processor is only as powerful as the software that runs on it. So today we're announcing that the Intel Performance Libraries are now freely available for Google Cloud Platform (GCP) Compute Engine. These libraries, which include the Intel® Math Kernel Library, Intel® Data Analytics Acceleration Library, Intel® Integrated Performance Primitives, Intel® Threading Building Blocks, and Intel® MPI Library, integrate key communication and computation kernels that have been tuned and optimized for this latest Intel processor family, in terms of both sequential pipeline flow and parallel execution. These components are useful across all the Intel Xeon processor families in GCP, but they're of particular interest for applications that can use them to fully exploit the scale of 96 vCPU instances on Skylake-based servers.

Scaling out to Skylake can result in dramatic performance improvements. This parallel SGEMM matrix multiplication benchmark result, run by Intel engineers on GCP, shows the advantage obtained by going from a 64 vCPU GCP instance on an Intel® Xeon processor E5 (“Broadwell”) system to an instance with 96 vCPUs on Intel Xeon Scalable (“Skylake”) processors, using the Intel® MKL on GCP. Using half or fewer of the available vCPUs reduces hyper-thread sharing of AVX-512 functional units and leads to higher efficiency.

In addition to pre-compiled performance libraries, GCP users now have free access to the Intel® Distribution for Python, a distribution of both python2 and python3, which uses the Intel instruction features and pipelines for maximum effect.
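One common way to install the Intel® Distribution for Python on a Compute Engine VM is through conda. The channel and package names below follow Intel's published instructions at the time of writing, so treat this as a sketch and check their current documentation:

```shell
# Create and activate an environment from Intel's conda channel
conda create -n idp -c intel intelpython3_full python=3
source activate idp
```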

The following chart shows example performance improvements delivered by the optimized scikit-learn K-means functions in the Intel® Distribution for Python over the stock open source Python distribution.
“We’re delighted that Google Cloud Platform users will experience the best of Intel® Xeon® Scalable processors using the Intel® Distribution for Python and the Intel performance libraries Intel® MKL, Intel® DAAL, Intel® TBB, Intel® IPP and Intel® MPI. These software tools are carefully tuned to deliver the workload-optimized performance benefits of the advanced processors that Google has deployed, including 96 vCPUs and workload-optimized vector capabilities provided by Intel® AVX-512.”  
Sanjiv Shah, VP and GM, Software Development tools for technical, enterprise, and cloud computing at Intel
For more information about Intel and GCP, or to access the installation instructions for the Intel Performance Library and Python packages, visit the Intel and Google Cloud Platform page.

Introducing Certified Kubernetes (and Google Kubernetes Engine!)



When Google launched Kubernetes three years ago, we knew based on our 10 years of experience with Borg how useful it would be to developers. But even we couldn’t have predicted just how successful it would become. Kubernetes is one of the world’s highest velocity open source projects, supported by a diverse community of contributors. It was designed at its heart to run anywhere, and dozens of vendors have created their own Kubernetes offerings.

It's critical to Kubernetes users that their applications run reliably across different Kubernetes environments, and that they can access the new features in a timely manner. To ensure a consistent developer experience across different Kubernetes offerings, we’ve been working with the Cloud Native Computing Foundation (CNCF) and the Kubernetes community to create the Certified Kubernetes Conformance Program. The Certified Kubernetes program officially launched today, and our Kubernetes service is among the first to be certified.

Choosing a Certified Kubernetes platform like ours and those from our partners brings both benefits and peace of mind, especially for organizations with hybrid deployments. With the greater compatibility of Certified Kubernetes, you get:
  • Smooth migrations between on-premises and cloud environments, and a greater ability to split a single workload across multiple environments 
  • Consistent upgrades
  • Access to community software and support resources
The CNCF hosts a complete list of Certified Kubernetes platforms and distributions. If you use a Kubernetes offering that's not on the list, encourage them to become certified as soon as possible!

Putting the K in GKE


One of the benefits of participating in the Certified Kubernetes Conformance Program is being able to use the name “Kubernetes” in your product. With that, we’re taking this opportunity to rename Container Engine to Kubernetes Engine. From the beginning, Container Engine’s acronym has been GKE in a nod to Kubernetes. Now, as a Certified Kubernetes offering, we can officially put the K in GKE.

While the Kubernetes Engine name is new, everything else about the service is unchanged—it’s still the same great managed environment for deploying containerized applications that you trust to run your production environments. To learn more about Kubernetes Engine, visit the product page, or the documentation for a wealth of quickstarts, tutorials and how-tos. And as always, if you’re just getting started with containers and Google Cloud Platform, be sure to sign up for a free trial.

Commvault and Google Cloud partner on cloud-based data protection and simpler “lift and shift” to the cloud



Today at Commvault Go 2017, we announced a new strategic alliance with Commvault to enable you to benefit from advanced data protection in the cloud as well as on-premises, and to make it easier to “lift-and-shift” workloads to Google Cloud Platform (GCP).

At Google Cloud, we strive to provide you with the best offerings not just to store but also to use your data. For example, if you’re looking for data protection, you can benefit from our unique Coldline class as part of Google Cloud Storage, which provides immediate access to your data at archival storage prices. You can test this for free. Try serving an image or video directly from the Coldline storage tier and it will return within milliseconds. Then there’s our partner Forsythe, whose data analytics-as-a-service offering allows you to bring your backup data from Commvault to Google Cloud Storage and then analyze it using GCP machine learning and data loss prevention services.
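Creating a Coldline bucket and putting an object in it takes only a couple of gsutil commands (bucket and file names here are illustrative):

```shell
# Create a Coldline-class bucket and upload an object to it
gsutil mb -c coldline -l us gs://my-coldline-bucket
gsutil cp image.jpg gs://my-coldline-bucket/
```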

We work hard with our technology partners to deliver solutions that are easy to use and cost-effective. We're working with Commvault on a number of initiatives, specifically:
  • Backup to Google Cloud Storage Coldline: If you use Commvault, you can now use Coldline in addition to Regional and Nearline classes as your storage target. Check out this video to see how easy it is to set up Cloud Storage with Commvault.
  • Protect workloads in the cloud: As enterprises move their applications to Google Compute Engine, you can use the same data protection policies that you use on-premises with Commvault’s data protection software. Commvault supports a wide range of common enterprise applications from SAP, Exchange, SQL, DB2, and PostgreSQL, to big data applications such as GPFS, MongoDB, Hadoop and many more.
  • G Suite backup with Commvault: You can now use the Commvault platform to backup and recover data from G Suite applications such as Gmail and Drive.
We're excited to work with Commvault to bring more capabilities to our joint customers in the future, such as enhanced data visibility via analytics and the ability to migrate and/or recover VMs in Compute Engine for on-premises workloads.

If you’re planning to attend Commvault Go this week, visit our booth to learn more about our partnership with Commvault and how to use GCP for backup and disaster recovery with Commvault!

Announcing Go 1.8 on App Engine Standard Environment


We’re happy to announce that Go 1.8 for the Google App Engine standard environment is now generally available and is covered by the App Engine Service Level Agreement (SLA). As of today, Go 1.8 will also be the default for newly-deployed apps that specify "api_version:go1" in their "app.yaml" files. Existing deployments will not be modified.
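A minimal app.yaml for a Go app on the standard environment looks roughly like this (the handler configuration will vary by app):

```yaml
runtime: go
api_version: go1   # now resolves to Go 1.8 for newly-deployed apps

handlers:
- url: /.*
  script: _go_app
```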

Go 1.8 brings a number of library, runtime, performance and security improvements (see the Go 1.7 Release Notes and the Go 1.8 Release Notes for details). We encourage you to test and re-deploy your apps to make use of them.

Note that the old x/net/context package was moved to the standard library as the “context” package starting in Go 1.7. You can automatically update your imports via "go tool fix -r context" if you have Go 1.8 installed.

If you need to continue deploying apps using the old Go 1.6 runtime, you can update your "app.yaml" file to specify "api_version:go1.6".