
Last month today: July on GCP

The month of July saw our Google Cloud Next ‘18 conference come and go, and there was plenty of exciting news, updates and demos to share from the show. Here’s a look at some of the most-read blog posts from July.

What caught your attention this month: Creating the open cloud
  • One of the most-read posts this month covered the launch of our Cloud Services Platform, which lets you build a true hybrid cloud infrastructure. Key components of Cloud Services Platform include the managed Istio service mesh, Google Kubernetes Engine (GKE) On-Prem and GKE Policy Management, Cloud Build for fully managed CI/CD, and several serverless offerings (more on that below). Combined, these technologies help you bring the consistency, security, speed and flexibility of the cloud to your own data center, along with the freedom to move workloads to the environment of your choice.
  • Another popular read was a rundown of Google Cloud’s new serverless offerings. These include core serverless compute announcements such as new App Engine runtimes and the general availability of Cloud Functions. The post also covers serverless containers, which let you run serverless workloads in a fully managed container environment; the GKE Serverless add-on, which makes it easy to run serverless workloads on Kubernetes Engine; and Knative, the open-source project on which that add-on is built. There’s even more in the post, including Cloud Build, Stackdriver monitoring and Cloud Firestore integration with GCP; for a taste of the developer experience, see the sketch just below.
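To give a concrete feel for the serverless compute model described above, here is a minimal sketch of an HTTP-triggered Cloud Function, written in Python (one of the runtimes discussed at Next ‘18). The function name and greeting logic are purely illustrative.

    # main.py - a minimal HTTP-triggered Cloud Function (illustrative).
    def hello_gcp(request):
        """Respond to an HTTP request with a short greeting.

        Args:
            request (flask.Request): the incoming HTTP request.
        Returns:
            Response text; Cloud Functions wraps it in an HTTP 200 response.
        """
        name = request.args.get("name", "world")
        return "Hello, {}!".format(name)

Once deployed (for example with the gcloud functions deploy command and an HTTP trigger), the platform handles scaling, patching and request routing for you, which is the whole point of the fully managed offerings above.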
Bringing detailed metrics and Kubernetes apps to the forefront
  • Another must-read post this month for many of you was Transparent SLIs: See Google Cloud the way your application experiences it, announcing the availability of detailed data insights on the GCP services your workloads use—helping you see like a Google site reliability engineer (SRE). These new service-level indicators (SLIs) go well beyond basic uptime and downtime to cover response codes, latency and more. You can break metrics out by GCP service to see details like API version, location and protocol, then filter and sort to get extremely fine-grained information on your software and the GCP services you use, which helps cut resolution times and improve the support experience. Transparent SLIs are available now through the Stackdriver monitoring console (see the sketch after this list). Learn more here about the basics of using SLIs and other SRE tools to measure and manage availability.
  • It’s also now faster and easier to find production-ready commercial Kubernetes apps in the GCP Marketplace. These apps are prepackaged and configured to get up and running easily, whether on Kubernetes Engine or other Kubernetes clusters, and run the gamut from security, data analytics and developer tools to storage, machine learning and monitoring.
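As a rough illustration of the Transparent SLIs item above, the sketch below pulls per-service request counts from the Stackdriver Monitoring API with the Python client library. It assumes a pre-2.0 google-cloud-monitoring client and the consumed_api / serviceruntime.googleapis.com metric names that back these indicators; the project ID and the Spanner service filter are illustrative.

    # Sketch: query consumed-API metrics (the data behind Transparent SLIs)
    # for one GCP service over the last hour. Names here are assumptions.
    import time
    from google.cloud import monitoring_v3

    client = monitoring_v3.MetricServiceClient()
    project_name = client.project_path("my-gcp-project")  # illustrative project

    interval = monitoring_v3.types.TimeInterval()
    interval.end_time.seconds = int(time.time())
    interval.start_time.seconds = interval.end_time.seconds - 3600  # last hour

    # Only requests your workloads made to the Cloud Spanner API, for example.
    metric_filter = (
        'metric.type = "serviceruntime.googleapis.com/api/request_count" '
        'AND resource.type = "consumed_api" '
        'AND resource.label.service = "spanner.googleapis.com"'
    )

    for series in client.list_time_series(
            project_name, metric_filter, interval,
            monitoring_v3.enums.ListTimeSeriesRequest.TimeSeriesView.FULL):
        # Each series is broken out by labels such as method, version and location.
        print(series.resource.labels, len(series.points))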
There was obviously a lot to talk about at the show, and you can get even more detail on what happened at Next ‘18 here.

Building the cloud back-end
  • For all of you developing cloud apps with Java, the availability of Jib was an exciting announcement last month. This open-source container image builder, available as Gradle and Maven plugins, cuts out several steps from the Docker build flow. Jib does all the work required to package your app into a container image—you don’t need to write a Dockerfile or even have Docker installed. You end up with faster builds and reproducible container images.
  • And on that topic, this post on best practices for building containers was a hit, too, with tips that will help you run your environment more smoothly. It covers graceful application shutdowns, how to simplify containers, and how to choose and tag the container images you’ll use; a small example of the graceful-shutdown tip follows below.
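As a small illustration of the graceful-shutdown tip from that post, here is a sketch of a containerized worker that traps SIGTERM (the signal Kubernetes sends before stopping a container) and drains its work before exiting. The worker loop itself is a stand-in for your real application.

    # Sketch: exit cleanly when the container runtime asks us to stop.
    import signal
    import sys
    import time

    shutting_down = False

    def handle_sigterm(signum, frame):
        """Flag the process to drain instead of dying mid-task."""
        global shutting_down
        shutting_down = True

    signal.signal(signal.SIGTERM, handle_sigterm)

    while not shutting_down:
        # One unit of work: serve a request, pop a queue item, and so on.
        time.sleep(1)

    # Flush logs, close connections, then exit with success.
    sys.exit(0)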
It’s been a busy month at GCP, and we’re glad to share lots of new tools with you. Till next time, build away!

5 must-see network sessions at Google Cloud NEXT 2018



Whether you’re moving data to or from Google Cloud, or are knee-deep in your cloud network plumbing, there’s a lot to learn at Google Cloud Next 2018 next week in San Francisco. Here’s our shortlist of the five must-see networking breakout sessions at the show, in chronological order.
Operations engineer Rebekah Roediger delivering cloud network capacity one link at a time in our Netherlands cloud region (europe-west4).

GCP Network and Security Telemetry
Speakers: Ines Envid, Senior Product Manager; Yuri Solodkin, Staff Software Engineer; and Vineet Bhan, Head of Security Partnerships
Network and security telemetry is fundamental to operating your deployments in public clouds with confidence, giving you the visibility you need into the behavior of your network and access-control firewalls.
When: July 24th, 2018 12:35pm


A Year in GCP Networking
Speakers: Srinath Padmanabhan, Networking Product Marketing Manager, Google Cloud; and Nick Jacques, Lead Cloud Engineer, Target
In this session, we will talk about the valuable advancements made in GCP networking over the last year. We will introduce you to the GCP network team and show you how to extract the most value from your GCP deployment.
When: July 24th, 2018 1:55pm


Cloud Load Balancing Deep Dive and Best Practices
Speakers: Prajakta Joshi, Sr. Product Manager and Mike Columbus, Networking Specialist Team Manager
Google Cloud Load Balancing lets enterprises and cloud-native companies deliver highly available, scalable, low-latency cloud services with a global footprint. You will see demos and learn how enterprise customers deploy Cloud Load Balancing and the best practices they use to deliver smart, secure, modern services across the globe.
When: July 25th, 2018 12:35pm


VPC Deep Dive and Best Practices
Speakers: Emanuele Mazza, Networking Product Specialist, Google; Neha Pattan, Software Engineer, Google; and Kamal Congevaram Muralidharan, Senior Member of Technical Staff, PayPal
This session will walk you through the unique operational advantages of GCP VPC for your enterprise cloud deployments. We’ll go through detailed use cases, how to seal and audit your VPC, how to extend your VPC to on-prem in hybrid scenarios, and how to deploy highly available services.
When: July 26th, 2018 9:00am


Hybrid Connectivity - Reliably Extending Your Enterprise Network to GCP
Speaker: John Veizades, Product Manager, Google Cloud
In this session, you will learn how to connect to GCP with highly reliable and secure networking that extends your data center network into the cloud. We will cover the details of resilient routing techniques, access to Google APIs from on-premises networks, connection locations, and the partners that support connectivity to GCP, all designed to support mission-critical network connectivity to GCP.
When: July 26th, 2018 11:40am


Be sure to reserve your spot in these sessions today—space is filling up!

Last month today: GCP in June

In June, we had a lot to discuss about getting the most out of the cloud for your business, from speeding up web traffic to running fully managed apps easily. Here’s a quick look at some of the highlights from Google Cloud Platform (GCP) news this month.

What caught your attention this month

Some of the most-read stories this month reflected new technology developments or integrations that will be useful for developers and engineers.
  • You can now deploy your Node.js app to the Google App Engine standard environment—and based on readership, many of you are excited about this. Node.js works easily on App Engine, without any language, module or API restrictions. You’ll get fast deployment times and a fully managed experience once your apps are deployed, just as with the other runtimes on App Engine.
  • QUIC is a transport protocol, optimized for HTTPS, that makes web traffic run faster. The protocol itself isn’t new, but last month we announced QUIC support for our HTTPS load balancers. Network performance is a huge part of a successful public cloud operation, so this new support could make a big impact on page load times for your cloud services. Enabling QUIC means your connections can be established faster, which is especially useful for latency-prone connections, and clients that don’t yet support QUIC will seamlessly continue to use HTTPS (see the sketch after this list for how to turn it on).
  • If you’re a Kubernetes fan, you may have already explored the new kubemci command-line interface (CLI). It lets you configure ingress for multi-cluster Kubernetes Engine environments, using Cloud Load Balancer. It’s also the first step in a long-term solution that will consist of a multi-cluster ingress system controlled via kubectl CLI or Kubernetes API calls.
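For the QUIC item above, enabling the protocol is a setting on the HTTPS load balancer’s target proxy. The sketch below flips that setting with the google-api-python-client; it assumes the Compute API’s targetHttpsProxies.setQuicOverride method is available on the API version you build, and that Application Default Credentials are configured. The project and proxy names are illustrative.

    # Sketch: enable QUIC on an existing HTTPS load balancer by updating
    # its target HTTPS proxy. Resource names here are illustrative.
    from googleapiclient import discovery

    compute = discovery.build("compute", "v1")  # uses Application Default Credentials

    request = compute.targetHttpsProxies().setQuicOverride(
        project="my-gcp-project",
        targetHttpsProxy="my-https-proxy",
        body={"quicOverride": "ENABLE"},  # other values: NONE, DISABLE
    )
    print(request.execute())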

Hot topics

You can now run your GCP workloads in Finland to improve availability and reduce your latency in the Nordics, and we announced that the Los Angeles region will open next month.

We also added some new storage tools to your arsenal. Cloud Filestore is a new GCP storage option for running enterprise applications that need a file system interface and a shared file system for their data. It’s fully managed and offers high performance for applications that need low latency and high throughput. For those of you supporting and running creative-industry applications on GCP infrastructure, Cloud Filestore works great for render farms, website hosting and content management systems; a small sketch of the idea follows below.
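Because Filestore exposes a plain file system (mounted over NFS), applications don’t need a special client library. The sketch below assumes a hypothetical mount point at /mnt/filestore shared by, say, a fleet of render workers.

    # Sketch: ordinary file I/O against a shared Filestore mount.
    from pathlib import Path

    SHARE = Path("/mnt/filestore")  # assumed NFS mount point

    frame = SHARE / "renders" / "scene01" / "frame_0001.exr"
    frame.parent.mkdir(parents=True, exist_ok=True)
    frame.write_bytes(b"...rendered frame data...")  # placeholder payload

    # Every client that mounts the same share sees the same files.
    print(sorted(p.name for p in frame.parent.iterdir()))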

In addition, the Transfer Appliance became generally available in June, offering a cloud data migration path that works well if you have more than 20 TB of data to upload to GCP, or data that would take more than a week to upload over the network. Early Transfer Appliance customers have gotten a quick start on analytics projects by moving test data to GCP, and have also used it to move backup data and some or all of a data center to GCP.

And in the “Cloud powers some very cool projects” category, take a look at how the creator of the new Dragon Ball Legends game built its backend on GCP. Bandai Namco Entertainment knew that players of the latest addition to their Dragon Ball Z franchise would want to play against one another in real time, wherever they are in the world. They turned to GCP for the scalability, global reach and real-time analytics they needed to make that possible.

Behind the compute curtain

This news of sole-tenant nodes for Google Compute Engine will come in handy for those of you at companies that need dedicated cloud servers. With this option, you launch new VM instances as usual, but on server capacity dedicated to you. That’s valuable for industries with strict compliance and regulatory rules around data, and it lets you drive higher utilization from your VM instances while controlling instance placement, either manually or automatically via Compute Engine.

Building applications on GCP involves some upfront choices for app developers: which compute offering will you pick, and what language will you use? Whether you’re a fan of VMs, containers, App Engine or Cloud Functions, this post offers excellent concrete examples of the time and effort involved in building a “Hello, World” app on each of GCP’s four compute platforms.

That’s a wrap for June. This month brings the Next ‘18 conference, July 24-26. Join us and thousands of other IT practitioners in San Francisco to learn all you need to know about building a modern cloud infrastructure. Till then, build away!

Last month today: GCP in May



When it comes to Google Cloud Platform (GCP), every month is chock full of news and information. We’re kicking off a monthly recap of key moments you may have missed.

What caught your attention this month:

Announcements about open source projects were some of the most-read this month.
  • Open-sourcing gVisor, a sandboxed container runtime, was by far your favorite post in May. gVisor is a sandbox that lets you run containers in strongly isolated environments. It’s isolated like a virtual machine, but more lightweight and more flexible, since it interfaces with the host OS just like another process.
  • Our introduction of Asylo, an open-source framework for confidential computing, also got your attention. As more and more sensitive workloads move to the cloud, many businesses want to be able to verify that those workloads are properly isolated, inside a closed environment that’s available only to authorized users. Asylo democratizes trusted execution environments (TEEs) by allowing them to run on generic hardware. With Asylo, developers will be able to run their workloads encrypted in a highly secure environment, whether on-premises or in the cloud.
  • Rounding out the open-source fun for the month was our introduction of the beta of Cloud Memorystore, a fully managed in-memory data store service for Redis. Cloud Memorystore gives you the caching power of Redis to reduce latency, without making you manage the details; see the sketch below for what that looks like from application code.
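As a rough sketch of that caching pattern: because Cloud Memorystore speaks the standard Redis protocol, an ordinary client such as redis-py connects to the instance’s internal IP with no Memorystore-specific code. The host address and the load_profile_from_database helper below are illustrative.

    # Sketch: cache-aside reads against a Cloud Memorystore (Redis) instance.
    import redis

    cache = redis.StrictRedis(host="10.0.0.3", port=6379)  # instance's internal IP (illustrative)

    def get_profile(user_id, ttl_seconds=300):
        """Return a cached value, falling back to the primary store on a miss."""
        key = "profile:{}".format(user_id)
        cached = cache.get(key)
        if cached is not None:
            return cached
        profile = load_profile_from_database(user_id)  # hypothetical helper
        cache.setex(key, ttl_seconds, profile)         # cache for five minutes
        return profile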


Hot topics: Kubernetes, DevOps and SRE

Google Kubernetes Engine 1.10 debuted in May, and we had a lot to say about the new features this version enables, from security and networking improvements to brand-new monitoring functionality via Stackdriver Kubernetes Monitoring. Start with this post to see what’s new and how customers like Spotify are using Kubernetes Engine on Google Cloud.

And one of our recent posts also struck a chord, as two of our site reliability engineering (SRE) experts delved into the differences—and similarities—between SRE and DevOps. They have similar goals, mostly around creating flexible, agile dev environments, but SRE generally gets much more specific and prescriptive than DevOps in accomplishing them.

Under the radar: GCP adds infrastructure options

As you look for new ways to use GCP to run your business, our engineers are adding features and new releases to give you more power, resources and coverage.

First, we introduced ultramem Google Compute Engine machine types, which offer more memory and compute resources than any other Compute Engine VM instance. These machine types are especially useful for those of you running enterprise workloads that need a lot of memory, like data analytics or high-performance applications.

We’ve also been busy on the back-end in other ways as we continue adding new regional cloud computing infrastructure. Our third zone of the Singapore region opened in May, and we’ll open a Zurich region next year.

Stay tuned in June for more on the technologies behind Google Cloud—we’ve got lots up our sleeve.