
Bringing you more flexibility and better Cloud Networking performance, GA of HTTPS Load Balancing and Akamai joins CDN Interconnect

Google’s global network is a key piece of our foundation, enabling all of Google Cloud Platform’s services. Our customers have reiterated to us the critical importance of business continuity and service quality for their key processes, especially around network performance given today’s media-rich web and mobile applications.

We’re making several important announcements today: the general availability of HTTPS Load Balancing, and sustained performance gains from Andromeda, our software-defined network virtualization stack, which customers benefit from immediately. We’re also introducing Cloud Router and Subnetworks, which together enable the fine-grained network management and control demanded by our leading enterprise customers.

In line with our belief that speed is a feature, we’re also extremely pleased to welcome Akamai into our CDN Interconnect program. Origin traffic from Google egressing out to select Akamai CDN locations will take a private route on Google’s edge network, helping to reduce latency and egress costs for our joint customers. Akamai’s peering with Google at a growing number of points-of-presence across Google’s extensive global networking footprint enables us to deliver to our customers the responsiveness they expect from Google’s services.

General Availability of HTTPS Load Balancing. Google’s private fiber network connects the data centers where your applications run to more than 70 global network points of presence. HTTPS Load Balancing deployed at these key points across the globe dramatically reduces latency and increases availability for your customers, which is critical to achieving the responsiveness users expect from today’s most demanding web and mobile apps. For full details, see the documentation.
Figure 1: Our global load balancing locations

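To make this concrete, here's a minimal sketch of wiring up an HTTPS load balancer with current gcloud syntax; the certificate files, the web-backend-service backend and the other resource names are placeholders, and the exact flags may differ from the commands available at launch.

    # Upload the certificate and key the load balancer will serve (placeholder files).
    gcloud compute ssl-certificates create www-cert \
        --certificate=cert.pem --private-key=key.pem

    # Send every request to an existing backend service (assumed to be set up already).
    gcloud compute url-maps create web-map --default-service=web-backend-service

    # Terminate TLS at Google's edge using the certificate above.
    gcloud compute target-https-proxies create web-https-proxy \
        --url-map=web-map --ssl-certificates=www-cert

    # Expose the proxy on a single global anycast IP on port 443.
    gcloud compute forwarding-rules create web-https-rule \
        --global --target-https-proxy=web-https-proxy --ports=443
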
Andromeda. Over the past year, we’ve written about the innovations we’ve made in Google’s data centers and networking to serve world-class services like Search, YouTube, Maps and Drive. The Cloud Platform team ensures that the benefits of these gains are passed on to customers with no additional effort on their part. Andromeda, Google’s software-defined network virtualization stack, is where many of these gains show up, especially in raw network performance. The chart below shows network throughput gains in Gbits/sec: in a little over a year, throughput has doubled for both single-stream and 200-stream benchmarks.




Subnetworks. Subnetworks allow you to segment IP space into regional prefixes. As a result, you gain fine-grained control over the full logical range in your private IP space, avoiding the need to create multiple networks, and providing full flexibility to create your desired topology.

Additionally, if you’re a VPN customer, you’ll see immediate enhancement as subnetworks allow you to configure your VPN gateway with different destination IP ranges per-region in the same network. In addition to providing more control over VPN routes, regional targeting affords lower latency compared to a single IP range spanning across all regions. Get started with subnetworks here.
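
As a quick sketch of what this looks like with current gcloud syntax (the network name, regions and CIDR ranges below are placeholders, and the flags may differ from the beta commands available at launch), you create a custom-mode network and then carve it into regional prefixes:

    # Create a network whose IP layout you control yourself (custom mode).
    gcloud compute networks create my-network --subnet-mode=custom

    # Define one regional prefix per region you need.
    gcloud compute networks subnets create us-central-subnet \
        --network=my-network --region=us-central1 --range=10.128.0.0/20
    gcloud compute networks subnets create europe-west-subnet \
        --network=my-network --region=europe-west1 --range=10.132.0.0/20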

Cloud Router. With Cloud Router, your enterprise-grade VPN to Google gets dynamic routing. Network topology changes on either end propagate automatically using BGP, eliminating the need to configure static routes or restart VPN tunnels. You get seamless connectivity with no traffic disruption. Learn more here.
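
As a rough sketch with current gcloud syntax (the router name, region and ASN are placeholders, and the commands may have differed while Cloud Router was in beta), you create a router on your network and then point your VPN tunnels at it so routes propagate over BGP instead of being configured statically:

    # Create a Cloud Router that exchanges routes over BGP for my-network.
    gcloud compute routers create my-router \
        --network=my-network --region=us-central1 --asn=65001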

Akamai and CDN Interconnect. Cloud Platform traffic egressing to select Akamai CDN locations travels over direct peering links and is priced at Google Cloud Interconnect rates. More information on using Akamai as a CDN Interconnect provider can be found here.

We’ll continue to invest and innovate in our networking capabilities, and pass the benefits of Google’s major networking enhancements to Cloud Platform customers. We always appreciate feedback and would love to learn how we can support your mission-critical workloads. Contact the Cloud Networking team to get started!

Posted by Morgan Dollard, Cloud Networking Product Management Lead

Enhancements to Container Engine and Container Registry

DevOps teams are adopting containers to make their development and deployment simpler. Google Cloud Platform has a complete suite of container offerings including Google Container Engine and Google Container Registry. Today we’re introducing some enhancements to them both, along with updates to our ecosystem to give you more options in managing container images and running services.


Container Registry


Docker Registry V2 API support. You can now push and pull Docker images to Container Registry using the V2 API, giving you content-addressable references, parallel layer downloads and digest-based pulls. Docker versions 1.6 and above support the V2 API; we recommend upgrading to the latest version. If you’re using a mix of Docker client versions, see the latest Docker documentation to check compatibility.
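
As a quick sketch (my-project and my-image are placeholders, and the exact push command depends on your Docker and gcloud versions), pushing an image looks like this:

    # Tag a local image into your project's Container Registry namespace.
    docker tag my-image gcr.io/my-project/my-image

    # Push via gcloud's Docker wrapper, which supplies registry credentials.
    gcloud docker push gcr.io/my-project/my-image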


Performance enhancements. Based on internal performance testing, this update pulls images 40% faster than the previous version.

Advanced Authentication. If you use a continuous delivery system (and we hope you do), it’s even easier to make it work with Container Registry; see the auth documentation page for details and setup. Learn how it works with popular CI/CD systems including Circle, Codeship, Drone.io, Jenkins, Shippable and Wercker.
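
One common pattern for CI/CD systems that can't run gcloud themselves is to log in to the registry with a service account's JSON key; here's a minimal sketch, where keyfile.json and the image path are placeholders:

    # Authenticate Docker to gcr.io using a service account key file.
    docker login -u _json_key -p "$(cat keyfile.json)" https://gcr.io

    # The CI job can then push and pull images like any other registry.
    docker push gcr.io/my-project/my-image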

TwistLock Integration. TwistLock provides rule violation detection and policy enforcement for containers in a registry or at runtime. They recently completed a beta with 15 customers and saw positive results. Using TwistLock with GCR and GKE is simple. See their blog for more details.



Container Engine


Today, on the heels of the Kubernetes 1.1 release, we’re bringing the latest from Kubernetes to Container Engine users. The performance improvements in this release mean you can run Google Container Engine in high-scale environments. Additional highlights of this release include:




  • Horizontal pod autoscaling, which smooths out the uneven experience users see when workloads go through spiky periods of utilization by scaling your pods up and down based on CPU usage (see the sketch after this list).

  • An HTTP load balancer that routes traffic to different Kubernetes services based on the incoming request, for example serving different sub-URLs from different services.

  • A re-architected networking system that uses native iptables, reduces tail latency by up to 80%, virtually eliminates CPU overhead and improves reliability. Available in Beta, you can manually enable it in GKE by running the following shell commands:
             # For each node in the cluster, annotate it to switch kube-proxy to
             # iptables mode, then restart kube-proxy over SSH (use your cluster's zone).
             for node in $(kubectl get nodes -o name | cut -f2 -d/); do
                   kubectl annotate node $node \
                      net.beta.kubernetes.io/proxy-mode=iptables;
                   gcloud compute ssh --zone=us-central1-b $node \
                      --command="sudo /etc/init.d/kube-proxy restart";
             done
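
As a sketch of the autoscaling support mentioned above (the controller name and thresholds are placeholders, and the exact kubectl flags may vary across 1.1 versions), you can attach a horizontal pod autoscaler to an existing replication controller from the command line:

    # Scale the my-app replication controller between 2 and 10 pods,
    # targeting roughly 80% average CPU utilization.
    kubectl autoscale rc my-app --min=2 --max=10 --cpu-percent=80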

These and other updates in the 1.1 release will be rolled out to all Container Engine users over the next week. Send us your feedback and connect with the community on the google-containers mailing list or on the Kubernetes google-containers Slack channel.

If you’re new to the Google Cloud Platform, getting started is easy. Sign up for your free trial here.

- Posted by Kit Merker, Product Manager, Google Cloud Platform