
What a year! Google Cloud Platform in 2017



The end of the year is a time for reflection . . . and making lists. As 2017 comes to a close, we thought we’d review some of the most memorable Google Cloud Platform (GCP) product announcements, white papers and how-tos, as judged by popularity with our readership.

As we pulled the data for this post, some definite themes emerged about your interests when it comes to GCP:
  1. You love to hear about advanced infrastructure: CPUs, GPUs, TPUs, better network plumbing and more regions. 
  2.  How we harden our infrastructure is endlessly interesting to you, as are tips about how to use our security services. 
  3.  Open source is always a crowd-pleaser, particularly if it presents a cloud-native solution to an age-old problem. 
  4.  You’re inspired by Google innovation — unique technologies that we developed to address internal, Google-scale problems. 

So, without further ado, we present to you the most-read stories of 2017.

Cutting-edge infrastructure

If you subscribe to the “bigger is always better” theory of cloud infrastructure, then you were a happy camper this year. Early in 2017, we announced that GCP would be the first cloud provider to offer the Intel Skylake architecture; GPUs for Compute Engine and Cloud Machine Learning became generally available; and Shazam talked about why cloud GPUs made sense for them. In the spring, you devoured a piece on the performance of TPUs, and another about the then-largest cloud-based compute cluster. We announced yet more new GPU models, and topping it all off, Compute Engine began offering machine types with a whopping 96 vCPUs and 624GB of memory.

It wasn’t just our chip offerings that grabbed your attention — you were pretty jazzed about Google Cloud network infrastructure too. You read deep dives about Espresso, our peering-edge architecture, TCP BBR congestion control and improved Compute Engine latency with Andromeda 2.1. You also dug stories about new networking features: Dedicated Interconnect, Network Service Tiers and GCP’s unique take on sneakernet: Transfer Appliance.

What’s the use of great infrastructure without somewhere to put it? 2017 was also a year of major geographic expansion. We started out the year with six regions, and ended it with 13, adding Northern Virginia, Singapore, Sydney, London, Germany, São Paulo and Mumbai. This was also the year that we shed our Earthly shackles, and expanded to Mars ;)

Security above all


Google has historically gone to great lengths to secure our infrastructure, and this was the year we discussed some of those advanced techniques in our popular Security in plaintext series. Among them: 7 ways we harden our KVM hypervisor, Fuzzing PCI Express and Titan in depth.

You also grooved on new GCP security services: Cloud Key Management and managed SSL certificates for App Engine applications. Finally, you took heart in a white paper on how to implement BeyondCorp as a more secure alternative to VPN, and support for the European GDPR data protection laws across GCP.

Open, hybrid development


When you think about GCP and open source, Kubernetes springs to mind. We open-sourced the container management platform back in 2014, but this year we showed that GCP is an optimal place to run it. It’s consistently among the first cloud services to run the latest version (most recently, Kubernetes 1.8) and comes with advanced management features out of the box. And as of this fall, it’s certified as a conformant Kubernetes distribution, complete with a new name: Google Kubernetes Engine.

Part of Kubernetes’ draw is as a platform-agnostic stepping stone to the cloud. Accordingly, many of you flocked to stories about Kubernetes and containers in hybrid scenarios. Think Pivotal Container Service and Kubernetes’ role in our new partnership with Cisco. The developers among you were smitten with Cloud Container Builder, a stand-alone tool for building container images, regardless of where you deploy them.

But our open source efforts aren’t limited to Kubernetes — we also made significant contributions to Spinnaker 1.0, and helped launch the Istio and Grafeas projects. You ate up our "Partnering on open source" series, featuring the likes of HashiCorp, Chef, Ansible and Puppet. Availability-minded developers loved our Customer Reliability Engineering (CRE) team’s missive on release canaries, and with API design: Choosing between names and identifiers in URLs, our Apigee team showed them a nifty way to have their proverbial cake and eat it too.

Google innovation


In distributed database circles, Google’s Spanner is legendary, so many of you were delighted when we announced Cloud Spanner, along with a discussion of how it defies the CAP Theorem. Having a scalable database that offers strong consistency and great performance seemed to really change your conception of what’s possible — as did Cloud IoT Core, our platform for connecting and managing “things” at scale. CREs, meanwhile, showed you the Google way to handle an incident.

2017 was also the year machine learning became accessible. For those of you with large datasets, we showed you how to use Cloud Dataprep, Dataflow, and BigQuery to clean up and organize unstructured data. It turns out you don’t need a PhD to learn to use TensorFlow, and for visual learners, we explained how to visualize a variety of neural net architectures with TensorFlow Playground. One Google Developer Advocate even taught his middle-school son TensorFlow and basic linear algebra, as applied to a game of rock-paper-scissors.

Natural language processing also became a mainstay of machine learning-based applications; here, we highlighted it with a lighthearted and relatable example. We launched the Video Intelligence API and showed how Cloud Machine Learning Engine simplifies the process of training a custom object detector. And the makers among you really went for a post that shows you how to add machine learning to your IoT projects with Google AIY Voice Kit. Talk about accessible!

Lastly, we want to thank all our customers, partners and readers for your continued loyalty and support this year, and wish you a peaceful, joyful holiday season. And be sure to rest up and visit us again next year. Because if you thought we had a lot to say in 2017, well, hold onto your hats.

One year of Cloud Performance Atlas



In March of this year, we kicked off a new content initiative called Cloud Performance Atlas, where we highlight best practices for GCP performance, and how to solve the most common performance issues that cloud developers come across.

Here are the top topics from 2017 that developers found most useful.


5. The bandwidth delay problem


Every now and again, I’ll get a question from a company that recently upgraded its connection bandwidth from its on-premises systems to Google Cloud but, for some reason, isn’t getting any better performance as a result. The issue, as we’ve seen multiple times, usually resides in an area of TCP called “the bandwidth delay problem.”

The TCP algorithm works by transferring data in packets between two endpoints. A packet is sent, and an acknowledgement packet is returned. To get maximum performance in this process, the connection between the two endpoints has to be tuned so that neither the sender nor the receiver is waiting around for acknowledgements of prior packets.

The most common way to address this problem is to adjust the TCP window size to match the bandwidth-delay product of the connection (available bandwidth multiplied by round-trip time). This allows both sides to continue sending data until an ACK arrives back from the client for an earlier packet, thereby creating no gaps and achieving maximum throughput. As such, a low window size will limit your connection throughput, regardless of the available or advertised bandwidth between instances.
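As a rough illustration (the link speed and round-trip time here are hypothetical, not a GCP benchmark), the bandwidth-delay product tells you how many bytes must be in flight to keep the pipe full:

# Bandwidth-delay product: bytes that must be "in flight" to fill the pipe.
# Example: a 10 Gbps link with a 10 ms round-trip time.
BANDWIDTH_BPS=10000000000   # 10 Gbps, in bits per second
RTT_SEC=0.010               # 10 ms round-trip time
echo "$BANDWIDTH_BPS * $RTT_SEC / 8" | bc
# => 12500000 bytes (~12.5 MB) -- far larger than a 64 KB default window

If the window is smaller than that product, the sender stalls waiting for ACKs and the link sits idle part of the time.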

Find out more by checking out the video or article!

4. Improving CDN performance with custom keys


Google Cloud boasts an extremely powerful CDN that can leverage points-of-presence around the globe to get your data to users as fast as possible.

When setting up Cloud CDN for your site, one of the most important things is to ensure that you’re using the right custom cache keys to configure which assets get cached, and which ones don’t. In most cases, this isn’t an issue, but if you’re running a large site with content reused across protocols (e.g., HTTP and HTTPS), your cache fill costs can increase more than expected.
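For instance, if the same assets are served over both HTTP and HTTPS, dropping the protocol from the cache key lets both schemes share one cache entry instead of filling the cache twice. A sketch of the relevant gcloud flags (the backend service name is hypothetical):

# Stop keying the CDN cache on protocol, so http:// and https://
# requests for the same asset hit the same cache entry.
gcloud compute backend-services update my-backend-service \
    --global \
    --no-cache-key-include-protocol \
    --cache-key-include-host \
    --cache-key-include-query-string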

You can see how we helped a sports website get their CDN keys just right in the video and article.


3. Google Cloud Storage and the sequential filename challenge


Google Cloud Storage is a one-stop shop for all your content-serving needs. However, one developer kept running into slow upload speeds when pushing their content into the cloud.

The issue was that Cloud Storage uses the file path and name of the files being uploaded to segment and shard the connection across multiple frontends (improving performance). As we found out, if those file names are sequential, then you could end up in a situation where multiple connections get squashed down to a single upload thread (thus hurting performance)!
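One common workaround is to prepend a short hash to each object name so the names are no longer sequential and uploads can spread across frontends. A minimal sketch (bucket and file names are hypothetical):

# Prefix each upload with a few characters of its MD5 hash so object
# names are non-sequential and can shard across multiple frontends.
for f in frame_0001.jpg frame_0002.jpg frame_0003.jpg; do
  prefix=$(echo -n "$f" | md5sum | cut -c1-6)
  gsutil cp "$f" "gs://my-bucket/${prefix}_${f}"
done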

As shown in the video and article, we were able to help a nursery camera company get past this issue with a few small fixes.

2. Improving Compute Engine boot time with custom images


Any cloud-based service needs to grow and shrink its resource allocations to respond to traffic load. Most of the time, this is a good thing, especially during the holiday season. ;) As traffic increases to your service/application, your backends will need to spin up more Compute Engine VMs to provide a consistent experience to your users.

However, if it takes too long for your VMs to start up, then the quality and performance for your users can be negatively impacted, especially if your VM needs to do a lot of work during its startup script, like compiling code or installing large packages.

As we showed in the video (and article), you can pre-bake a lot of that work into a custom boot-disk image. When your VMs start up, they simply boot from the custom image (with everything already installed), rather than doing all that work from scratch.
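As a sketch of the workflow (instance, image and zone names here are hypothetical): configure one VM by hand, capture its boot disk as an image, then boot new instances from that image.

# Bake a configured VM's boot disk into a reusable image...
gcloud compute images create my-baked-image \
    --source-disk=my-configured-vm \
    --source-disk-zone=us-central1-a

# ...then boot new instances from it, skipping the slow startup work.
gcloud compute instances create web-1 \
    --image=my-baked-image \
    --zone=us-central1-a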

If you’re looking to improve your GCE boot performance, custom images are worth checking out!

1. App Engine boot time


Modern managed languages (Java, Python, JavaScript, etc.) typically have a dependency-initialization step that occurs at the init phase of the program, when code is imported and instantiated.

Before execution can begin, any global data, functions or state information are also set up. Most of the time, these systems are global in scope, since they need to be used by so many subsystems (for example, a logging system).

In the case of App Engine, this global initialization work can end up delaying startup time, since it must complete before a request can be serviced. And as we showed in the video and article, as your application responds to spikes in workload, this type of global variable contention can put a hurt on your request response times.


See you soon!


For the rest of 2017, our Cloud Performance team is enjoying a few hot cups of tea, relaxing over the holidays and counting down the days until the new year. In 2018, we’ve got a lot of awesome new topics to cover, including increased networking performance, Cloud Functions and Cloud Spanner!

Until then, make sure you check out the Cloud Performance Atlas videos on YouTube or our article series on Medium.

Thanks again for a great year everyone, and remember, every millisecond counts!


5 steps to better GCP network performance



We’re admittedly a little biased, but we’re pretty proud of our networking technology. Jupiter, the Andromeda network virtualization stack and TCP BBR all underpin our datacenters around the world and the intercontinental cables that connect them.

As a Google Cloud customer, your applications already have access to this fast, global network, giving your VM-to-VM communication top-tier performance. Furthermore, because Google peers its egress traffic directly with a number of companies (including Cloudflare), you can get content to your customers faster, with lower egress costs.

That said, it’s easy to make small configuration changes, location choices or architectural decisions that inadvertently limit the networking performance of your system. Here are the top five things you can do to get the most out of Google Cloud.

1. Know your tools

Testing your networking performance is the first step to improving your environment. Here are the tools I use on a daily basis:
  • Iperf is a commonly used network testing tool that can create TCP/UDP data streams and measure the throughput of the network that carries them. 
  • Netperf is another good network testing tool, which is also used by the PerfKitBenchmark suite to test performance and benchmark the various cloud providers against one another. 
  • traceroute is a computer network diagnostic tool to measure and display packets’ routes across a network. It records the route’s history as the round-trip times of the packets received from each successive host in the route; the sum of the mean times in each hop is a measure of the total time spent to establish the connection.
These tools are battle-hardened, really well documented, and should be the cornerstone of your performance efforts.
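As a quick-start sketch (the internal IP below is a placeholder for your receiving VM), a typical VM-to-VM throughput check looks like this:

# On the receiving VM: start iperf in server mode.
iperf -s

# On the sending VM: run a 10-second TCP throughput test against it.
iperf -c 10.128.0.2 -t 10

# From either end, inspect per-hop round-trip times along the path.
traceroute 10.128.0.2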

2. Put instances in the right zones


One important thing to remember about network latency is that it’s a function of physics.

The speed of light traveling in a vacuum is 300,000 km/s, meaning that it takes about 10ms to travel a distance of ~3000km — about the distance of New York to Santa Fe. But because the internet is built on fiber-optic cable, which slows light down by a factor of ~1.52, data can only travel ~1,973km one way in that same 10ms.
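You can sanity-check that figure yourself; the one-way time falls out of a one-line calculation (assuming bc is installed):

# Minimum one-way time over fiber = distance * refractive_index / c
# For ~1,973 km, with c = 300,000 km/s slowed by a factor of 1.52:
echo "1973 * 1.52 * 1000 / 300000" | bc -l
# => ~9.997 milliseconds, i.e., right at the 10 ms budget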

So, the farther away two machines are, the higher their latency will be. Thankfully, Google has datacenter locations all around the world, making it easy to put your compute close to your users.


It’s worthwhile to take a regular look at where your instances are deployed, and see if there’s an opportunity to open up operations in a new region. Doing so will help reduce latency to the end user, and also help create a system of redundancy to help safeguard against various types of networking calamity.

3. Choose the right core-count for your networking needs


According to the Compute Engine documentation:

Outbound or egress traffic from a virtual machine is subject to maximum network egress throughput caps. These caps are dependent on the number of vCPUs that a virtual machine instance has. Each core is subject to a 2 Gbits/second (Gbps) cap for peak performance. Each additional core increases the network cap, up to a theoretical maximum of 16 Gbps for each virtual machine.

In other words, the more virtual CPUs in a guest, the more networking throughput you get. You can see this yourself by setting up a handful of instance types and logging their iperf performance:

You can clearly see that as the core count goes up, so do the average and maximum throughput. Even with our simple test, we can see the hard 16 Gbps limit on the larger machines.
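If you want to reproduce the experiment, a rough sketch looks like the following (machine types, instance names, zone and the receiver IP are all illustrative):

# Create sender VMs of increasing size in the same zone as a receiver VM.
for mt in n1-standard-2 n1-standard-8 n1-standard-16 n1-standard-32; do
  gcloud compute instances create "probe-${mt}" \
      --machine-type="${mt}" --zone=us-central1-b
done

# Then, from each probe VM, run parallel iperf streams against the
# receiver's internal IP and compare the aggregate bandwidth with the
# 2 Gbps-per-vCPU egress cap:
iperf -c 10.128.0.2 -P 8 -t 30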

As such, it’s critically important to choose the right type of instance for your networking needs. Picking something too large can cause you to over-provision (and overpay!), while too few cores places a hard limit on your maximum throughput speeds.

4. Use internal over external IPs


Any time you transfer data or communicate between VMs, you can achieve maximum performance by always using the internal IP. In many cases, the difference in speed can be drastic. Below, you can see that for an N1 machine, the bandwidth measured through iperf to the external IP was only 884 Mbits/sec:

user@instance-2:~$ iperf -c 104.155.145.79
------------------------------------------------------------
Client connecting to 104.155.145.79, TCP port 5001
TCP window size: 45.0 KByte (default)
------------------------------------------------------------
[  3] local 10.128.0.3 port 53504 connected with 104.155.145.79 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  1.03 GBytes   884 Mbits/sec

However, the internal IP between the two machines boasted 1.95 Gbits/sec:

user@instance-2:~$ iperf -c 10.128.0.2
------------------------------------------------------------
Client connecting to 10.128.0.2, TCP port 5001
TCP window size: 45.0 KByte (default)
------------------------------------------------------------
[  3] local 10.128.0.3 port 38978 connected with 10.128.0.2 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  2.27 GBytes  1.95 Gbits/sec

5. Rightsize your TCP window


If you have ever wondered why a connection transmits at a fraction of the available bandwidth — even when both the client and the server are capable of higher rates — then it might be due to a window size mismatch.

The Transmission Control Protocol (aka TCP) works by sending windows of data over the internet, relying on a straightforward system of handshakes and acknowledgements to ensure arrival and integrity of the data, and in some cases, to resend it. On the plus side, this results in a very stable internet. On the downside, it results in lots of extra traffic. And when the sender or receiver stops and waits for ACKs for previous windows/packets, the resulting gaps in the data flow limit the maximum throughput of the connection.

Imagine, for example, a saturated peer advertising a small receive window; bad network weather and high packet loss resetting the congestion window; or explicit traffic shaping limiting the throughput of your connection. To address this problem, window sizes should be just big enough that either side can continue sending data until it receives an ACK for an earlier packet. Keeping windows small limits your connection throughput, regardless of the available or advertised bandwidth between instances.

For the best performance possible in your application, you should fine-tune window sizes depending on your client connections, estimated egress and bandwidth constraints. The good news is that the TCP window sizes on standard GCP VMs are already tuned for high-performance throughput. So be sure to test the defaults before you make any changes (often, no change is needed!).
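If testing does show the defaults are your bottleneck, the knobs live in the guest kernel. A hedged example of raising the Linux autotuning limits (the 16 MB ceilings are illustrative, not a recommendation):

# Inspect the current TCP buffer autotuning limits (min, default, max bytes).
sysctl net.ipv4.tcp_rmem net.ipv4.tcp_wmem

# Illustrative: raise the socket buffer ceilings to 16 MB so autotuning
# can grow windows large enough for high bandwidth-delay paths.
sudo sysctl -w net.core.rmem_max=16777216
sudo sysctl -w net.core.wmem_max=16777216
sudo sysctl -w net.ipv4.tcp_rmem="4096 87380 16777216"
sudo sysctl -w net.ipv4.tcp_wmem="4096 65536 16777216"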


Every millisecond counts

Getting peak performance across a cloud-native architecture is rarely achieved by fixing just one problem. It’s usually a combination of issues, the “death by a thousand cuts” as it were, that chips away at your performance, piece by piece. By following these five steps, you’ll be able to isolate, identify and address some of the most common culprits of poor network performance, to help you take advantage of all the networking performance that’s available to you.

If you’d like to know more about ways to optimize your Google Cloud applications, check out the rest of the Google Cloud Performance Atlas blog posts and videos. Because, when it comes to performance, every millisecond counts.

DNSSEC now available in Cloud DNS



Today, we're excited to announce that Google is adding DNSSEC support (beta) to our fully managed Google Cloud DNS service. Now you and your users can take advantage of the protection provided by DNSSEC without having to maintain it once it's set up.

Why is DNSSEC an important add-on to DNS?

Domain Name System Security Extensions (DNSSEC) adds security to the Domain Name System (DNS) protocol by enabling DNS responses to be validated. Having a trustworthy DNS that translates a domain name like www.example.com into its associated IP address is an increasingly important building block of today’s web-based applications. Attackers can hijack this process of domain/IP lookup and redirect users to a malicious site through DNS hijacking and man-in-the-middle attacks. DNSSEC helps mitigate the risk of such attacks by cryptographically signing DNS records. As a result, it prevents attackers from issuing fake DNS responses that could misdirect browsers to nefarious websites.

Google Cloud DNS and DNSSEC

Cloud DNS is a fast, reliable and cost-effective Domain Name System that powers millions of domains on the internet. DNSSEC in Cloud DNS enables domain owners to take easy steps to protect their domains against DNS hijacking and man-in-the-middle attacks. Advanced users may choose to use different signing algorithms and denial-of-existence types. We support several sizes of RSA and ECDSA keys, as well as both NSEC and NSEC3. Enabling support for DNSSEC brings no additional charges or changes to the terms of service. 
To start using DNSSEC, simply turn the feature "on" for your DNS zone; signing is then enabled automatically for that zone.
To learn more about getting started with DNSSEC for Cloud DNS, please refer to the documentation page.
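If you prefer the command line to the console toggle, gcloud's beta DNS surface exposes the same switch (the zone name below is hypothetical):

# Enable DNSSEC signing on an existing Cloud DNS managed zone.
gcloud beta dns managed-zones update my-zone --dnssec-state on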

Andromeda 2.1 reduces GCP’s intra-zone latency by 40%



Google Cloud customers now enjoy significantly improved intra-zone network latency with the release of Andromeda 2.1, a software-defined network (SDN) stack that underpins all of Google Cloud Platform (GCP). The latest version of Andromeda reduces network latency between Compute Engine VMs by 40% over Andromeda 2.0 and by nearly a factor of 8 since we first launched Andromeda in 2014.

This kind of network performance is especially important as more applications move into the cloud and are accessed via web browsers. While the headline metric is often bandwidth, network latency is frequently the more important determiner of application performance. For example, low latency is essential for financial transactions, ad-tech, video, gaming and retail, as well as workloads such as HPC applications, memcache and in-memory databases. Likewise, HTTP-based microservices will see significant improvement in responsiveness with reduced latency.

Andromeda 2.1 latency improvements come from a form of hypervisor bypass that builds on virtio, the Linux paravirtualization standard for device drivers. Andromeda 2.1 enhancements enable the Compute Engine guest VM and the Andromeda software switch to communicate directly via shared memory network queues, bypassing the hypervisor completely for performance-sensitive per-packet operations.

In our previous approach, the hypervisor thread served as a bridge between the guest VM and the Andromeda software switch. Packets flowed from the VM to a hypervisor thread, to the local host’s Andromeda software switch, then over the physical network to another Andromeda software switch, and back up through the hypervisor to the VM. Further, any time the thread wasn’t bridging packets, it was descheduled, increasing tail latency for new packet processing. In many cases, a single network round-trip required four costly hypervisor thread wakeups!

Andromeda 2.1's optimized datapath using hypervisor bypass.


Andromeda 2.1 performance in action


The new Andromeda 2.1 stack delivers noteworthy reductions in VM-to-VM network latency. The figure below shows the factor by which latency has improved over time, relative to the median round-trip time of the original stack.
Factor by which latency has improved over time

This reduction in network round-trip times translates into real-world performance boosts for latency sensitive applications. Take Aerospike, a high-performance in-memory NoSQL database. The new Andromeda stack delivers both a reduction in request latency and improved request throughput for Aerospike, as shown below.



Considering Andromeda SDN is a foundational building block of Google Cloud, you should see similar improvements in intra-zone latency, regardless of what applications you're running.

Andromeda SDN delivers flexibility and reliability 


Andromeda SDN enables more flexibility than other hardware-based stacks. With SDN, we can quickly develop and overhaul our entire virtual network infrastructure. We can roll out new cloud network services and features, apply security patches and gain significant performance improvements. Better yet, we can confidently deploy to Google Cloud with no downtime, reboots or even VM migrations, because the flexibility of SDN allows us to thoroughly test our code. Watch this space to learn about the new features and enhanced network performance made possible by our Andromeda SDN foundation.

Google Cloud Dedicated Interconnect gets global routing, more locations, and is GA



We have major updates to Dedicated Interconnect, which enables fast, private connections to Google Cloud Platform (GCP) from numerous facilities across the globe, so you can extend your on-premises network to your GCP Virtual Private Cloud (VPC) network. With the fast, private connections offered by Dedicated Interconnect, you can build applications that span on-premises infrastructure and GCP without compromising privacy or performance.

Dedicated Interconnect is now GA and ready for production-grade workloads, and covered by a service level agreement. Dedicated Interconnect can be configured to offer a 99.9% or a 99.99% uptime SLA. Please see the Dedicated Interconnect documentation for details on how to achieve these SLAs.

Going global with the help of Cloud Router


Dedicated Interconnect now supports global routing for Cloud Router, a new feature that makes subnets in GCP accessible from any on-premises network through the Google network. This feature adds a new flag in Cloud Router that allows the network to advertise all the subnets in a project. For example, a connection from your on-premises data center in Chicago to GCP’s Dedicated Interconnect location in Chicago now gives you access to all subnets running in all GCP regions around the globe, including those in the Americas, Asia and Europe. We believe this functionality is unique among leading cloud providers. This feature is generally available, and you can learn more about it in the Cloud Router documentation.
Using Cloud Router Global Routing to connect on-premises workloads via "Customer Peering Router" with GCP workloads in regions anywhere in the world.
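In current gcloud releases, this behavior is exposed as the VPC network's dynamic (BGP) routing mode; a sketch with a hypothetical network name:

# Switch the VPC from regional routing (Cloud Routers advertise only
# their own region's subnets) to global routing (all subnets).
gcloud compute networks update my-network --bgp-routing-mode=global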

Dedicated Interconnect is your new neighbor


Dedicated Interconnect is also available from four new locations: Mumbai, Munich, Montreal and Atlanta. This means you can connect to Google’s network from almost anywhere in the world. For a full list of locations, visit the Dedicated Interconnect locations page. Please note, in the graphic below, many locations (blue dots) offer service from more than one facility.
In addition to those four new Google locations, we’re also working with Equinix to offer Dedicated Interconnect access in multiple markets across the globe, ensuring that no matter where you are, there's a Dedicated Interconnect connection close to you.
"By providing direct access to Google Cloud Dedicated Interconnect, we are helping enterprises leverage Google’s network  the largest in the world and accelerate their hybrid cloud strategies globally. Dedicated Interconnect offered in collaboration with Equinix enables customers to easily build the cloud of their choice with dedicated, low-latency connections and SLAs that enterprise customers have come to expect from hybrid cloud architectures." 
Ryan Mallory, Vice President, Global Solutions Enablement, Equinix

Here at Google Cloud, we’re really excited about Dedicated Interconnect, including the 99.99% uptime SLA, four new locations, and Cloud Router Global Routing. Dedicated Interconnect will make it easier for more businesses to connect to Google Cloud, and we can’t wait to see the next generation of enterprise workloads that Dedicated Interconnect makes possible.

If you’d like to learn which connection option is right for you, more about pricing and a whole lot more, please take a look at the Interconnect product page.

GCP adds support for multiple network interfaces



By default, VM instances in a Virtual Private Cloud (VPC) have a single network interface. Sometimes you need more than that, say, to enforce networking or security functions in the instance, or across isolated VPCs. That’s why today, we’re excited to announce that multiple network interface support is generally available, allowing you to provision up to eight network interfaces on a single VM instance.

With multiple network interfaces available to an instance, you can:
  • Connect virtual network and security appliances 
  • Isolate public-facing services from an internal network and its services 
  • Separate management, control, storage and data plane networks 
  • Create an inexpensive fault-tolerant solution 
With multiple network interfaces, you can host virtualized networking or security functions that apply to communication across separate VPC networks, for example, from public to VPC network domains and vice versa. Examples of these VPC network and security functions include load balancers, Intrusion Detection and Prevention Systems (IDS/IPS), Web Application Firewalls (WAF) and WAN optimization. Having multiple network interfaces is also useful when applications running in an instance need to separate traffic, for example data plane traffic from management plane traffic.

Here’s an example of creating a VM instance with multiple network interfaces, in this case, an inside network and an outside network.
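A sketch of what that looks like with gcloud (the network and subnet names are hypothetical, and each network must already exist):

# Appliance VM with one NIC on an "outside" VPC and a second NIC on an
# "inside" VPC; the inside NIC gets no external IP.
gcloud compute instances create appliance-vm \
    --zone=us-central1-a \
    --network-interface network=outside-net,subnet=outside-subnet \
    --network-interface network=inside-net,subnet=inside-subnet,no-address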
Below is a sample architectural diagram of a security appliance with four network interfaces. As you can see, you can create North-South networks (e.g., the outbound network on the left) or East-West (e.g., the inbound networks on the bottom). [Editor’s note: If you’d like to build your own architectural diagrams such as this, check out these sample diagrams and our icon library.]
Support for multiple network interfaces makes it possible for enterprises to migrate sensitive applications to Google Cloud, and our partners are weaving this functionality into their products.
"We have been working closely with Google Cloud on design and use cases for this capability. The multiple network interface VM will enable Palo Alto Networks to provide the same enterprise-grade security that customers are used to in their private data centers. Customers will be able to inspect not just the traffic coming into GCP, but also the East-West traffic between their GCP projects and across VPCs." 
Adam Geller, VP, Product Management for Virtualization and Cloud at Palo Alto Networks
"We are delighted to have worked with Google to demonstrate how NETSCOUT’s packet-based application assurance can be extended to multiple interface GCP compute instances. This will allow GCP customers to leverage the benefits of multiple network interfaces, while minimizing the disruption of cloud migration and hybrid cloud deployments through the proactive identification of issues impacting user experience, operational efficiency and productivity." 
Paul Barrett, CTO for Enterprise Business Operations, NETSCOUT

To learn more about configuring and using multiple NICs, visit the documentation. To participate as a GCP partner, join the partner community. Then get ready to build cloud applications that deliver the flexibility, security features and agility that enterprises have come to expect from cloud networks.

Announcing IPv6 global load balancing GA



Google Cloud users deploy Cloud Load Balancing to instantiate applications across the globe, architect for the highest levels of availability, and deliver applications with low latency. Today, we’re excited to announce that IPv6 global load balancing is now generally available (GA).

Until today, global load balancing was available only for IPv4 clients. With this launch, your IPv6 clients can connect to an IPv6 load balancing VIP (Virtual IP) and get load balanced to IPv4 application instances using HTTP(S) Load Balancing, SSL proxy, and TCP proxy. You now get the same management simplicity of using a single anycast IPv6 VIP for application instances in multiple regions.
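In practice, serving IPv6 clients comes down to reserving a global IPv6 VIP and pointing a forwarding rule at your existing proxy. A sketch with hypothetical resource names (the target HTTPS proxy is assumed to exist):

# Reserve a global IPv6 address for the load balancer.
gcloud compute addresses create lb-vip-v6 --ip-version=IPV6 --global

# Attach it to an existing HTTPS proxy as an additional forwarding rule.
gcloud compute forwarding-rules create https-v6-rule \
    --global --address=lb-vip-v6 \
    --target-https-proxy=my-https-proxy --ports=443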

Home Depot serves 75% of homedepot.com out of Google Cloud Platform (GCP) and uses global load balancing to achieve a global footprint and resiliency for its service with low management overhead.
"On the front-end, we use the Layer 7 load balancer with a single global IP that intelligently routes customer requests to the closest location. Global load balancing will allow us to easily add another region in the future without any DNS record changes, or for that matter, doing anything besides adding VMs in the right location."  
Ravi Yeddula, Senior Director Platform Architecture and Application Development, The Home Depot

IPv6 support unlocks new capabilities 


With IPv6 global load balancing, you can build more scalable and resilient applications on GCP, with the following benefits:
  • Single Anycast IPv6 VIP for multi-region deployment: Now, you only need one Load Balancer IPv6 VIP for application instances running across multiple regions. This means that your DNS server has a single AAAA record and that you don’t need to load-balance among multiple IPv6 VIPs. Caching of AAAA records by clients is not an issue since there's only one IPv6 VIP to cache. User requests to IPv6 VIP are automatically load balanced to the closest healthy instance with available capacity.
  • Support for a variety of traffic types: You can load balance HTTP, HTTPS, HTTP/2, TCP and TLS (non-HTTP) IPv6 client traffic. 
  • Cross-region overflow with a single IPv6 Load Balancer VIP: If instances in one region are out of resources, the IPv6 global load balancer automatically directs requests from users closest to this region to another region with available resources. Once the closest region has available resources, global load balancing reverts back to serving user requests via instances in this region. 
  • Cross-region failover with single IPv6 Load Balancer VIP: If the region with instances closest to the user experiences a failure, IPv6 global load balancing automatically directs traffic to another region with healthy instances. 
  • Dual-stack applications: To serve both IPv6 and IPv4 clients, create two load balancer IPs, one with an IPv6 VIP and the other with an IPv4 VIP, and associate both VIPs with the same IPv4 application instances. IPv4 clients connect to the IPv4 Load Balancer VIP, while IPv6 clients connect to the IPv6 Load Balancer VIP. These clients are then automatically load balanced to the closest healthy instance with available capacity. We provide IPv6 VIPs (forwarding rules) without charge, so you pay for only the IPv4 ones.

A global, scalable, resilient foundation 


Global load balancing for both IPv6 and IPv4 clients is built on a scalable, software-defined architecture that reduces latency for end users and ensures a great user experience.
  • Software-defined, globally distributed load balancing: Global load balancing is delivered via software-defined, globally distributed systems. This means that you won’t hit performance bottlenecks with the load balancer and it can handle 1,000,000+ queries per second seamlessly. 
  • Reduced latency through edge-based architecture: Global load balancing is delivered at the edge of Google's global network from 80+ points of presence (POPs) across the globe. User connections terminate at the POP closest to them and travel over Google's global network to the load-balanced instance in Google Cloud. 
  • Seamless autoscaling: Global load balancing scales application instances up or down automatically based on traffic, with no pre-warming of instances required. 

Take IPv6 global load balancing for a spin 


Earlier this year, we gave a sneak preview of IPv6 global load balancing at Google Cloud Next ‘17. You can test drive this feature using the same setup.

In this setup:
  • v6.gcpnetworking.com is served by IPv4 application instances in multiple Google Cloud regions across the globe. 
  • A single anycast IPv6 Load Balancer IP, 2600:1901:0:ab8::, fronts the IPv4 application instances across regions. 
  • When you connect using an IPv6 address to this website, IPv6 global load balancing directs you to a healthy Google Cloud instance that's closest to you and has available capacity. 
  • The website is programmed to display your IPv6 address, the Load Balancer IPv6 VIP and information about the instance serving your request. 
  • v6.gcpnetworking.com will only work with IPv6 clients. You can test drive gcpnetworking.com instead if you want to test with both IPv4 and IPv6 clients.
For example, when I connect to v6.gcpnetworking.com from California, my request connects to an IPv6 global load balancer with IP address 2600:1901:0:ab8:: and is served out of an instance in us-west1-c, the closest region to California in the set-up.

Give it a try, and you'll observe that while your request connects to the same IPv6 VIP address 2600:1901:0:ab8::, it's served by an instance closest to you that has available capacity.

You can learn more by reading about IPv6 global load balancing, and taking it for a spin. We look forward to your feedback!

Announcing Dedicated Interconnect: your fast, private on-ramp to Google Cloud



Easy-to-manage, high-bandwidth, private network connectivity is essential for large enterprises. That’s why today we’re announcing Dedicated Interconnect, a new way to connect to Google Cloud and access the world’s largest cloud network.

Dedicated Interconnect lets you establish a private network connection directly to Google Cloud Platform (GCP) through one of our Dedicated Interconnect locations. Dedicated Interconnect also offers increased throughput and even a potential reduction in network costs.

Companies with data- and latency-sensitive services, such as Metamarkets, a real-time analytics firm, benefit from Dedicated Interconnect.

"Accessing GCP with high bandwidth, low latency, and consistent network connectivity is critical for our business objectives. Google's Dedicated Interconnect allows us to successfully achieve higher reliability, higher throughput and lower latency while reducing the total cost of ownership by more than 60%, compared to solutions over the public internet.” 
– Nhan Phan, VP of Engineering at Metamarkets 

Dedicated Interconnect enables you to extend the corporate datacenter network and RFC 1918 IP space into Google Cloud as part of a hybrid cloud deployment. If you work with large or real-time data sets, Dedicated Interconnect can also help you control how that data is routed.

Dedicated Interconnect features 

With Dedicated Interconnect you get a direct connection to GCP VPC networks with connectivity to internal IP addresses in RFC 1918 address space. It’s available in 10 gigabits per second (Gb/s) increments, and you can select from 1 to 8 circuits from the Cloud Console.
Dedicated Interconnect can be configured to offer a 99.9% or a 99.99% uptime SLA. Please see the Dedicated Interconnect documentation for details on how to achieve these SLAs.
Because it combines point-and-click deployment with ongoing monitoring, Dedicated Interconnect is easy to provision and to manage. Once you have it up and running, you can add an additional VLAN with a point-and-click configuration — no physical plumbing necessary.
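For readers who prefer scripting, recent gcloud releases expose the same workflow from the command line; the sketch below uses hypothetical names, a placeholder location and region, and assumes a Cloud Router already exists:

# Order a 10 Gb/s dedicated circuit at an Interconnect location...
gcloud compute interconnects create my-interconnect \
    --customer-name="Example Corp" \
    --interconnect-type=DEDICATED \
    --link-type=LINK_TYPE_ETHERNET_10G_LR \
    --location=iad-zone1-1 \
    --requested-link-count=1

# ...then attach a VLAN to an existing Cloud Router in the target region.
gcloud compute interconnects attachments dedicated create my-vlan \
    --interconnect=my-interconnect \
    --router=my-router \
    --region=us-east4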

Locations 


Dedicated Interconnect is available today in many locations — with more coming soon. This means you can connect to Google’s network from almost anywhere in the world. For a full list of locations, visit the Dedicated Interconnect locations page. Note that many locations offer service from more than one facility.
Once connected, the Google network provides access to all GCP regions using a private fiber network that connects more than 100 points of presence around the globe. The Google network is the largest cloud network in the world by several measures, including the number of points of presence.


Is Dedicated Interconnect right for you? 


Here’s a simple decision tree that can help you determine whether Dedicated Interconnect is right for your organization.

Get started with Dedicated Interconnect 

Use Cloud Console to place an order for Dedicated Interconnect.
Dedicated Interconnect will make it easier for more businesses to connect to Google Cloud. We can’t wait to see the next generation of enterprise workloads that Dedicated Interconnect makes possible.