Upload massive lists of products to Merchant Center using Centimani

Posted by Hector Parra, Jaime Martínez, Miguel Fernandes, Julia Hernández

Merchant Center lets merchants manage how their in-store and online product inventory appears on Google. It allows them to reach the hundreds of millions of people looking to buy products like theirs each day.

To upload their products, merchants can use feeds: files listing their products in a specific format. These can be shared with Merchant Center in several ways: via Google Sheets, SFTP or FTP shares, Google Cloud Storage, or manually through the user interface. These methods work well for the majority of cases, but if a merchant's product list grows over time, it may hit the usage limits of feeds. Depending on the case, quota extensions can be granted, but if the list keeps growing, it can reach a point where feeds no longer support that scale, and the Content API for Shopping becomes the recommended path forward.

The main issue is that, by the time a merchant is advised to move from feeds to the Content API because of scale problems, the number of products is already massive, and calling the Content API directly will produce usage and quota errors as the QPS and products-per-call limits are exceeded.

For this specific use case, Centimani becomes critical in helping merchants handle the upload process through the Content API in a controlled manner, avoiding any overload of the API.

Centimani is a configurable processor for massive files: it splits text files into chunks, processes them following a strategy pattern, and stores the results in BigQuery for reporting. It provides configurable options for chunk size and number of retries, and applies exponential backoff so that every request has enough retries to overcome temporary issues or errors. Centimani ships with two operators, the Google Ads Offline Conversions Uploader and the Merchant Center Products Uploader, but it can easily be extended to other use cases.
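The chunk-and-retry pattern described above can be sketched in a few lines of Python. This is a minimal illustration, not Centimani's actual API; the function names and defaults are hypothetical:

```python
import random
import time

def split_into_chunks(lines, chunk_size):
    """Slice a list of feed lines into fixed-size chunks (the slicing step)."""
    return [lines[i:i + chunk_size] for i in range(0, len(lines), chunk_size)]

def call_with_backoff(request_fn, max_retries=5, base_delay=1.0):
    """Run an API call, retrying with exponential backoff plus jitter
    so temporary quota or availability errors can be overcome."""
    for attempt in range(max_retries):
        try:
            return request_fn()
        except Exception:
            if attempt == max_retries - 1:
                raise  # out of retries: surface the error
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, base_delay))
```

Each chunk would then be submitted through `call_with_backoff`, keeping every individual request within the API's per-call limits.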

Centimani uses Google Cloud as its platform, and makes use of Cloud Storage for storing the data, Cloud Functions to do the data processing and the API calls, Cloud Tasks to coordinate the execution of each call, and BigQuery to store the audit information for reporting.

Centimani Architecture

To start using Centimani, a couple of configuration files need to be prepared with information about the Google Cloud Project to be used (including the element names), the credentials to access the Merchant Center accounts and how the load will be distributed (e.g., parallel executions, number of products per call). Then, the deployment is done automatically using a deployment script provided by the tool.

After the tool is deployed, a Cloud Function monitors the input bucket in Cloud Storage and processes every file uploaded there. The tool uses the name of the file to select the operator to run (“MC” indicates the Merchant Center Products Uploader) and the particular configuration to use (multiple configurations allow connecting to Merchant Center accounts with different access credentials).
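The file-name routing can be sketched as follows. The prefix table and naming convention here are illustrative only; the real convention is defined in the Centimani repository:

```python
# Hypothetical mapping from file-name prefix to operator.
OPERATORS = {
    "MC": "merchant_center_products_uploader",
    "GADS": "google_ads_offline_conversions_uploader",
}

def select_operator(filename):
    """Pick the operator from the file-name prefix, as the
    input-bucket trigger does when a new file arrives."""
    prefix = filename.split("_", 1)[0]
    if prefix not in OPERATORS:
        raise ValueError(f"No operator registered for prefix {prefix!r}")
    return OPERATORS[prefix]
```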

Whenever a file is uploaded, it is sliced into parts if it contains more products than are allowed per call; the slices are stored in the output bucket in Cloud Storage, and Cloud Tasks launches the API calls until all slices are processed. Any slice with errors is stored in a folder called “slices_failed” to help troubleshoot issues found in the process. All information about the executions is stored temporarily in Datastore and then moved to BigQuery, where the whole process can be monitored from a centralized place.

Centimani Status Dashboard Architecture

Centimani provides an easy way for merchants to start using the Content API for Shopping to manage their products, without having to deal with the complexity of keeping the system under the limits.

For more information, visit the Centimani repository on GitHub.

The best hardware, software and AI—together

Today, we introduced our second generation family of consumer hardware products that are coming to Canada, all made by Google: new Pixel phones, Google Home Mini and Max, an all new Pixelbook, Google Pixel Buds, and an updated Daydream View headset. We see tremendous potential for devices to be helpful, make your life easier, and even get better over time when they’re created at the intersection of hardware, software and advanced artificial intelligence (AI). 

Why Google? 
These days many devices—especially smartphones—look and act the same. That means in order to create a meaningful experience for users, we need a different approach. A year ago, Sundar outlined his vision of how AI would change how people would use computers. And in fact, AI is already transforming what Google’s products can do in the real world. For example, swipe typing has been around for a while, but AI lets people use Gboard to swipe-type in two languages at once. Google Maps uses AI to figure out what the parking is like at your destination and suggest alternative spots before you’ve even put your foot on the gas. But, for this wave of computing to reach new breakthroughs, we have to build software and hardware that can bring more of the potential of AI into reality—which is what we’ve set out to do with this year’s new family of products.

Hardware, built from the inside out 
We’ve designed and built our latest hardware products around a few core tenets. First and foremost, we want them to be radically helpful. They’re fast, they’re there when you need them, and they’re simple to use. Second, everything is designed for you, so that the technology doesn’t get in the way and instead blends into your lifestyle. Lastly, by creating hardware with AI at the core, our products can improve over time. They’re constantly getting better and faster through automatic software updates. And they’re designed to learn from you, so you’ll notice features—like the Google Assistant—get smarter and more assistive the more you interact with them.

You’ll see this reflected in our 2017 lineup of new Made by Google products:

  • The Pixel 2 has the best camera of any smartphone, again, along with a gorgeous display and augmented reality capabilities. Pixel owners get unlimited storage for their photos and videos, and an exclusive preview of Google Lens, which uses AI to give you helpful information about the things around you. 
  • Google Home Mini brings the Assistant to more places throughout your home, with a beautiful design that fits anywhere. And Max, which is coming later to Canada, is our biggest and best-sounding Google Home device, powered by the Assistant. And with AI-based Smart Sound, Max has the ability to adapt your audio experience to you—your environment, context, and preferences. 
  • With Pixelbook, we’ve reimagined the laptop as a high-performance Chromebook, with a versatile form factor that works the way you do. It’s the first laptop with the Assistant built in, and the Pixelbook Pen makes the whole experience even smarter. 
  • Our new Pixel Buds combine Google smarts and the best digital sound. You’ll get elegant touch controls that put the Assistant just a tap away, and they’ll even help you communicate in a different language. 
  • The updated Daydream View is the best mobile virtual reality (VR) headset on the market, and the simplest, most comfortable VR experience. 

Assistant, everywhere 
Across all these devices, you can interact with the Google Assistant any way you want—talk to it with your Google Home or your Pixel Buds, squeeze your Pixel 2, or use your Pixelbook’s Assistant key or circle things on your screen with the Pixelbook Pen. Wherever you are, and on any device with the Assistant, you can connect to the information you need and get help with the tasks to get you through your day. No other assistive technology comes close, and it continues to get better every day.

Google’s hardware business is just getting started, and we’re committed to building and investing for the long run. We couldn’t be more excited to introduce you to our second-generation family of products that truly brings together the best of Google software, thoughtfully designed hardware with cutting-edge AI. We hope you enjoy using them as much as we do.

Here’s where and when you can get our new hardware in Canada; visit The Google Store for more details.

  • Pixel 2 and Pixel 2 XL are available for pre-order today, starting at $899, on The Google Store, Bell, Best Buy Canada, Fido, Freedom Mobile, Koodo, Rogers, The Source, TELUS, Tbooth wireless, Walmart, WIRELESSWAVE, Videotron, and Virgin. 
  • Pixel Buds will be available later this year for $219 on The Google Store and Best Buy Canada. 
  • Pixelbook is available in three configurations starting at $1299, so you can choose the processing power, memory and storage you want. The Pixelbook Pen is $129. Both will be available for pre-order today in Canada, with the exception of Quebec, and on sale at The Google Store and select retailers, including Best Buy Canada. We’re working to bring Pixelbook to Quebec in the future. 
  • Google Home Mini is available for pre-order today for $79 on The Google Store, Best Buy Canada and select retailers. 
  • The new Google Daydream View is available for pre-order today for $139 on The Google Store and select retailers. 

Posted by Rick Osterloh, SVP, Hardware

Finding the perfect hotel and flight for your trip

With summer in full swing, and the holiday travel season right around the corner, it’s time to plan those last-minute vacations and start thinking about where you’ll be spending your holidays this year. Today we’re announcing several updates to Google Flights and hotel search on Google to make it easier to find the right flight or the right hotel, at the right price, for your next trip.

Track airfare changes with Google Flights

Want to stay up to date on changing flight fares without having to continually check prices? You can now track fare changes for a date and route combination, or track specific flights in Google Flights, by simply selecting to track prices after you’ve searched for a flight. When prices either increase or decrease significantly, you’ll be notified by email and Google Now cards. This is rolling out over the course of the next few weeks in all 26 countries where Google Flights is available.

Google Flights: tracked flights on mobile and email notification of price change

Find the right hotel at the right price with Google

When it comes to booking a hotel, making the best decision can be stressful. We’re now making it easier for you to find the right hotel at the right price.

Everyone loves a good deal. When searching on Google for “hotels in [location]”, you may now see a "Deal" label indicating that a hotel’s price is lower than usual compared to historical pricing, or if there are discounts to the normal rate for those dates. These deals are automatically identified by our algorithms when we see a significant reduction in price. This will be rolling out globally over the next few months.

You can now filter results according to your preferences, like “Price” or “Top rated”, with one tap on your phone. You can even combine multiple criteria like “Top-rated pet-friendly hotels in San Francisco under $200” when searching for the perfect hotel. This is now available in the US and will roll out globally later this year.

Smart filters and deals in hotel search on Google

If there is an opportunity to save money or find better availability by moving your dates slightly, we may show you Tips. For example, you may see a Tip like, “Save $105 if you stay Wed, Jul 13 - Fri, Jul 15”. If the new dates work for you, simply tap to update your search and take advantage of the savings. We’ll be rolling Tips out globally over the coming months.

We hope these updates make planning your next trip easier and stress-free!

Posted by Richard Holden, VP of Product Management, Travel

Source: Google Travel

Introducing the new Google Cloud Load Balancing UI

Our user interface (UI) is everything that you see and interact with. While the technologies that power Google Cloud Platform are complex, the UI for using the resources on GCP should be simple and intuitive. We’re paying close attention to this in our Cloud Load Balancing service and are excited to introduce a new UI that aims to simplify Cloud Load Balancing configuration.

You’ll now be able to configure all Cloud Load Balancing flavors through a unified interface. This UI is also designed to seamlessly accommodate new load balancing features that are expected to land in 2016 and beyond, and deliver a simpler, more intuitive user experience.

Here’s an overview of the new UI, starting with a recap of Cloud Load Balancing config basics. Cloud Load Balancing comes in multiple flavors — HTTP(S), TCP, SSL(TLS) and UDP — and distributes traffic among your configured backends. Your backend configuration consists of the instance groups and instances that will service your user traffic. Your frontend configuration comprises the anycast IP address your users connect to, along with the port, protocol and other related information.

Of course, HTTP(S), TCP and UDP load balancers have flavor-specific configuration nuances, but we maintain a similar overall flow for configuring all of these flavors in the new UI.

You’ll begin your configuration by selecting the flavor of traffic you want to load balance: HTTP(S), TCP or UDP. Note that we’ll add support for SSL(TLS) in the new UI once this feature enters beta. To make your selection easier, we present you with a picker page as shown below:

Let’s say you want to configure an HTTP load balancer. Start by clicking the configure button below HTTP(S) Load Balancing. You’ll be presented with a page where you set the load balancer name, the backend configuration, host/path rules (relevant if you want to route requests based on the client request URL), and finally the frontend configuration.

Once you’re done with the above steps, you can review and finalize your configuration. You can view all your configured load balancers as shown below:

If you’d like additional information on any of your configured load balancers, you can simply use the drop-down card as shown below to view these details, including configuration as well as monitoring information.

You can edit your configured load balancers any time by clicking the edit button shown above.

We’ve created this UI quickstart video to help you get started. After watching this video, we recommend that you play with the new UI and configure and edit HTTP(S), TCP and UDP load balancers to familiarize yourself with the UI flow and configuration options. You can also send in your feedback using the “Send feedback” button as shown below.

This is the first release of the new Google Cloud Load Balancing UI. We’ll continue to iterate, make improvements and most importantly incorporate your feedback into future UI releases. So take the new UI for a spin and let us know what works well and what you’d love to see next.

For those of you who attended GCP NEXT, we hope you enjoyed the opportunity to learn about Google’s global network and the software-defined and distributed systems technologies that power Google Cloud Load Balancing. If you missed it, here’s the Global Load Balancing talk and our TCP/UDP network load balancing talk at NSDI last month.

Happy load balancing and scale on!

Google Cloud Datastore gets faster cross-platform API

Today we’re announcing another important update to our NoSQL database, Google Cloud Datastore. We’ve redesigned the underlying architecture that supports the cross-platform API for accessing Datastore outside of Google App Engine, such as from Google Container Engine and Google Compute Engine, dramatically improving performance and reliability of the database. This follows new and simpler pricing for Cloud Datastore, announced last week.

The new Cloud Datastore API version (v1beta3) is available now. You need to enable this API before you can use it, even if you previously enabled an earlier version of the API.

Enable Cloud Datastore API

We’re also publishing a Service Level Agreement (SLA) for the API, which will take effect upon its General Availability release.

Now that v1beta3 is available, we’re deprecating the old v1beta2 API with a six-month grace period before decommissioning it on September 30th, 2016.

New Beta API revision

In the new release, we re-architected the entire serving path with an eye on performance and reliability. Cloud Datastore API revision v1beta3 has lower latency in both the average and long-tail cases. Whether it’s magical items transferring to a player’s inventory faster or financial reports loading on a snappy website, everyone loves fast.

In addition to these significant performance improvements, the v1beta3 API gives us a new platform upon which we can continue to improve performance and functionality.

You can use v1beta3 using the idiomatic Google Cloud Client Libraries (in Node.js, Python, Java, Go, and Ruby), or alternatively via the low-level native client libraries for JSON and Protocol Buffers over gRPC. You can learn more about the various client libraries in our documentation.
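For illustration, a v1beta3 `commit` request that upserts a single entity can be assembled as plain JSON. The helper below is a sketch: the field names follow the public REST shape of the Datastore API, but you should check the API reference before relying on them, and the client libraries above handle all of this for you:

```python
def build_commit_body(kind, name, properties):
    """Build a request body for the Datastore REST commit method that
    upserts one entity with string-valued properties.
    (The project ID goes in the request URL, not the body.)"""
    entity = {
        "key": {"path": [{"kind": kind, "name": name}]},
        "properties": {k: {"stringValue": v} for k, v in properties.items()},
    }
    return {"mode": "NON_TRANSACTIONAL", "mutations": [{"upsert": entity}]}
```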

Cloud Datastore Service Level Agreement

Today we’re publishing an SLA for the General Availability release. Accessing Google Cloud Datastore via the Beta API is not covered by an SLA, although the SLA we’re publishing can help you estimate the expected performance of the Beta API. The SLA will only take effect when we reach General Availability.

App Engine Client libraries for Cloud Datastore are still covered as part of the App Engine SLA.

If you're using the Google Cloud Client Libraries, upgrading is as simple as updating the client libraries from GitHub. We look forward to what you build next with our faster cross-platform API for Cloud Datastore.

To learn more about Cloud Datastore, check out our getting started guide.

- Posted by Dan McGrath, Product Manager, Google Cloud Platform

Introducing Style Detection for Google Cloud Vision API

At Google Cloud Platform, we’re thrilled by the developer community’s enthusiastic response to the beta release of Cloud Vision API and our broader Cloud Machine Learning product family unveiled last week at GCP NEXT.

Cloud Vision API is a tool that enables developers to understand the contents of an image, from identifying prominent natural or man-made landmarks to detecting faces and emotion. Right now, Vision API can even recognize clothing in an image and label dominant colors, patterns and garment types.

Today, we’re taking another step forward. Why only evaluate individual components of an outfit when we could evaluate the full synthesis — the real impact of what you wear in today’s culture?

We’re proud to announce Style Detection, the newest Cloud Vision API feature. Using millions of hours of deep learning, convolutional neural networks and petabytes of source data, Vision API can now not just identify clothing, but evaluate the nuances of style to a relative degree of uncertainty.

Style Detection aims to help people improve their style — and lives — by navigating the complex and fickle landscape of fashion. Does a brown belt go with black shoes? Pleats or no pleats? “To tuck or not to tuck?” is now no longer a question. With Style Detection, we’re able to mine our nearly bottomless combined data sets of selfies, fashion periodicals and the unstructured ramblings of design bloggers into a coherent and actionable tool for picking tomorrow’s trousers.

We’re already seeing incredible results. Across our training corpus, we were able to detect the majority of personal style choices and glean with 52-97% accuracy not just what people were wearing, but what those clothes might say about them. The possibilities are endless — and it could mean the end of spandex forever!

Learn more about Style Detection and the Cloud Vision API here. We’re offering it to a small group of developers in alpha today (obviously, there are still details to iron out).

- Posted by Miles Ward, Global Head of Solutions

Google Cloud Datastore simplifies pricing, cuts cost dramatically for most use-cases

Google Cloud Datastore is a highly-scalable NoSQL database for web and mobile applications. Today we’re announcing much simpler pricing, and as a result, many users will see significant cost-savings for this database service.

Along with the simpler pricing model, there’ll be a more transparent method of calculating stored data in Cloud Datastore. The new pricing and storage calculations will go into effect on July 1st, 2016. For the majority of our customers, this will effectively result in a price reduction.

New pricing structure

We’ve listened to your feedback and will be simplifying our pricing. The new pricing will go into effect on July 1st, 2016, regardless of how you access Datastore. Not only is it simpler, but also the majority of our customers will see significant cost savings. This change removes the disincentive our current pricing imposes on using the powerful indexing features, freeing developers from over-optimizing index usage.

We’re simplifying pricing for entity writes, reads and deletes by moving from internal operation counting to a more direct entity counting model as follows:

Writes: In the current pricing, writing a single entity translated into one or more write operations depending on the number and type of indexes. In the new pricing, writing a single entity only costs 1 write regardless of indexes and will now cost $0.18 per 100,000. This means writes are more affordable for people using multiple indexes: you can use as many indexes as your application needs without increases in write costs. Since, on average, the vast majority of entity writes previously translated to more than 4 write operations per entity, this represents significant cost savings for developers.

Reads: In the current pricing, some queries would charge a read operation per entity retrieved plus an extra read operation for the query. In the new pricing, you'll only be charged per entity retrieved. Small ops (projections and keys-only queries) will stay the same in only charging a single read for the entire query. The cost per Entity read stays the same as the old per operation cost of $0.06 per 100,000. This means that most developers will see reduced costs in reading entities.

Deletes: In the current pricing model, deletes translated into 2 or more writes depending on the number and type of indexes. In the new pricing, you'll only be charged a delete operation per entity deleted. Deletes are charged at the rate of $0.02 per 100,000. This means deletes are now discounted by at least 66% and often by more.

Free Quota: The free quota limit for writes is now 20,000 requests per day, since we no longer charge multiple write operations per entity written. Deletes now fall under their own free tier of 20,000 requests per day. Overall, this means more free requests per day for the majority of applications.

Network: Standard Network costs will apply.
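Putting the rates above together, a rough daily cost estimate can be computed as follows. This is a back-of-the-envelope sketch: it ignores the free-tier quotas, small ops, network and storage charges:

```python
# Entity-based rates from the new pricing, in dollars per 100,000 entities.
RATES_PER_100K = {"writes": 0.18, "reads": 0.06, "deletes": 0.02}

def estimate_daily_cost(writes=0, reads=0, deletes=0):
    """Estimate daily Cloud Datastore cost under the entity-counting model."""
    counts = {"writes": writes, "reads": reads, "deletes": deletes}
    return sum(counts[op] / 100_000 * rate for op, rate in RATES_PER_100K.items())
```

For example, an app doing 1M writes, 2M reads and 500K deletes per day would pay about $1.80 + $1.20 + $0.10 = $3.10 per day, regardless of how many indexes its writes touch.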

New storage usage calculations

To coincide with our pricing changes on July 1st, Cloud Datastore will also use a new method for calculating bytes stored. This method will be transparent to developers so you can accurately calculate storage costs directly from the property values and indexes of the Entity. This new method will also result in decreased storage costs for the majority of customers.

Our current method relies heavily on internal implementation details that can change, so we’re moving to a fixed system calculated directly from the user data submitted. As the new calculation method gets finalized, we’ll post the specific details so developers can use it to estimate storage costs.

Building what’s next

With simpler pricing for Cloud Datastore, you can spend less time micro-managing indexes and focus more on building what’s next.

Learn more about Google Cloud Datastore or check out our getting started guide.

- Posted by Dan McGrath, Product Manager, Google Cloud Platform

Introducing online resizing of Google Cloud Persistent Disks without downtime

Google Compute Engine provides Persistent Disks to use as the primary block storage for your virtual machine instances. Provisioning the appropriate size of block storage has been a challenge for many cloud and on-premise customers because it requires planning for future data growth and performance needs. Until now, when a virtual machine ran out of space, there was no easy way to scale the size of your block storage.

Today we're announcing general availability of online resizing for Persistent Disks. It’s as easy as a button click or a single API call. It doesn’t cause any downtime to Google Compute Engine instances and doesn’t require snapshotting. It applies to all Persistent Disks, including the recently announced 64 TB volumes.

With the introduction of this feature, Persistent Disk capacity planning becomes much simpler. Persistent Disks can be provisioned based on immediate needs and increased in size later when you require more space or performance1. Instead of a complex workflow that takes the system offline (snapshot the disk, restore the snapshot to a larger device, then bring it back online), a single command makes the physical device larger, and the device immediately has higher IOPS and throughput limits. After you resize a disk that's already mounted on a VM instance, resize the file system; usually it's as simple as running resize2fs on Linux or resizing partitions in Windows Disk Manager.
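In practice, growing a disk and its filesystem takes just two steps. The disk name, zone, size and device path below are placeholders; check your own with `gcloud compute disks list` and `lsblk`:

```shell
# Grow the Persistent Disk online; the attached instance keeps running.
gcloud compute disks resize my-data-disk --size 500GB --zone us-central1-b

# Then, on the VM, grow the ext4 filesystem to fill the larger device.
sudo resize2fs /dev/sdb
```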

Internally we've been using online disk resizing with Cloud SQL Second Generation. It has enabled automatic growth of Persistent Disks used by Google Cloud SQL with no downtime.

We hope you enjoy the new feature!

- Posted by Igor Belianski, Software Engineer, Google Compute Engine

1 Persistent Disk performance depends on the size of the volume and the type of disk you select. Larger volumes can achieve higher I/O levels than smaller volumes.

Introducing Google Stackdriver: unified monitoring and logging for GCP and AWS

GCP NEXT 2016, SAN FRANCISCO: We’re excited to introduce Google Stackdriver, a unified monitoring, logging and diagnostics service that makes ops easier, whether you’re running applications on Google Cloud Platform (GCP), Amazon Web Services (AWS)1, or a combination of the two.

Stackdriver is the first service to include rich dashboards, uptime monitoring, alerting, log analysis, tracing, error reporting and production debugging, across GCP and AWS, in a single, unified offering. This combination significantly reduces the time that teams spend finding and fixing issues in production.

A unified view across cloud platforms

If you're running an application that spans two or more infrastructure platforms, you’re not alone. We’ve found teams using hybrid infrastructure for a variety of reasons: replicating across cloud providers for higher availability, migrating from one cloud to another, or simply choosing the services that best meet the needs of each application or component.

To support teams who choose to use GCP and AWS, Stackdriver offers native monitoring, logging and error reporting for both. With Stackdriver, you could start with a single dashboard to monitor the health of an application that's split across clusters on GCP and AWS.

Stackdriver Console for hybrid environment

Likewise, you can define an alerting policy to notify you if either cluster reaches capacity.

Alerting policy incorporating GCP and AWS capacity metrics

You can search for errors in your GCP and AWS logs in a single interface.

Logs Viewer - search by GCP or AWS service

Finally, Stackdriver will send you error reports when new errors are detected within applications running on either platform:

Strong support for AWS is an essential part of Stackdriver. If you’re running a web application behind an Elastic Load Balancer, for example, Stackdriver provides you with a comprehensive view of the health of that cluster with no setup, including configuration information, uptime, recent events and summary metrics as well as per-availability zone and per-host breakdowns.

The same support for AWS is maintained throughout Stackdriver, from IAM-based setup and API integration to preconfigured dashboards for widely used AWS services to support for SNS Topics as an alerting mechanism and more.

Eliminate data silos, fix problems faster

Stackdriver drastically reduces the number of disparate tools necessary to identify and troubleshoot issues. Within Stackdriver, you can configure uptime checks that monitor the availability of your application endpoints. You can incorporate logs and metrics from your cloud platforms, systems, databases, messaging queues, web servers and application tier into the same monitoring system. You can maintain critical context, such as the timeframe of an issue, as you follow an issue across the monitoring, logging and diagnostics components. For many customers, this will eliminate the need to manually correlate this information across five or more disconnected tools, saving valuable time during incidents and outages.

Your team’s primary starting point might be a summary dashboard that provides an at-a-glance view into the health of your application. That view can include metrics from your cloud platform, system agents, uptime checks, logs and more.

Sample Custom Dashboard with AWS and GCP metrics

Stackdriver can alert your team when issues occur. To avoid dealing with alerts from many different systems when a single issue occurs, you can define alerting policies in Stackdriver that trigger when multiple conditions are true, such as a URL failing an uptime check and latency increasing by over 30 percent over a 15-minute period.

Alerting policy with ELB Uptime Check and Latency Threshold

When you discover an issue, Stackdriver helps you follow the trail to the root cause. For example, upon receiving an error report for your Google App Engine application, you may choose to view a summary dashboard, drill down to traces of the latency per URL that your application is serving, and ultimately view logs of specific requests.

Stackdriver Trace Overview

You can also take advantage of integrations with an ecosystem of services to extend the value of Stackdriver. For example, you can stream Stackdriver logs to BigQuery to perform ad-hoc analysis. Likewise, you can use Google Cloud Datalab to perform ad-hoc visualization of time series data. Finally, you can choose among a variety of alerting integrations to ensure that your team receives alert notifications in the appropriate format, including Slack, HipChat, Campfire, and PagerDuty.

Get started in 2 minutes, nothing to maintain or scale

Getting started with Stackdriver is easy. Once you create your account and configure integration with AWS (if applicable), Stackdriver will automatically discover your cloud resources and provide an initial set of metrics and dashboards. From there, you can create uptime checks and deploy our open source agents (packages of Collectd for metrics, Fluentd for logs) to get deeper visibility into your virtual machines, databases, web servers and other components in just a couple of commands.

Stackdriver is built on top of the same powerful technologies that provide monitoring, logging and diagnostics for Google, so you can rest assured that Stackdriver will scale with you as your environment grows. And since Stackdriver is a hosted service, Google takes care of the operational overhead associated with monitoring and maintaining the service for you.

Try Google Stackdriver free during Beta

We're excited to introduce Google Stackdriver and hope you find it valuable in making ops easier, whether you're running on AWS, GCP or both. The service is currently in Beta. Learn more and try it for free at http://cloud.google.com/stackdriver.

Please note that we’ll continue to support existing Stackdriver customers and work closely with them to migrate to Google Stackdriver once it’s generally available.

- Posted by Dan Belcher, Product Manager

1 "Amazon Web Services" and "AWS" are trademarks of Amazon.com, Inc. or its affiliates in the United States and/or other countries.

Google Cloud Platform adds two new regions, 10 more to come

The public cloud is a network, not just a collection of data centers. Our global network has allowed us to build products that billions of users around the world can depend on. Whether you’re in Taipei or Tijuana, you can get Gmail, Search, Maps or your Google Cloud Platform services with Google speed and reliability.

We’re adding to this global network for Cloud Platform customers by expanding our roster of existing Cloud Platform regions with two more — both operational later this year:
  • US Western region in Oregon
  • East Asia region in Tokyo, Japan

As always, each region has multiple availability zones, so that we can offer high-availability computing to our customers in each locale.

These are the first two of more than 10 additional GCP regions we'll be adding to our network through 2017. This follows the opening last year of a US east coast region in South Carolina for all major GCP services.

We’re opening these new regions to help Cloud Platform customers deploy services and applications nearer to their own customers, for lower latency and greater responsiveness. With these new regions, even more applications become candidates to run on Cloud Platform, and get the benefits of Google-level scale and industry-leading price/performance.

The Japan region will be in beta for at least a month. You can fill out this survey to sign up for the beta, and we’ll notify you as soon as it’s ready. If you're interested in Oregon, please fill out this survey to be notified.

To learn how to make the best use of Cloud Platform regions for your application needs, please see the Geography and Regions details page.

- Posted by Varun Sakalkar, Product Manager