Category Archives: Google Cloud Platform Blog

Product updates, customer stories, and tips and tricks on Google Cloud Platform

OpenStack users: Back up your Cinder volumes to Google Cloud Storage



OpenStack Mitaka has just launched and we’re super excited about it. In collaboration with Red Hat and Biarca, we’ve developed an OpenStack Cinder backup driver for Google Cloud Storage, available in the Mitaka release.
Google joined the OpenStack Foundation in July 2015, when we announced Kubernetes integration with OpenStack. Our work on Mitaka is the next step on our roadmap to making Google Cloud Platform a seamless public cloud complement for OpenStack environments.

Backup and recovery services represent one of the most costly and complex aspects of large scale infrastructure management. OpenStack provides an efficient mechanism for allocation and management of persistent block storage through Cinder. In an OpenStack deployment, Cinder volumes house virtual machine data at rest as well as, potentially, the operating system boot device. In production deployments, it’s critical that this persistent data is protected as part of a comprehensive business continuity and disaster recovery strategy. To satisfy this requirement, Cinder provides a backup service that includes a backup driver specification allowing storage vendors to add support for additional backup targets.

This is where we come in. The addition of highly durable and available cloud-scale object storage allows organizations to shift from bulk commodity storage for backup to a more operationally efficient and cost-effective architecture, all while avoiding additional capital expenditures and the complexity of managing storage device scale out. The traditional barrier to adoption for object storage is the engineering effort required to adapt existing software and systems, designed for either file or block storage access, to object store native REST interfaces. The Cinder backup driver model provides the potential to abstract this engineering complexity for OpenStack users. As long as an appropriate backup driver is installed, the backup target works with Cinder as intended.

Our OpenStack Cinder backup driver is included as part of the standard Cinder backup driver set in Mitaka and requires minimal setup to get up and running. Full Cinder backup functionality was successfully tested with the Cloud Storage driver against 1GB, 5GB and 10GB Cinder volume sizes. In addition, the driver provides the following user-configurable parameters to allow administrators to tune the installation:

backup_gcs_credential_file
    Full path of the JSON credentials file for the Google service account (downloaded from the Google Developers Console).

backup_gcs_bucket
    GCS bucket name to use for backups. Please refer to the official bucket naming guidelines.

backup_gcs_driver
    Used for selecting the Google backup driver.

backup_gcs_project_id
    Project ID where the backup bucket will be created.

backup_gcs_object_size
    The size in bytes of GCS backup objects. Default: 52428800 bytes.

backup_gcs_block_size
    The change tracking size for incremental backup, in bytes. backup_gcs_object_size must be a multiple of backup_gcs_block_size. Default: 32768 bytes.

backup_gcs_user_agent
    HTTP user-agent string for the GCS API.

backup_gcs_reader_chunk_size
    Chunk size for GCS object downloads, in bytes. Default: 2097152 bytes.

backup_gcs_writer_chunk_size
    Chunk size for GCS object uploads, in bytes. Pass in a value of -1 to cause the file to be uploaded as a single chunk. Default: 2097152 bytes.

backup_gcs_num_retries
    Number of times to retry transfers. Default: 3.

backup_gcs_bucket_location
    Location of the GCS bucket. Default: 'US'.

backup_gcs_storage_class
    Storage class of the GCS bucket. Default: 'NEARLINE'.

backup_gcs_retry_error_codes
    List of GCS error codes for which to initiate a retry. Default: ['429'].

backup_gcs_enable_progress_timer
    Enable or disable the timer that sends periodic progress notifications to Ceilometer while backing up the volume to GCS backend storage. Default: True.
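For reference, a minimal cinder.conf fragment wiring these options together might look like the following. The bucket name, project ID and credential path are placeholders, and the driver module name shown is the one shipped in Mitaka; check the release documentation for your deployment:

```ini
[DEFAULT]
backup_driver = cinder.backup.drivers.google
backup_gcs_bucket = my-cinder-backups
backup_gcs_project_id = my-gcp-project
backup_gcs_credential_file = /etc/cinder/gcs-credentials.json
backup_gcs_bucket_location = US
backup_gcs_storage_class = NEARLINE
```

After updating the configuration, restart the cinder-backup service so the new driver settings take effect.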



The Cinder backup driver works with any class of Cloud Storage, including our Google Cloud Storage Nearline archival option. Nearline provides the full durability of Standard storage at a slightly lower level of availability and with slightly higher latency, and offers read performance of 4 MB/s per TB stored, scaling with storage density. As an example, 3TB of backup data can be restored at 12 MB/s. The low cost yet high performance of Nearline makes backing up Cinder volumes economical while offering the ability to restore quickly if necessary.
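As a quick sketch, the scaling rule above can be expressed directly; the 4 MB/s-per-TB figure is the one quoted in this post:

```python
def nearline_restore_rate_mb_s(stored_tb: float, rate_per_tb: float = 4.0) -> float:
    """Approximate Nearline read throughput in MB/s: it scales with data stored."""
    return stored_tb * rate_per_tb

# Example from the post: 3 TB of backup data restores at about 12 MB/s.
print(nearline_restore_rate_mb_s(3))  # -> 12.0
```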

If you’re running OpenStack, there’s no need to invest in additional storage systems or build out a second datacenter for backup and recovery. You can now use Cloud Storage in a hybrid scenario, optimized via the Cinder backup driver now available in Mitaka.

Monitor your application errors with Stackdriver Error Reporting



We're excited to introduce Stackdriver Error Reporting to help you quickly understand your application’s top or new errors. Stackdriver Error Reporting counts, analyzes and aggregates in real time the crashes in your running cloud services, and notifies you when there's something new.
Stackdriver Error Reporting: listing errors sorted by occurrence count



Stackdriver Error Reporting allows you to monitor your application’s errors, aggregated into meaningful groups tailored to your programming language and framework. This helps you see the problems rather than the noise.

Maybe you want to watch for recent errors in a given service, or judge the user impact of an outage. Just sort by first/last seen date, occurrences, or number of affected users to get the information you need.
Email notification for an error that could not be grouped with previously received errors



You can opt in to be notified when a new error cannot be grouped with the previously received ones, and jump directly from the email to the details of the new error.

The “detail view” presents key error information to help you assess its severity and understand the root cause: a bar chart over time, the first time this error has been seen, and the affected service versions. Look through error samples to better diagnose the problem: inspect the stack trace focusing on its relevant parts and start to debug in Stackdriver Debugger, learn more about the request that triggered it and navigate to the associated logs.

Details of an error: understand the root cause



While an immediate next step could be to roll back your service, you also want to work on fixing the errors. Stackdriver Error Reporting integrates with your regular workflow by allowing you to link an error to an issue from your issue tracker. Once done, you can see at a glance which errors have associated issues.

Link an error to an issue URL in your favorite bug tracker



The feedback from our alpha testers has been extremely positive. A frequent response was that Stackdriver Error Reporting helped them identify hard-to-catch intermittent errors that were hidden in logs, increasing product quality. Thank you for the feedback!

Get started

Stackdriver Error Reporting is now available in beta for everyone to try. Zero setup is needed for App Engine applications; other platforms require just a few configuration steps.

Visit http://console.cloud.google.com/errors to get started with your project.

Lessons from a Google App Engine SRE on how to serve over 100 billion requests per day

In this blog post we caught up with Chris Jones, a Site Reliability Engineer on Google App Engine for the past three-and-a-half years and SRE at Google for almost 9 years, to find out more about running production systems at Google. Chris is also one of the editors of Site Reliability Engineering: How Google Runs Production Systems, published by O’Reilly and available today.

Google App Engine serves over 100 billion requests per day. You might have heard about how our Site Reliability Engineers, or SREs, make this happen. It’s a little bit of magic, but mostly about applying the principles of computer science and engineering to the design and development of computing systems, generally very large, distributed ones.

Site Reliability Engineering is a set of engineering approaches that lets us, or anyone, run better production systems, and it went on to inform the concept of DevOps for the wider IT community. It’s interesting because it’s a relatively straightforward way of improving performance and reliability at planet scale, but it can be just as useful for any company for, say, rolling out Windows desktops. Done right, SRE techniques can increase the effectiveness of operating any computing service.

Q: Chris, tell us how many SREs operate App Engine and at what scale?

CJ: We have millions of apps on App Engine serving over 100 billion requests per day supported by dozens of SREs.

Q: How do we do that with so few people?

CJ: SRE is an engineering approach to operating large-scale distributed computing services. Making systems highly standardized is critical. This means all systems work in similar ways to each other, which means fewer people are needed to operate them since there are fewer complexities to understand and deal with.

Automation is also important: our turn-up processes to build new capacity or scale load balancing are automated so that we can scale these processes nicely with computers, rather than with more people. If you put a human on a process that’s boring and repetitive, you’ll notice errors creeping up. Computers’ response times to failures are also much faster than ours. In the time it takes us to notice the error the computer has already moved the traffic to another data center, keeping the service up and running. It’s better to have people do things people are good at and computers do things computers are good at.

Q: What are some of the other approaches behind the SRE model?

CJ: Because there are SRE teams working with many of Google’s services, we’re able to extend the principle of standardization across products: SRE-built tools originally used for deploying a new version of Gmail, for instance, might be generalized to cover more situations. This means that each team doesn’t need to build its own way to deploy updates. This ensures that every product gets the benefit of improvements to the tools, which leads to better tooling for the whole organization.

In addition, the combination of software engineering and systems engineering knowledge in SRE often leads to solutions that synthesize the best of both backgrounds. Google’s software network load balancer, Maglev, is an example — and it’s the underlying technology for the Google Cloud Load Balancer.

Q: How do these approaches impact App Engine and our customers running on App Engine?

CJ: Here’s a story that illustrates it pretty well. In the summer of 2013, we moved all of App Engine’s US region from one side of the country to the other. The move caused no downtime for our customers.

Q: How?

CJ: We shut down one App Engine cluster, and as designed, the apps running on it automatically moved to the remaining clusters. We had created a copy of the US region’s High Replication Datastore in the destination data center ahead of time so that those applications’ data (and there were petabytes of it!) was already in place; changes to the Datastore were automatically replicated in near real-time so that it was consistently up to date. When it was time to turn on App Engine in the new location, apps assigned to that cluster automatically migrated from their backup clusters and had all their data already in place. We then repeated the process with the remaining clusters until we were done.

Advance preparation, combined with extensive testing and contingency plans, meant that we were ready when things went slightly wrong and were able to minimize the impact on customers. And of course, we put together an internal postmortem — another key part of how SRE works — to understand what went wrong and how to fix it for the future, without pointing fingers.

Q: Very cool. How can we find out more about SRE?

CJ: Sure. If you’re interested in learning more about how Site Reliability Engineering works at Google, including the lessons we learned along the way, check out this website, the new book and we’ll also be at SREcon this week (April 7-8) giving various talks on this topic.

- Posted by Jo Maitland, Managing Editor, Google Cloud Platform

Google and Rackspace co-develop open server architecture based on new IBM POWER9 hardware

Rethinking data center design is happening out in the open here at Google. Today we're announcing that we’re working with Rackspace to co-develop an open server architecture design specification based on IBM’s new POWER9 CPU.

We also recently joined the Open Compute Project (OCP) and hope to submit this work to the OCP community. In fact, the POWER9 data center server specification is designed to fit in the proposed 48V open rack that we're co-designing with Facebook.

We’ve been working on OpenPOWER since 2014, when we helped found the OpenPOWER Foundation, and we’re now POWER-ready. This means the architecture is fully supported across our toolchain, allowing developers to target apps to POWER with a simple flag.

It won’t surprise anyone to hear that demand for compute at Google has been relentless and it isn’t slowing down any time soon. We’ve found 60 trillion web addresses so far, versus one trillion in 2008. To meet that demand, our goal is to ensure our fleet is capable of handling ISA heterogeneity, to achieve best-in-class performance and value.

We're committed to open innovation and to optimizing performance and cost in data centers, and look forward to passing these savings along to our internal users as well as our Google Cloud Platform customers.

- Posted by Maire Mahony, Hardware Engineering Manager at Google & Director, OpenPOWER Foundation

Google Cloud Datastore gets faster cross-platform API

Today we’re announcing another important update to our NoSQL database, Google Cloud Datastore. We’ve redesigned the underlying architecture that supports the cross-platform API for accessing Datastore outside of Google App Engine, such as from Google Container Engine and Google Compute Engine, dramatically improving performance and reliability of the database. This follows new and simpler pricing for Cloud Datastore, announced last week.

The new Cloud Datastore API version (v1beta3) is available now. You need to enable this API before you can use it, even if you previously enabled an earlier version of the API.

Enable Cloud Datastore API

We’re also publishing a Service Level Agreement (SLA) for the API, which will take effect upon its General Availability release.

Now that v1beta3 is available, we’re deprecating the old v1beta2 API with a six-month grace period before decommissioning it on September 30th, 2016.

New Beta API revision

In the new release, we re-architected the entire serving path with an eye on performance and reliability. Cloud Datastore API revision v1beta3 has lower latency in both the average and long-tail cases. Whether it’s magical items transferring to a player’s inventory faster or financial reports loading on a snappier website, everyone loves fast.


In addition to these significant performance improvements, the v1beta3 API gives us a new platform upon which we can continue to improve performance and functionality.

You can use v1beta3 using the idiomatic Google Cloud Client Libraries (in Node.js, Python, Java, Go, and Ruby), or alternatively via the low-level native client libraries for JSON and Protocol Buffers over gRPC. You can learn more about the various client libraries in our documentation.

Cloud Datastore Service Level Agreement

Today we’re publishing an SLA for the General Availability release. Accessing Google Cloud Datastore via the Beta API is not covered by an SLA, although the SLA we’re publishing can help you estimate the expected performance of the Beta API. The SLA will only take effect when we reach General Availability.

App Engine Client libraries for Cloud Datastore are still covered as part of the App Engine SLA.

If you're using the Google Cloud Client Libraries, upgrading is as simple as updating the client libraries from GitHub. We look forward to what you build next with our faster cross-platform API for Cloud Datastore.

To learn more about Cloud Datastore, check out our getting started guide.

- Posted by Dan McGrath, Product Manager, Google Cloud Platform

Google Cloud collaborates with Kinvey for HIPAA-compliant mobile Backend-as-a-Service

Our guest blogs are written by third-party developers, partners and experts with real-world expertise creating and running applications on Google Cloud Platform. They're a great way to dive into the thoughts and opinions of folks at the forefront of software engineering and business development in cloud computing. Today’s guest blog is by Sravish Sridhar, Founder and CEO of mobile Backend-as-a-Service provider, Kinvey.

Modernizing legacy enterprise apps to work on mobile devices is no small feat, and if you want to be sure those apps still meet tough government regulations once mobile, you’re in for a world of complexity.

Kinvey's collaborating with Google to simplify this process. We’ve extended our mobile Backend-as-a-Service — a fully-managed, HIPAA compliant platform built on Google Cloud — to developers at healthcare providers, pharmaceutical companies and in life sciences. Our services satisfy the stringent policies of patient privacy as mandated by U.S. Government HIPAA regulations.

Kinvey on GCP provides a decoupled architecture for front-end developers to iterate on their apps and deliver them in an agile manner, without having to wait on backend systems owners to provision connectors to enterprise data and auth systems. Here’s how it works:


  • An app developer starts to build the UI/UX of their app using the front-end programming language or framework of their choice — Android, Objective-C, Swift, Ionic, Xamarin, PhoneGap, etc.
  • The developer downloads the Kinvey SDK for the particular language they're using and uses the appropriate Kinvey SDK to take care of client-side functionality like managing and anonymizing auth tokens, marshaling data between the app and Kinvey’s backend APIs, offline caching and sync and data encryption.
  • The app is wired up to backend functionality by leveraging Kinvey’s backend features, such as an identity service to register/login users, data store to store and retrieve data from the cloud, file store to cache large files like photos and videos, and custom business logic that can be written and provisioned on Kinvey’s Node.js PaaS
  • In the meantime, owners of backend enterprise systems can connect Kinvey to their enterprise auth and data sources, without writing any code. They use Kinvey’s Mobile Identity Connect (MIC) to connect to auth protocols like Active Directory, OpenID, LDAP, SAML, etc. and Kinvey’s RAPID data connectors and custom data links to connect to enterprise data services like Epic, Cerner, SAP and SharePoint. Services provisioned via MIC and RAPID are then made available to the front-end developers by publishing them in Kinvey’s Service Catalog, with appropriate access policies.
  • The front-end developer can then "flip a switch" and instruct Kinvey to use a MIC auth service instead of the default Kinvey auth service, and one or more RAPID services instead of sample data stored in collections in the Kinvey data store.
  • With no front-end app code change, the app then works end-to-end with enterprise auth and data systems.


By providing connectors to Electronic Health Record (EHR) systems like Epic and Cerner, Kinvey makes it easy for developers to launch apps without having to focus on complex enterprise integrations.

Healthcare customers require a HIPAA compliant solution to ensure that patient data is secure end-to-end. Google Cloud Platform’s infrastructure, Cloud Storage and CDN allow us to store and deliver ​data and files in a highly secure and compliant fashion. Specifically, our mBaaS on Google Cloud offers features such as:

  • Plug-in client features for offline caching, network management and RESTful data access to accelerate development
  • Turn-key backend services for data integration, IAM and orchestration for new mobile use cases
  • Microservices for interconnectivity between your enterprise systems
  • Security at every level from mobile client to infrastructure layer
  • Mobile app analytics and reporting for fine-tuning operations

To see how to get started, sign up for Kinvey’s HIPAA compliant mobility platform.

- Posted by Sravish Sridhar, Founder/CEO, Kinvey


Introducing Style Detection for Google Cloud Vision API

At Google Cloud Platform, we’re thrilled by the developer community’s enthusiastic response to the beta release of Cloud Vision API and our broader Cloud Machine Learning product family unveiled last week at GCP NEXT.

Cloud Vision API is a tool that enables developers to understand the contents of an image, from identifying prominent natural or man-made landmarks to detecting faces and emotion. Right now, Vision API can even recognize clothing in an image and label dominant colors, patterns and garment types.

Today, we’re taking another step forward. Why only evaluate individual components of an outfit when we could evaluate the full synthesis — the real impact of what you wear in today’s culture?

We’re proud to announce Style Detection, the newest Cloud Vision API feature. Using millions of hours of deep learning, convolutional neural networks and petabytes of source data, Vision API can now not just identify clothing, but evaluate the nuances of style to a relative degree of uncertainty.

Style Detection aims to help people improve their style — and lives — by navigating the complex and fickle landscape of fashion. Does a brown belt go with black shoes? Pleats or no pleats? “To tuck or not to tuck?” is now no longer a question. With Style Detection, we’re able to mine our nearly bottomless combined data sets of selfies, fashion periodicals and the unstructured ramblings of design bloggers into a coherent and actionable tool for picking tomorrow’s trousers.

We’re already seeing incredible results. Across our training corpus, we were able to detect the majority of personal style choices and glean with 52-97% accuracy not just what people were wearing, but what those clothes might say about them. The possibilities are endless — and it could mean the end of spandex forever!

Learn more about Style Detection and the Cloud Vision API here. We’re offering it to a small group of developers in alpha today (obviously, there are still details to iron out).

- Posted by Miles Ward, Global Head of Solutions

Financial services firm processes 25 billion stock market events per hour with Google Cloud Bigtable

FIS, a global financial technology and services firm and frequent leader of the FinTech Top 100 list, recently ran a load test of their system on Google Cloud Platform to process, validate and link U.S. stock exchange market events. FIS used Google Cloud Dataflow and Google Cloud Bigtable to process 25 billion simulated market events in 50 minutes, generating some impressive statistics in the process.

Cloud Bigtable achieved read rates in excess of 34 million events per second and 22 million event writes per second using 3500 Cloud Bigtable server nodes and 300 n1-standard-32 VMs with Cloud Dataflow. Additionally, Cloud Bigtable provided sustained rates of over 22 million event reads per second and 16 million event writes per second for extended periods of time.

Cloud Bigtable also achieved significant I/O bandwidth during the load test: read bandwidth peaked at 34 GB/s while write bandwidth peaked at 18 GB/s. It sustained significant bandwidth for input and output over 30-minute periods as well: 22 GB/s for reads and 13 GB/s for writes.
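As a back-of-the-envelope check on these figures, using only the numbers quoted in this post, the average rate implied by the headline result works out to roughly 8.3 million events per second, well below the 34-million-events-per-second read peak:

```python
# Headline numbers from the FIS load test: 25 billion simulated
# market events processed end-to-end in 50 minutes.
events = 25_000_000_000
duration_s = 50 * 60

avg_events_per_s = events / duration_s
print(f"{avg_events_per_s:,.0f} events/s on average")
```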


For FIS, these performance capabilities make it possible to process an entire day’s worth of U.S. equities and options data and make it available for analysis within four hours.

For the complete set of benchmark results, see these slides. You can see a more detailed description of the overall system architecture presented by Neil Palmer and Todd Ricker from FIS and Carter Page, engineering manager for Google Cloud Bigtable:


We look forward to working with other innovative companies like FIS to help them address data processing challenges with the performance, scalability and NoOps approach that Cloud Bigtable provides.

- Posted by Misha Brukman, Product Manager for Google Cloud Bigtable

Google Cloud Datastore simplifies pricing, cuts cost dramatically for most use-cases

Google Cloud Datastore is a highly-scalable NoSQL database for web and mobile applications. Today we’re announcing much simpler pricing, and as a result, many users will see significant cost-savings for this database service.

Along with the simpler pricing model, there’ll be a more transparent method of calculating stored data in Cloud Datastore. The new pricing and storage calculations will go into effect on July 1st, 2016. For the majority of our customers, this will effectively result in a price reduction.


New pricing structure

We’ve listened to your feedback and will be simplifying our pricing. The new pricing will go into effect on July 1st, 2016, regardless of how you access Datastore. Not only is it simpler, but also the majority of our customers will see significant cost savings. This change removes the disincentive our current pricing imposes on using the powerful indexing features, freeing developers from over-optimizing index usage.

We’re simplifying pricing for entity writes, reads and deletes by moving from internal operation counting to a more direct entity counting model as follows:

Writes: In the current pricing, writing a single entity translated into one or more write operations depending on the number and type of indexes. In the new pricing, writing a single entity costs just one write regardless of indexes, at $0.18 per 100,000. This makes writes more affordable for people using multiple indexes: you can use as many indexes as your application needs without increasing write costs. Since the vast majority of entity writes previously translated into more than four write operations per entity, this represents significant cost savings for developers.

Reads: In the current pricing, some queries charged a read operation per entity retrieved plus an extra read operation for the query. In the new pricing, you'll only be charged per entity retrieved. Small ops (projections and keys-only queries) still charge just a single read for the entire query. The cost per entity read stays the same as the old per-operation cost of $0.06 per 100,000. This means most developers will see reduced costs for reading entities.

Deletes: In the current pricing model, deletes translated into two or more writes depending on the number and type of indexes. In the new pricing, you'll be charged one delete operation per entity deleted, at $0.02 per 100,000. This means deletes are now discounted by at least 66%, and often by more.

Free Quota: The free quota limit for writes is now 20,000 requests per day, since we no longer charge multiple write operations per entity written. Deletes now fall under their own free tier of 20,000 requests per day. Overall, this means more free requests per day for the majority of applications.

Network: Standard Network costs will apply.
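Putting the new rates together, here is a hedged sketch of a daily cost estimate. The rates and the write/delete free tiers are the ones quoted above; this post doesn't state a read free tier, so none is applied here, and real bills may differ:

```python
def tier_cost(count: int, free: int, rate_per_100k: float) -> float:
    """Cost of one operation type after subtracting its daily free quota."""
    billable = max(0, count - free)
    return billable / 100_000 * rate_per_100k

def datastore_daily_cost(writes: int, reads: int, deletes: int) -> float:
    """Rough daily cost (USD) under the new entity-based pricing."""
    return (
        tier_cost(writes, 20_000, 0.18)     # $0.18 per 100k entity writes
        + tier_cost(reads, 0, 0.06)         # $0.06 per 100k entity reads
        + tier_cost(deletes, 20_000, 0.02)  # $0.02 per 100k entity deletes
    )

# e.g. 120k writes, 100k reads and 20k deletes in one day:
print(round(datastore_daily_cost(120_000, 100_000, 20_000), 4))  # -> 0.24
```

Note how, unlike the old model, the write cost no longer depends on how many indexes each entity touches.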


New storage usage calculations

To coincide with our pricing changes on July 1st, Cloud Datastore will also use a new method for calculating bytes stored. This method will be transparent to developers so you can accurately calculate storage costs directly from the property values and indexes of the Entity. This new method will also result in decreased storage costs for the majority of customers.

Our current method relies heavily on internal implementation details that can change, so we’re moving to a fixed system calculated directly from the user data submitted. As the new calculation method gets finalized, we’ll post the specific details so developers can use it to estimate storage costs.

Building what’s next

With simpler pricing for Cloud Datastore, you can spend less time micro-managing indexes and focus more on building what’s next.

Learn more about Google Cloud Datastore or check out our getting started guide.

- Posted by Dan McGrath, Product Manager, Google Cloud Platform