Tag Archives: Pricing

Announcing resource-based pricing for Google Compute Engine



The promise and benefit of the cloud has always been flexibility, low cost, and pay-per-use. With Google Compute Engine, custom machine types let you create VM instances of any size and shape, and we automatically apply committed use and sustained use discounts to reduce your costs. Today, we are taking the concept of pay-for-use in Compute Engine even further with resource-based pricing.

With resource-based pricing we are making a number of changes behind the scenes that align how we meter custom and predefined machine types, as well as how we apply sustained use discounts. Simply put, we’ve made changes to automatically provide you with more savings and an easy-to-understand monthly bill. Who doesn’t love that?

Resource-based pricing considers usage at a granular level. Instead of evaluating your usage based on which machine types you use, it evaluates how many resources you consume over a given time period. What does that mean? It means that a core is a core and a GB of RAM is a GB of RAM, no matter what combination of predefined machine types you are running; we now look at usage at the resource level, in the aggregate. It gets better, too: sustained use discounts are now calculated regionally, instead of just within zones. That means you can accrue sustained use discounts even faster, so you can save even more automatically.

To better understand these changes, and to get an idea of how you can save, let’s take a look at how sustained use discounts worked previously, and how they’ll work moving forward.
  • Previously, if you used a specific machine type (e.g. n1-standard-4) with four vCPUs for 50% of the month, you got an effective discount of 10%. If you used it for 75% of the month, you got an effective discount of 20%. If you used it for 100% of the month, you got an effective discount of 30%.
Okay. Now, what if you used different machine types?
  • Let’s say you were running a web-based service. You started the month running an n1-standard-4 with four vCPUs. In the second week, user demand for your service increased and you scaled capacity, moving to an n1-standard-8 with eight vCPUs. Ever-increasing demand caused you to scale up again: in week three you began running an n1-standard-16 with sixteen vCPUs. Thanks to your success, you wound up scaling once more, ending the month running an n1-standard-32 with thirty-two vCPUs. In this scenario you wouldn’t receive any discount, because you didn’t run any single machine type for at least 50% of the month.
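The discount tiers above can be sketched as a simple function (a minimal illustration using only the tiers from this example, not a complete pricing model):

```python
def sustained_use_discount(fraction_of_month):
    """Effective sustained use discount for running a resource
    a given fraction of the month (tiers from the example above)."""
    if fraction_of_month >= 1.00:
        return 0.30
    if fraction_of_month >= 0.75:
        return 0.20
    if fraction_of_month >= 0.50:
        return 0.10
    return 0.0

# An n1-standard-4 run for 75% of the month earns a 20% effective discount.
print(sustained_use_discount(0.75))  # 0.2
```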

With resource-based pricing, we no longer consider your machine type and instead, we add up all the resources you use across all your machines into a single total and then apply the discount. You do not need to take any action. You save automatically. Let’s look at the scaling example again, but this time with resource-based pricing.
  • You began the month running four vCPUs, and subsequently scaled to eight, sixteen and finally thirty-two vCPUs. You ran four vCPUs all month, or 100% of the time, so you receive a 30% discount on those vCPUs. You ran another four vCPUs for 75% of the month, so you receive a 20% discount on those vCPUs. And finally, you ran another eight vCPUs for half the month, so you receive a 10% discount on those vCPUs. The remaining sixteen vCPUs ran for only one week, so they did not qualify for a discount.
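The layered breakdown in this example can be sketched as follows (a simplified model that assumes usage only grows during the month; the tier percentages are the ones from the example above):

```python
def discount(fraction_of_month):
    # Sustained use discount tiers from the example above.
    if fraction_of_month >= 1.00:
        return 0.30
    if fraction_of_month >= 0.75:
        return 0.20
    if fraction_of_month >= 0.50:
        return 0.10
    return 0.0

weekly_vcpus = [4, 8, 16, 32]  # vCPUs running in weeks 1 through 4
weeks = len(weekly_vcpus)

# Peel usage into layers: the vCPUs added at each step, and the
# fraction of the month that layer stayed running.
layers, prev = [], 0
for i, count in enumerate(weekly_vcpus):
    size = count - prev
    fraction = (weeks - i) / weeks
    layers.append((size, fraction, discount(fraction)))
    prev = count

for size, fraction, d in layers:
    print(f"{size} vCPUs ran {fraction:.0%} of the month -> {d:.0%} discount")
```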

And because resource-based pricing applies at a regional level, it’s now even easier for you to benefit from sustained use discounts, no matter which machine types you use, or the number of zones in a region in which you operate. Resource-based pricing will take effect in the coming months. Visit the Resource-based pricing page to learn more.

Introducing Preemptible GPUs: 50% Off



In May 2015, Google Cloud introduced Preemptible VM instances to dramatically change how you think about (and pay for) computational resources for high-throughput batch computing, machine learning, scientific and technical workloads. Then last year, we introduced lower pricing for Local SSDs attached to Preemptible VMs, expanding preemptible cloud resources to high performance storage. Now we're taking it even further by announcing the beta release of GPUs attached to Preemptible VMs.

You can now attach NVIDIA K80 and NVIDIA P100 GPUs to Preemptible VMs for $0.22 and $0.73 per GPU hour, respectively. This is 50% cheaper than GPUs attached to on-demand instances, whose prices we also recently lowered. Preemptible GPUs are a particularly good fit for large-scale machine learning and other computational batch workloads, as customers can harness the power of GPUs to run distributed batch workloads at predictably affordable prices.
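As a rough illustration of the savings, here is the cost of a batch job at the rates above (the on-demand rate is simply derived from the stated 50% saving, so treat it as illustrative rather than a price quote):

```python
# Preemptible GPU prices from this announcement, in $/GPU-hour.
PREEMPTIBLE_RATES = {"nvidia-k80": 0.22, "nvidia-p100": 0.73}

def gpu_job_cost(gpu, gpu_count, hours, on_demand=False):
    """Cost of a batch job; on-demand is modeled as 2x preemptible,
    per the 50% discount stated above."""
    rate = PREEMPTIBLE_RATES[gpu] * (2 if on_demand else 1)
    return gpu_count * hours * rate

# A 100-hour job on 8 K80s:
print(f"${gpu_job_cost('nvidia-k80', 8, 100):.2f}")                  # $176.00
print(f"${gpu_job_cost('nvidia-k80', 8, 100, on_demand=True):.2f}")  # $352.00
```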

As a bonus, we're also glad to announce that our GPUs are now available in our us-central1 region. See our GPU documentation for a full list of available locations.

Resources attached to Preemptible VMs are the same as equivalent on-demand resources, with two key differences: Compute Engine may shut them down after providing you a 30-second warning, and you can use them for a maximum of 24 hours. This makes them a great choice for distributed, fault-tolerant workloads that don’t continuously require any single instance, and allows us to offer them at a substantial discount. But just like their on-demand equivalents, preemptible pricing is fixed: you’ll always get low cost and financial predictability, and we bill on a per-second basis. Any GPUs attached to a Preemptible VM instance are considered Preemptible and are billed at the lower rate.

To get started, simply append --preemptible to your instance create command in gcloud, set scheduling.preemptible to true in the REST API, or set Preemptibility to "On" in the Google Cloud Platform Console, and then attach a GPU as usual. You can use your regular GPU quota to launch Preemptible GPUs or, alternatively, you can request a special Preemptible GPU quota that only applies to GPUs attached to Preemptible VMs.
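For the REST path, the relevant fields of an instances.insert request body look roughly like this (the zone, machine type and accelerator values are placeholders, so adjust them for your setup):

```python
# Sketch of an instances.insert request body for a Preemptible VM with an
# attached GPU. Zone, machine type and accelerator type are placeholder values.
instance_body = {
    "name": "preemptible-gpu-instance",
    "machineType": "zones/us-central1-a/machineTypes/n1-standard-8",
    "scheduling": {
        "preemptible": True,        # the VM, and any attached GPUs, bill at preemptible rates
        "automaticRestart": False,  # preemptible VMs are not automatically restarted
    },
    "guestAccelerators": [{
        "acceleratorType": "zones/us-central1-a/acceleratorTypes/nvidia-tesla-k80",
        "acceleratorCount": 1,
    }],
}
```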
For users looking to create dynamic pools of affordable GPU power, Compute Engine’s managed instance groups can be used to automatically re-create your preemptible instances when they're preempted (if capacity is available). Preemptible VMs are also integrated into cloud products built on top of Compute Engine, such as Kubernetes Engine (GKE’s GPU support is currently in preview. The sign-up form can be found here).

Over the years we’ve seen customers do some very exciting things with preemptible resources: everything from satellite image analysis and financial services to quantum physics, computational mathematics and drug screening.
"Preemptible GPU instances from GCP give us the best combination of affordable pricing, easy access and sufficient scalability. In our drug discovery programs, cheaper computing means we can look at more molecules, thereby increasing our chances of finding promising drug candidates. Preemptible GPU instances have advantages over the other discounted cloud offerings we have explored, such as consistent pricing and transparent terms. This greatly improves our ability to plan large simulations, control costs and ensure we get the throughput needed to make decisions that impact our projects in a timely fashion." 
Woody Sherman, CSO, Silicon Therapeutics 

We’re excited to see what you build with GPUs attached to Preemptible VMs. If you want to share stories and demos of the cool things you've built with Preemptible VMs, reach out on Twitter, Facebook or G+.

For more details on Preemptible GPU resources, please check out the preemptible documentation, GPU documentation and best practices. For more pricing information, take a look at our Compute Engine pricing page or try out our pricing calculator. If you have questions or feedback, please visit our Getting Help page.

To get started using Preemptible GPUs today, sign up for Google Cloud Platform and get $300 in credits to try them out.

Cutting cluster management fees on Google Kubernetes Engine



Today, we're excited to announce that we have eliminated the cluster management fee for Google Kubernetes Engine, our managed Kubernetes service.

We founded the Kubernetes open-source project in 2014, and have remained the leading contributor to it. Internally at Google, we’ve been running globally scaled, production workloads in containers for over a decade. Kubernetes and Kubernetes Engine include the best of what we have learned, including the advanced cluster management features that web-scale production applications require. Today’s announcement makes Kubernetes Engine’s cluster management available at no charge, for any size cluster, effective immediately.

To put this pricing update in context, Kubernetes Engine has always provided a managed master at no charge for clusters of fewer than six nodes. For larger clusters we also provided the managed master at no charge, but we charged a flat fee of $0.15 per hour to manage the cluster. This flat fee is now eliminated for all cluster sizes. At Google, we’ve found that larger clusters are more efficient, especially when running multiple workloads. So if you were hesitating to create larger clusters, worry no more and scale freely!
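For a sense of the savings, eliminating the flat fee on a large cluster works out as follows (a back-of-the-envelope sketch, assuming roughly 730 hours in an average month):

```python
HOURS_PER_MONTH = 730    # approximate average month length in hours
OLD_FEE_PER_HOUR = 0.15  # previous flat fee for clusters of 6+ nodes

monthly_savings = OLD_FEE_PER_HOUR * HOURS_PER_MONTH
print(f"${monthly_savings:.2f} saved per large cluster per month")  # $109.50 saved per large cluster per month
```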


Nodes in the cluster | Cluster management fee (older pricing) | Cluster management fee (new pricing, effective immediately)
0 to 5 nodes         | $0                                     | $0
6+ nodes             | $0.15 / hour                           | $0


That’s great news, but some of you may be wondering what’s included in cluster management. In the context of Google Kubernetes Engine, every cluster includes a master VM that acts as its control plane. Kubernetes Engine’s cluster management includes the following capabilities, among others:



A useful point of comparison is the cost of managing your Kubernetes cluster yourself, either on Google Compute Engine or on another cloud. In a self-managed cluster, you pay for the VM that hosts the master and any resources you need for monitoring, logging and storing its state. Depending on the size of your cluster, moving to Kubernetes Engine could save a decent fraction of your total bill just by saving the cost of the master.

Of course, while dollar savings are nice, we have also invested Google engineering in automating cluster management with Kubernetes Engine to save you time and headaches. In a self-managed cluster, you're responsible for scaling the master as your cluster grows, and for backing up etcd. You have to keep an eye out for security patches and apply them. To access new Kubernetes features, you have to upgrade the master and cluster yourself. And cluster repair and scaling are most likely manual. With Google Kubernetes Engine, on the other hand, we take care of all of this complexity at no charge so you can focus on your business.
“[Google Kubernetes Engine] gives us elasticity and scalable performance for our Kubernetes clusters. It’s fully supported and managed by Google, which makes it more attractive to us than elastic container services from other cloud providers”  
 Arya Asemanfar, Engineering Manager at Mixpanel
We’re committed to raising the bar on Kubernetes’ reliability, cost-effectiveness, ease-of-use and enterprise readiness, and continue to add advanced management capabilities into Kubernetes Engine. For a preview of what’s next we invite you to join an early access program for node auto-provisioning, a new cluster management feature that provisions the right type of nodes in your auto-scaling cluster based on the observed behavior of your workloads. To join the early access program, fill out this form.

Google Cloud Platform for data center professionals: what you need to know



At Google Cloud, we love seeing customers migrate to our platform. Companies move to us for a variety of reasons, from low costs to our machine learning offerings. Some of our customers, like Spotify and Evernote, have described the various reasons that motivated them to migrate to Google Cloud.

However, we recognize that a migration of any size can be a challenging project, so today we're happy to announce the first part of a new resource to help our customers as they migrate. Google Cloud Platform for Data Center Professionals is a guide for customers who are looking to move to Google Cloud Platform (GCP) and are coming from non-cloud environments. It covers the basics of running IT: Compute, Networking, Storage, and Management. We've tried to write this from the point of view of someone with minimal cloud experience, so we hope you find this guide a useful starting point.

This is the first part of an ongoing series. We'll add more content over time, to help describe the differences in various aspects of running your company's IT infrastructure.

We hope you find this useful in learning about GCP. Please tell us what you think and what else you would like us to add, and be sure to sign up for the free trial!

Introducing Pivotal Cloud Foundry on Google Cloud Platform



A new way for enterprises to capitalize on Google scale and innovation

Our goal for Google Cloud Platform (GCP) is to build the most open cloud for all businesses, and make it easy for them to build and run great software. This means being good stewards of the open source community, and having strong engineering partnerships with like-minded industry leaders.

Today, we're happy to announce more about our collaboration with Pivotal. Its cloud-native platform, Pivotal Cloud Foundry (PCF), is based on the open source Cloud Foundry project that it started many years ago. It was a natural fit for the two companies to start working together.

A differentiated Pivotal Cloud Foundry with Google

Customers can now deploy and operate Pivotal Cloud Foundry on GCP. This is a powerful combination that brings Pivotal’s enterprise cloud-native experience together with Google’s infrastructure and innovative technology.

So what does that mean in the real-world? Deployments of PCF on GCP can include:


Further, the combination of PCF and GCP allows customers to access Google’s data and machine learning (ML) services within customer applications via custom-built service brokers that expose GCP services directly into Cloud Foundry.

This level of integration with Google’s infrastructure enables the enterprise to build and deploy apps that can scale, store and analyze data quickly. The following data and machine learning services are now available in Pivotal Cloud Foundry today:

Customer collaboration - PCF and GCP in action

We pride ourselves on our “engineer to engineer” approach to working with customers and partners. And that’s exactly how we worked with The Home Depot as a shared customer of GCP and Pivotal Cloud Foundry.

The Home Depot software development team worked side-by-side with Google and Pivotal as they co-engineered the integration of PCF on GCP. Together, they’re building business systems for a digital strategy around this partnership, and will be running parts of homedepot.com on PCF and GCP in time for this year’s Black Friday.

Getting started

We've published a “Pivotal Cloud Foundry on Google Cloud Platform” solutions document that provides an example deployment architecture, as well as links to various setup guides. These links range from the lower-level OSS bits up through step-by-step installation guides with screenshots from our friends at Pivotal. It's a comprehensive guide to help you get started with PCF on GCP.

What’s next

Bringing more GCP services into the Cloud Foundry ecosystem is a priority, and we’re looking at how we can further contribute to the Spring community. Stay tuned for more news and updates, but in the meantime, reach out to your local Pivotal or Google Cloud sales team or contact Sales to talk to someone about this exciting partnership.

Google Cloud Datastore simplifies pricing, cuts cost dramatically for most use-cases

Google Cloud Datastore is a highly scalable NoSQL database for web and mobile applications. Today we’re announcing much simpler pricing, and as a result, many users will see significant cost savings for this database service.

Along with the simpler pricing model, there’ll be a more transparent method of calculating stored data in Cloud Datastore. The new pricing and storage calculations will go into effect on July 1st, 2016. For the majority of our customers, this will effectively result in a price reduction.


New pricing structure

We’ve listened to your feedback and will be simplifying our pricing. The new pricing will go into effect on July 1st, 2016, regardless of how you access Datastore. Not only is it simpler, but the majority of our customers will also see significant cost savings. This change removes the disincentive our current pricing imposes on using the powerful indexing features, freeing developers from over-optimizing index usage.

We’re simplifying pricing for entity writes, reads and deletes by moving from internal operation counting to a more direct entity counting model as follows:

Writes: In the current pricing, writing a single entity translated into one or more write operations, depending on the number and type of indexes. In the new pricing, writing a single entity costs just 1 write regardless of indexes, at $0.18 per 100,000 writes. This makes writes more affordable for people using multiple indexes: you can use as many indexes as your application needs without increasing write costs. Since, on average, the vast majority of entity writes previously translated into more than 4 write operations per entity, this represents significant cost savings for developers.

Reads: In the current pricing, some queries charged a read operation per entity retrieved, plus an extra read operation for the query itself. In the new pricing, you'll only be charged per entity retrieved. Small ops (projections and keys-only queries) stay the same, charging only a single read for the entire query. The cost per entity read stays the same as the old per-operation cost of $0.06 per 100,000. This means most developers will see reduced costs for reading entities.

Deletes: In the current pricing model, deletes translated into 2 or more write operations, depending on the number and type of indexes. In the new pricing, you'll only be charged one delete operation per entity deleted, at $0.02 per 100,000. This means deletes are now discounted by at least 66%, and often by more.

Free Quota: The free quota limit for writes is now 20,000 requests per day, since we no longer charge multiple write operations per entity written. Deletes now fall under their own free tier of 20,000 requests per day. Overall, this means more free requests per day for the majority of applications.

Network: Standard Network costs will apply.
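Putting the new rates together, a month’s bill can be estimated directly from entity counts (a simplified sketch that ignores the free quotas and network charges; the traffic numbers are made up):

```python
# New Cloud Datastore prices from this announcement, per 100,000 entities.
PRICE_PER_100K = {"writes": 0.18, "reads": 0.06, "deletes": 0.02}

def monthly_cost(writes, reads, deletes):
    """Estimated monthly bill under the new entity-based pricing."""
    counts = {"writes": writes, "reads": reads, "deletes": deletes}
    return sum(counts[op] / 100_000 * PRICE_PER_100K[op] for op in counts)

# Example: 10M writes, 50M reads and 2M deletes in a month.
print(f"${monthly_cost(10_000_000, 50_000_000, 2_000_000):.2f}")  # $48.40
```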


New storage usage calculations

To coincide with our pricing changes on July 1st, Cloud Datastore will also use a new method for calculating bytes stored. This method will be transparent to developers so you can accurately calculate storage costs directly from the property values and indexes of the Entity. This new method will also result in decreased storage costs for the majority of customers.

Our current method relies heavily on internal implementation details that can change, so we’re moving to a fixed system calculated directly from the user data submitted. As the new calculation method gets finalized, we’ll post the specific details so developers can use it to estimate storage costs.

Building what’s next

With simpler pricing for Cloud Datastore, you can spend less time micro-managing indexes and focus more on building what’s next.

Learn more about Google Cloud Datastore or check out our getting started guide.

- Posted by Dan McGrath, Product Manager, Google Cloud Platform