
Verifying PostgreSQL backups made easier with new open-source tool



When was the last time you verified a database backup? If that question causes you to break into a cold sweat, rest assured you’re not alone.

Verifying backups should be a common practice, but it often isn’t. This can be an issue if there’s a disaster or—as is more likely at most companies—if someone makes a mistake when deploying database changes. One industry survey indicates that data loss is one of the biggest risks when making database changes.

PostgreSQL Page Verification Tool

At Google Cloud Platform (GCP), we recently wrote a tool to fight data loss and help detect data corruption early in the change process. We made it open source, because data corruption can happen to anybody, and we’re committed to making code available that helps ensure secure, reliable backups. If you use Google Cloud SQL for PostgreSQL, you’re in luck: we’re already running the PostgreSQL Page Verification Tool on your behalf.

The new PostgreSQL Page Verification tool is a command-line tool that you can run against a Postgres database. Since PostgreSQL version 9.3, it’s been possible to enable checksums on data pages so that data corruption doesn’t go unnoticed. With this utility, you can now verify all data files, whether the database is online or offline: the Page Verification Tool calculates a checksum for each data page and verifies it against the stored value.

How the Page Verification tool works

To use the PostgreSQL Page Verification tool, you must enable checksums when you initialize a new PostgreSQL database cluster; you can’t turn them on after the fact. Once checksums are enabled, the Page Verification tool computes its own checksum for each page and compares it to the checksum Postgres stored, confirming that they are identical. If a checksum doesn’t match, the tool identifies the corrupt data page.
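For example, checksums are enabled when the cluster is first created; here’s a minimal sketch, with an illustrative data directory:
initdb --data-checksums -D /var/lib/postgresql/data
The verification tool itself is then pointed at the cluster’s data files; see the project’s README for the exact command line.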

The Page Verification Tool can be run against a database that’s online or offline. It verifies checksums on PostgreSQL data pages without having to load each page into a shared buffer cache, and supports subsequent segments for tables larger than 1GB.

The tool skips Free Space Map, Visibility Map and pg_internal.init files, since they can be regenerated. While the tool can run against a database continuously, it does have a performance overhead associated with it, so we advise incorporating the tool into your backup process and running it on a separate server.

How to start using the PostgreSQL Page Verification tool

The Page Verification tool is integrated into Google Cloud SQL, so it runs automatically. We’re using the tool at scale to validate our customers’ backups. We do the verification process on internal instances of Cloud SQL to make sure your database doesn’t take a performance hit.

The value of the PostgreSQL Page Verification Tool comes from detecting corruption early, minimizing the data loss it can cause. Organizations that run the tool and get a successful verification can be confident they have a usable backup if disaster strikes.

At Google, when we make a database better, we make it better for everyone, so the PostgreSQL Page Verification tool is available to you via open source. We encourage Postgres users to download the tool at Google Open Source or GitHub. The best detection is early detection, not when you need to restore a backup.

Predict your future costs with Google Cloud Billing cost forecast



With every new feature we introduce to Google Cloud Billing, we strive to provide your business with greater flexibility, control, and clarity so that you can better align your strategic priorities with your cloud usage. In order to do so, it’s important to be able to answer key questions about your cloud costs, such as:
  • “How is my current month's Google Cloud Platform (GCP) spending trending?”
  • “How much am I forecasted to spend this month based on historical trends?”
  • “Which GCP product or project is forecasted to cost me the most this month?”
Today, we are excited to announce the availability of a new cost forecast feature for Google Cloud Billing. This feature makes it easier to see at a glance how your costs are trending and how much you are projected to spend. You can now forecast your end-of-month costs for whatever bucket of spend is important to you, from your entire billing account down to a single SKU in a single project.

View your current and forecasted costs


Get started

Cost forecast for Google Cloud Billing is now available to all accounts. Get started by navigating to your account’s billing page in the GCP console and opening the reports tab in the left-hand navigation bar.

You can learn more about the cost forecast feature in the billing reports documentation. Also, if you’re attending Google Cloud Next ‘18, check out our session on Monitoring and Forecasting Your GCP Costs.

Related content

Last month today: GCP in June

In June, we had a lot to discuss about getting the most out of the cloud for your business, from speeding up web traffic to running fully managed apps easily. Here’s a quick look at some of the highlights from Google Cloud Platform (GCP) news this month.

What caught your attention this month

Some of the most-read stories this month reflected new technology developments or integrations that will be useful for developers and engineers.
  • You can now deploy your Node.js app to the Google App Engine standard environment, and based on readership, many of you are excited about this. Node.js works easily on App Engine, without any language, module or API restrictions. You’ll get fast deployment times and a fully managed experience once you’ve deployed those apps, just as with other apps on App Engine.
  • QUIC is a transport protocol, optimized for HTTPS, that makes web traffic run faster. The protocol itself isn’t new, but last month we announced QUIC support for our HTTPS load balancers. Network performance is a huge part of a successful public cloud operation, so this new support could make a big impact on web page load times for your cloud services. Enabling QUIC means your connections can be established faster, which is especially useful for latency-prone connections, and clients who don’t yet support QUIC will seamlessly continue to use HTTPS.
  • If you’re a Kubernetes fan, you may have already explored the new kubemci command-line interface (CLI). It lets you configure ingress for multi-cluster Kubernetes Engine environments, using Cloud Load Balancer. It’s also the first step in a long-term solution that will consist of a multi-cluster ingress system controlled via kubectl CLI or Kubernetes API calls.

Hot topics

You can now run your GCP workloads in Finland to improve availability and reduce your latency in the Nordics, and we announced that the Los Angeles region will open next month.

We also added some new storage tools to your arsenal. We’re adding Cloud Filestore as a GCP storage option so you can run enterprise applications that need a file system interface and shared file system for data. It’s fully managed and offers high performance for applications that need low latency and high throughput. For those of you supporting and running creative industry applications on GCP infrastructure, Cloud Filestore works great for render farms, website hosting and content management systems.

In addition, the Transfer Appliance became generally available in June, allowing a type of cloud data migration that will work well if you’ve got more than 20TB of data to upload to GCP, or that would take more than a week to upload. In early use, Transfer Appliance customers have gotten quick starts on analytics projects by moving test data to GCP, along with moving backup data and some or all of a data center to GCP.

And in the “Cloud powers some very cool projects” category, take a look at how the new Dragon Ball Legends game creator built the backend on GCP. Bandai Namco Entertainment knew that players of the latest addition to their Dragon Ball Z franchise would want to play against one another in real-time, with players around the globe. They turned to GCP for the scalability, global reach and real-time analytics they needed to make that possible.

Behind the compute curtain

This news of sole-tenant nodes for Google Compute Engine will come in handy for those of you at companies that need dedicated cloud servers. With this option, you launch new VM instances as usual, but on server capacity dedicated to you. That’s useful for industries with strict compliance and regulatory rules around data, and it also lets you get higher utilization from VM instances while controlling instance placement, either manually or automatically through Compute Engine.

Building applications on GCP involves some upfront choices for app developers: Which compute offering will you pick, and what language will you use? Whether you’re a fan of VMs, containers, App Engine or Cloud Functions, you’ll find in this post some concrete examples of the time and effort involved in building a “Hello, World” app on each of GCP’s four compute platforms.

That’s a wrap for June. This month brings the Next ‘18 conference, July 24-26. Join us and thousands of other IT practitioners in San Francisco to learn all you need to know about building a modern cloud infrastructure. Till then, build away!

Kubernetes 1.11: a look from inside Google



Congratulations to everyone involved in the recent Kubernetes 1.11 release. Now that the core has been stabilized, we here at Google have been focusing our upstream work on increasing Kubernetes’ pluggability, i.e., moving more pieces out into other repositories. As the project has matured, adding a plugin no longer means "sending Tim Hockin a pull request," but instead means creating proper, well-defined interfaces with names like CNI, CRI and CSI. In fact, this maturity and extensibility has been one of the things that helps us make Google Kubernetes Engine an enterprise-ready platform. Back in March, we gave you a look at what was new in Kubernetes 1.10. Now, with the release of 1.11, let’s take a look at the core Kubernetes work that Google is driving, as well as some of the innovation we've built on Kubernetes’ foundations in the last three months.

New features in 1.11

Priority and preemption
Pod priority and preemption is one of the main features of our internal scheduling system that lets us achieve high resource utilization in our data centers. We wrote about that key use case when we introduced it in Alpha in Kubernetes 1.9, and since then, we’ve added improved scheduling performance and better support for critical system pods. Now, we're pleased to move it to Beta in this release, meaning it’s enabled by default in Kubernetes Engine clusters that run 1.11. This is a feature that many users who run larger clusters have been waiting for!

Changes to CRDs
Custom Resource Definitions (CRDs) are one of the most popular extension mechanisms for Kubernetes, and new features in 1.11 make them even more powerful. CRDs are used for a broad array of Kubernetes extensions, for example to enable the use of Spark or Functions natively through the Kubernetes API.

Kubernetes objects have a schema version (e.g. v1beta1 or v1), but we only ever store one version in the etcd database. When you query an object at a particular version, a server-side conversion is done to convert the object to match the schema of the version you request.

Previously, CRD authors had to delete and recreate resources to move them between different versions. In 1.11, you can now define multiple versions for your own resources. The next step will be to enable server-side conversion for CRD, to allow for schema changes like renaming fields, without breaking existing clients.

Cloud Provider plugins
Google continues to invest in the long-term sustainability and multi-cloud portability of core Kubernetes. The Cloud Provider interface allows infrastructure providers to deliver a "batteries-included" experience for user workloads on their platform, powering common services like dynamic provisioning and management of storage and external load balancing for Services.

This code is currently compiled into the Kubernetes core binaries. Google is leading a long-running effort to extract this functionality into provider-specific repositories, in order to reduce the scope of the Kubernetes core. This will also allow providers to deliver enhancements and fixes to users more quickly than Kubernetes’ three-month release cadence. As part of this work, we’re excited to announce the creation of SIG Cloud Provider to provide technical oversight and governance for the effort.

New features not in 1.11

That's not a headline you normally see, right?

One thing that is not in 1.11 — not even a bit of it — is Server-side Apply, a feature which moves the logic for kubectl apply from the client to server, making the expected behavior clearer, and allowing more clients to take advantage of server-side processing without shelling out to kubectl.

Normally, a feature like this would be committed to the project as it was built. But if a release came due and the feature wasn't ready, a large amount of effort would have to go into reverting it. Instead, Google has been leading the effort to introduce feature branches in Kubernetes, which let us work on long-running features in parallel to the main codebase. This lets us avoid last-minute scrambles to adjust for surprises, and it's an example of how we're working to ensure the stability of the Kubernetes project.

Work on server-side apply is happening in the open in its feature branch, and we look forward to welcoming it into Kubernetes when it's ready — and not a moment before.

Kubernetes ecosystem work
Our work with Kubernetes doesn't stop at releasing core binaries every three months. Some of the work we are most excited about is in the form of extensions we've released since the last Kubernetes release:

Kustomize
We've thought a lot about how to declaratively manage application configuration. A common pattern that we saw was the use of templating solutions such as Helm (based on Google Cloud's Deployment Manager), which requires a user to learn a different configuration language than what the API server returns when you query it. A templating approach also means that if you download a YAML example, you have to turn it into a template before you can use it in your environment.

With kustomize, we're introducing a new approach to application definition. Kustomize allows you to apply overlays to existing YAML configurations, so you can customize a forked repository with your local changes, or define 'staging' and 'production' variants with different settings and replica counts.

Kustomize is well suited for a GitOps-style workflow, where there's a common base configuration that is tweaked in various directions with overlays to create different variants. The base and overlays can be managed by separate teams in different repositories.
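As a rough sketch of that workflow (the directory layout below is illustrative, not something kustomize mandates), you keep a shared base plus one overlay per environment, then render and apply the variant you want:
# layout (illustrative):
#   base/kustomization.yaml                  - shared Deployment and Service
#   overlays/staging/kustomization.yaml      - staging patches and replica counts
#   overlays/production/kustomization.yaml   - production patches and replica counts
kustomize build overlays/production | kubectl apply -f -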

Application API
Applications are made up of many services and resources, but the whole is more than the sum of its parts. Once those parts are created, however, Kubernetes has no well-defined way of identifying which of them belong to a given application. We want cluster users to be able to think in terms of their applications, and to allow tools and UIs to define, update and display an application-centric view of your cluster.

The new Application API provides a way to aggregate Kubernetes components (e.g. Services, Deployments, StatefulSets, Ingresses, CRDs), and manage them as a group.

We have had contributions from friends at Samsung, Bitnami, Heptio, Red Hat and more, and we are looking for more contributions and feedback to ensure that the project adds value across the community.

The Application API is currently in Alpha. We hope to promote it to Beta in the next few weeks, and you'll hear more about it from us then.

Looking forward to Kubernetes Engine

If you'd like to get access to Kubernetes 1.11 on Kubernetes Engine ahead of general availability, please complete this form.

And if you liked reading this post, you'll love the Kubernetes Podcast from Google, which I co-host with Adam Glick. Every Tuesday we take a look at the week’s news and talk with Googlers or members of the wider Kubernetes community. So far we've spoken about product launches, processes and community, and this week we talk to the Kubernetes 1.11 release leads. Subscribe now!

Partner Interconnect now generally available



We are happy to announce that Partner Interconnect, launched in beta in April, is now generally available. Partner Interconnect lets you connect your on-premises resources to Google Cloud Platform (GCP) from the partner location of your choice, at a data rate that meets your needs.

With general availability, you can now receive an SLA for Partner Interconnect connections if you use one of the recommended topologies. If you were a beta user with one of those topologies, you will automatically be covered by the SLA. Charges for the service start with GA (see pricing).

Partner Interconnect is ideal if you want physical connectivity to your GCP resources but cannot connect at one of Google’s peering locations, or if you want to connect with an existing service provider. If you need help understanding the connection options, the information here can help.

In this blog we will walk through how to get started with Partner Interconnect, from choosing the partner that works best for you all the way through deploying and using your interconnect.


Choosing a partner


If you already have a service provider partner for network connectivity, you can check the list of supported service providers to see if they offer Partner Interconnect service. If not, you can select a partner from the list based on your data center location.

Some critical factors to consider are:
  • Make sure the partner can offer the availability and latency you need between your on-premises network and their network.
  • Check whether the partner offers Layer 2 connectivity, Layer 3 connectivity, or both. If you choose a Layer 2 Partner, you have to configure and establish a BGP session between your Cloud Routers and on-premises routers for each VLAN attachment that you create. If you choose a Layer 3 partner, they will take care of the BGP configuration.
  • Please review the recommended topologies for production-level and non-critical applications. Google provides a 99.99% (with Global Routing) or 99.9% availability SLA, and that only applies to the connectivity between your VPC network and the partner's network.

Bandwidth options and pricing


Partner Interconnect provides flexible options for bandwidth between 50 Mbps and 10 Gbps. Google charges on a monthly basis for VLAN attachments depending on capacity and egress traffic (see options and pricing).

Setting up Partner Interconnect VLAN attachments


Once you’ve established network connectivity with a partner, and they have set up interconnects with Google, you can set up and activate VLAN attachments using these steps:
  1. Create VLAN attachments (see the example gcloud command after this list).
  2. Request provisioning from the partner.
  3. If you have a Layer 2 partner, complete the BGP configuration and then activate the attachments for traffic to start. If you have a Layer 3 partner, simply activate the attachments, or use the pre-activation option.
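As a rough illustration of step 1, you can create a VLAN attachment with gcloud; the attachment, region, router and availability-domain values below are placeholders for your own:
gcloud compute interconnects attachments partner create my-attachment \
    --region us-central1 \
    --router my-router \
    --edge-availability-domain availability-domain-1
You then retrieve the attachment's pairing key (for example, with gcloud compute interconnects attachments describe) and hand it to your partner when requesting provisioning in step 2.
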
With Partner Interconnect, you can connect to GCP where and how you want to. Follow these steps to easily access your GCP compute resources from your on-premises network.

Related content


Try full-stack monitoring with Stackdriver on us



In advance of the new simplified Stackdriver pricing that will go into effect on June 30, we want to make sure everyone gets a chance to try Stackdriver. That’s why we’ve decided to offer the full power of Stackdriver, including premium monitoring, logging and application performance management (APM), to all customers—new and existing—for free until the new pricing goes into effect. This offer will be available starting June 18.

Stackdriver, our full-stack logging and monitoring tool, collects logs and metrics, as well as other data from your cloud apps and other sources, then generates useful dashboards, charts and alerts to let you act on information as soon as you get it. Here’s what’s included when you try Stackdriver:
  • Out-of-the-box observability across all of the Google Cloud Platform (GCP) and Amazon Web Services (AWS) services you use
  • Platform, system, application and custom metrics on demand with Metrics Explorer
  • Uptime checks to monitor the availability of the internet-facing endpoints you depend on
  • Alerting policies to let you know when something is wrong. Alerting and notification options, previously available only on the premium tier, are now available for free during this limited time
  • Access to logging and APM features like logs-based metrics, using Trace to understand application health, debugging live with debugger and more
Want to estimate your usage once the new pricing goes into effect? Check out our earlier blog post on viewing and managing your costs. You’ll see the various ways you can estimate usage to plan for the best use of Stackdriver monitoring in your environment. And if you are not already a Stackdriver user, you can sign up to try Stackdriver now!

Related content:

Introducing QUIC support for HTTPS load balancing



For four years now, Google has been using QUIC, a UDP-based encrypted transport protocol optimized for HTTPS, to deliver traffic for our products – from Google Web Search, to YouTube, to this very blog. If you’re reading this in Chrome, you’re probably using QUIC right now. QUIC makes the web faster, particularly for slow connections, and now your cloud services can enjoy that speed: today, we’re happy to be the first major public cloud to offer QUIC support for our HTTPS load balancers.

QUIC’s key features include establishing connections faster, stream-based multiplexing, improved loss recovery, and no head-of-line blocking. QUIC is designed with mobility in mind, and supports migrating connections from WiFi to Cellular and back.

Benefits of QUIC


If your service is sensitive to latency, QUIC will make it faster because of the way it establishes connections. When a web client uses TCP and TLS, it requires two to three round trips with a server to establish a secure connection before the browser can send a request. With QUIC, if a client has talked to a given server before, it can start sending data without any round trips, so your web pages will load faster. How much faster? On a well-optimized site like Google Search, connections are often pre-established, so QUIC’s faster connections can only speed up some requests—but QUIC still improves mean page load time by 8% globally, and up to 13% in regions where latency is higher.

Cedexis benchmarked our Cloud CDN performance using a Google Cloud project. Here’s what happened when we enabled QUIC.

Encryption is built into QUIC, using AEAD algorithms such as AES-GCM and ChaCha20 for both privacy and integrity. QUIC authenticates the parts of its headers that it doesn’t encrypt, so attackers can’t modify any part of a message.

Like HTTP/2, QUIC multiplexes multiple streams into one connection, so that a connection can serve several HTTP requests simultaneously. But HTTP/2 uses TCP as its transport, so all of its streams can be blocked when a single TCP packet is lost—a problem called head-of-line blocking. QUIC is different: Loss of a UDP packet within a QUIC connection only affects the streams contained within that packet. In other words, QUIC won’t let a problem with one request slow the others down, even on an unreliable connection.

Enabling QUIC

You can enable QUIC in your load balancer with a single setting in the GCP Console. Just edit the frontend configuration for your load balancer and enable QUIC negotiation for the IP and port you want to use, and you’re done.

You can also enable QUIC using gcloud:
gcloud compute target-https-proxies update proxy-name \
    --quic-override=ENABLE
Once you’ve enabled QUIC, your load balancer negotiates QUIC with clients that support it, like Google Chrome and Chromium. Clients that do not support QUIC continue to use HTTPS seamlessly. If you distribute your own mobile client, you can integrate Cronet to gain QUIC support. The load balancer translates QUIC to HTTP/1.1 for your backend servers, just like traffic with any other protocol, so you don’t need to make any changes to your backends—all you need to do is enable QUIC in your load balancer.
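To confirm the setting afterwards, you can inspect the proxy; this is a minimal sketch that reuses the same proxy-name as above:
gcloud compute target-https-proxies describe proxy-name \
    --format="value(quicOverride)"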

The Future of QUIC

We’re working to help QUIC become a standard for web communication, just as we did with HTTP/2. The IETF formed a QUIC working group in November 2016, which has seen intense engagement from IETF participants, and is scheduled to complete v1 drafts this November. QUIC v1 will support HTTP over QUIC, use TLS 1.3 as the cryptographic handshake, and support migration of client connections. At the working group’s most recent interop event, participants presented over ten independent implementations.

QUIC is designed to evolve over time. A client and server can negotiate which version of QUIC to use, and as the IETF QUIC specifications have become more stable and members have reached clear consensus on key decisions, we’ve used that version negotiation to keep pace with the current IETF drafts. Planned future versions will also include features such as partial reliability, multipath, and support for non-HTTP applications like WebRTC.

QUIC works across changing network connections. QUIC can migrate client connections between cellular and Wifi networks, so requests don’t time out and fail when the current network degrades. This migration reduces the number of failed requests and decreases tail latency, and our developers are working on making it even better. QUIC client connection migration will soon be available in Cronet.

Try it out today

Read more about QUIC in the HTTPS load balancing documentation and enable it for your project(s) by editing your HTTP(S) load balancer settings. We look forward to your feedback!

Introducing improved pricing for Preemptible GPUs



Not everyone needs the extra performance that GPUs bring to a compute workload, but those who do, really do. Earlier this year, we announced that you could attach GPUs to Preemptible VMs on Google Compute Engine and Google Kubernetes Engine, lowering the price of using GPUs by 50%. Today, Preemptible GPUs are generally available (GA) and we’ve lowered preemptible prices on our entire GPU portfolio to be 70% cheaper than GPUs attached to on-demand VMs.

Preemptible GPUs are ideal for customers with short-lived, fault-tolerant and batch workloads such as machine learning (ML) and high-performance computing (HPC). Customers get access to large-scale GPU infrastructure and predictable low pricing, without having to bid on capacity. GPUs attached to Preemptible VMs are the same as equivalent on-demand resources with two key differences: Compute Engine may shut them down after providing you a 30-second warning, and you can use them for a maximum of 24 hours. Any GPUs attached to a Preemptible VM instance will be considered Preemptible and will be billed at the lower rate.

We offer three different GPU platforms to choose from, making it easy to pick the right GPU for your workload.


GPU Hourly Pricing *

GPU model    Standard (prices vary by location)    Previous Preemptible (all locations)    New Preemptible (all locations)
             $2.48                                 $1.24                                   $0.74
             $1.46                                 $0.73                                   $0.43
             $0.45                                 $0.22                                   $0.135

* GPU prices are listed as an hourly rate per GPU attached to a VM, billed by the second. Prices listed are for US regions; prices for other regions may differ. Additional Sustained Use Discounts of up to 30% apply to non-preemptible GPU usage only.



Combined with custom machine types, Preemptible VMs with Preemptible GPUs let you build your compute stack with exactly the resources you need, and no more. Attaching Preemptible GPUs to custom Preemptible VMs allows you to reduce the amount of vCPU or host memory for your GPU VM, saving even more compared to predefined VM shapes. Additionally, customers can use Preemptible Local SSD for a low-cost, high-performance storage option alongside Preemptible GPUs. Check out this pricing calculator to configure your own preemptible environment.

The use-case for Preemptible GPUs
Hardware-accelerated infrastructure is in high demand among innovators, researchers, and academics doing machine learning research, particularly when coupled with the low, predictable pricing of Preemptible GPUs.

“Preemptible GPUs have been instrumental in enabling our research group to process large video collections at scale using our Scanner open-source platform. The predictable low cost makes it feasible for a single grad student to repeatedly deploy hundreds of GPUs in ML-based analyses of 100,000 hours of TV news video. This price drop enables us to perform twice the amount of processing with the same budget."
- Kayvon Fatahalian, Assistant Professor, Stanford University

Machine Learning Training and Preemptible GPUs
Training ML workloads is a great fit for Preemptible VMs with GPUs. Kubernetes Engine and Compute Engine’s managed instance groups allow you to create dynamically scalable clusters of Preemptible VMs with GPUs for your large compute jobs. To help deal with Preemptible VM terminations, TensorFlow’s checkpointing feature can be used to save and restore work progress. An example and walk-through are provided here.

Getting Started
To get started with Preemptible GPUs in Google Compute Engine, simply append --preemptible to your instance create command in gcloud, set scheduling.preemptible to true in the REST API, or set Preemptibility to "On" in the Google Cloud Platform Console, and then attach a GPU as usual. You can use your regular GPU quota to launch Preemptible GPUs or, alternatively, you can request a special Preemptible GPUs quota that only applies to GPUs attached to Preemptible VMs. Check out our documentation to learn more. To learn how to use Preemptible GPUs with Google Kubernetes Engine, head over to our Kubernetes Engine GPU documentation.
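For example, a minimal gcloud sketch looks like the following; the instance name, zone, machine type and GPU model are placeholders, so adjust them to what's available in your region:
gcloud compute instances create my-preemptible-gpu-vm \
    --zone us-central1-b \
    --machine-type n1-standard-8 \
    --accelerator type=nvidia-tesla-v100,count=1 \
    --maintenance-policy TERMINATE \
    --preemptible
As with any GPU instance, you still need GPU drivers installed on the VM (or an image that includes them) before your workload can use the attached GPU.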

For a certain class of workloads, Google Cloud GPUs provide exceptional compute performance. Now, with new low Preemptible GPU pricing, we invite you to see for yourself how easy it is to get the performance you need, at the low, predictable price that you want.

What DBAs need to know about Cloud Spanner, part 1: Keys and indexes



Cloud Spanner is a relational and horizontally scalable database service, built from a cloud/distributed design perspective. It brings efficiency and high availability for developers and database administrators (DBAs), and differs structurally from typical databases you’re used to. In this blog series, we’ll explore some of the key differences that DBAs and developers will encounter as you migrate from traditional vertically-scaling (scale-up) relational database management systems (RDBMS) and move to Cloud Spanner. We will discuss some of the dos-and-don'ts, best practices and why things are different in Cloud Spanner.

In this series, we will explore a range of topics, including:
  • Selection of keys and use of indexes
  • How to approach business logic
  • Importing and exporting data
  • Migrating from your existing RDBMS
  • Optimizing performance
  • Access control and logging
You’ll gain an understanding of how to best use Cloud Spanner and release its potential to achieve linearly scalable performance over massive databases. In this first installment, let’s start with a closer look at how the concepts of keys and indexes work in Cloud Spanner.

Choosing keys in Cloud Spanner

Just like in other databases, the choice of key is vitally important to optimize the performance of the database. It’s even more important in Cloud Spanner, due to the way its mechanisms distribute database load. Unlike traditional RDBMS, you’ll need to take care when choosing the primary keys for the tables and choosing which columns to index.

Using well-distributed keys results in a table whose size and performance can scale linearly with the number of Cloud Spanner nodes, while using poorly distributed keys can result in hotspots, where a single node is responsible for the majority of reads and writes to the table.

In a traditional vertically scaled RDBMS, there is a single node that manages all of the tables. (Depending on the installation, there may be replicas that can be used for reading or for failover). This single node therefore has full control over the table row locks, and the issuing of unique keys from a numeric sequence.

Cloud Spanner is a distributed system, with many nodes reading and writing to the database at any one time. However, to achieve scalability, along with global ACID transactions and strong consistency, only one node at any one time can have write responsibility for a given row.

Cloud Spanner distributes management of rows across multiple nodes by breaking up each table into several splits, using ranges of the lexicographically sorted primary key.

This enables Cloud Spanner to achieve high availability and scalability, but it also means that using any continuously increasing or decreasing sequence as a key is detrimental to performance. To explain why, let’s explore how Cloud Spanner creates and manages its table splits.

Table splits and key choice

Cloud Spanner manages splits using Paxos (you can learn more about how in this detailed documentation: Life of Cloud Spanner Reads & Writes and Spanner: Google's Globally Distributed Database). In a regional instance of Cloud Spanner, the responsibility for reading/writing each split is distributed across a group of three nodes, one in each of the three availability zones of the Cloud Spanner instance.

One node in this group of three is elected Split Leader and manages the writes and locks for all the rows in the split. All three nodes in the group can perform reads.

To create a visual example, consider a table with 600 rows that uses a simple, continuous, monotonically increasing integer key (as is common in traditional RDBMS), broken into six splits, running on a two-node (per zone) Cloud Spanner instance. In an ideal situation, the table will have six splits, the leaders of which will be the six individual nodes available in the instance.

This distribution would result in ideal performance when reading/updating the rows, provided that the reads and updates are evenly distributed across the key range.

Problems with hotspots


The problem arises when new rows are appended to the database. Every new row will have an incrementing ID and will be added to the last split, meaning that out of the six available nodes, only one node will be handling all of the writes. In the example above, node 2c would be handling all writes. This node then becomes a hotspot, limiting the overall write performance of the database. In addition, the row distribution becomes unbalanced, with the last split becoming significantly larger, so it’s then handling more row reads than the others.

Cloud Spanner does try to compensate for uneven load by adding and removing splits in the background according to read and write load, and by creating a new split once the split size crosses a set threshold. However, in a frequently appended table, this will not happen quickly enough to avoid creating a hotspot.

Along with monotonically increasing or decreasing keys, this issue also affects tables that are indexed by any deterministic key—for example, an increasing timestamp in an event log table. Timestamp-keyed tables are also more likely to have a read hotspot because, in most cases, recently timestamped rows are accessed more frequently than the others. (Check out Cloud Spanner — Choosing the right primary keys for detailed information on detecting and avoiding hotspots.)

Problems with sequence generators

The concept of sequence generators, or lack thereof, is an important area to explore further. Traditional vertical RDBMS have integrated sequence generators, which create new integer keys from a sequence during a transaction. Cloud Spanner cannot have this due to its distributed architecture, as there would either be race conditions between the split leader nodes when inserting new keys, or the table would have to be globally locked when generating a new key, both of which would reduce performance.

One workaround could be that the key is generated by the application (for example, by storing the next key value in a separate table in the database, or by getting the current maximum key value from the table). However, you’ll run into the same performance problems. Consider that as the application is also likely to be distributed, there may be multiple database clients trying to append a row at the same time, with two potential results depending on how the new key is generated:
  • If the SELECT for the existing key is performed in the transaction, one application instance trying to append would block all other application instances trying to append due to row locking.
  • If the SELECT for the existing key is done outside of the transaction, then there is a race between each of the application instances trying to append the new row. One will succeed, while others would have to retry (including generating a new key) after the append fails, since the key already exists.

What makes a good key

So if sequential keys will limit database performance in Cloud Spanner, what’s a good key to use? Ideally, the high-order bits should be evenly and semi-randomly distributed when keys are generated.

One simple way to generate such a key is to use random numbers, such as a random universally unique identifier (UUID). Note that there are several classes of UUID: versions 1 and 2 use deterministic prefixes, such as timestamps or MAC addresses. Ensure that the UUID generation method you use is truly randomly distributed (i.e., v4), at least over the higher-order bytes. This ensures that the keys are evenly distributed across the keyspace, and hence that the load is distributed evenly over the Cloud Spanner nodes.

Although another approach might be to use some real-world attributes of the data that are immutable and evenly distributed over the key range, this is quite a challenge since most uniformly distributed attributes are discrete and not continuous. For example, the random result of a dice roll is uniformly distributed and has six finite values. A continuous distribution could rely on an irrational number, for example π.

What if I really need an integer sequence as a key?

Though it’s not recommended, in some circumstances an integer sequence key is necessary, either for legacy or for external reasons, e.g., an employee ID.

To use an integer sequence key, you’ll first need a sequence generator that’s safe across a distributed system. One way of doing this is to have a table in Cloud Spanner contain a row for each required sequence that contains the next value in the sequence—so it would look something like this:
CREATE TABLE Sequences (
     Sequence_ID STRING(MAX) NOT NULL, -- The name of the sequence
     Next_Value INT64 NOT NULL
) PRIMARY KEY (Sequence_ID)
When a new ID value is required, the next value of the sequence is read, incremented and updated in the same transaction as the insert for the new row.

Note that this will limit performance when many rows are inserted, as each insert will block all other inserts due to the update of the Sequences table that we created above.

This performance issue can be reduced—though at the cost of possible gaps in the sequence—if each application instance reserves a block of, for example, 100 sequence values at once by incrementing Next_Value by 100, and then manages issuing individual IDs from that block internally.

In the table using the sequence, the key cannot simply be the numeric sequence value itself, as that will cause the last split to become a hotspot (as explained previously). So the application must generate a complex key that randomly distributes the rows among the splits.

This is known as application-level sharding and is achieved by prefixing the sequential ID with an additional column containing a value that’s evenly distributed among the key space—e.g., a hash of the original ID, or bit-reversing the ID. That looks something like this:
CREATE TABLE Table1 (
     Hashed_Id INT64 NOT NULL, 
     Id INT64 NOT NULL,
     -- other columns with data values follow....
) PRIMARY KEY (Hashed_Id, Id)
Even a simple cyclic redundancy check (CRC32) checksum is good enough to provide a suitably pseudo-random Hashed_Id. It does not have to be cryptographically secure, just enough to randomize the row order of the sequentially numbered keys.

Note that whenever a row is read directly, both the ID and Hashed_Id must be specified to prevent a table scan, as in this example:
SELECT * FROM Table1 t1
WHERE t1.Hashed_Id = 0xDEADBEEF
      AND t1.Id = 1234
Similarly, whenever this table is joined with other tables in the query by Id, the join must also use both the ID and the Hashed_Id. Otherwise, you’ll lose performance, since a table scan will be required to find the row. This means that the table that references the ID must also include the Hashed_Id, like this:
CREATE TABLE Table2 (
     Id String(MAX),  -- UUID
     Table1_Hashed_Id INT64 NOT NULL, 
     Table1_Id INT64 NOT NULL,
     -- other columns with data values follow....
) PRIMARY KEY (Id)

SELECT * from Table2 t2 INNER JOIN Table1 t1 
     ON t1.Hashed_Id = t2.Table1_Hashed_Id
     AND t1.Id = t2.Table1_Id
WHERE ... -- some criteria

What if I really need to use a timestamp as a key?

In many cases, the row using the timestamp as a key also refers to some other table data. For example, the transactions on a bank account will refer to the source account. In this case, assuming that the source account number is already reasonably evenly distributed, you can use a complex key containing the account number first and then the timestamp:
CREATE TABLE Transactions (
     account_number INT64 NOT NULL,
     timestamp TIMESTAMP NOT NULL,
     transaction_info ...,
) PRIMARY KEY (account_number, timestamp DESC)
The splits will be made primarily using the account number and not the timestamp, thus distributing the newly added rows over various splits.

Note that in this table, the timestamp is sorted by descending order. That’s because in most cases you want to read the most recent transactions—which will be first in the table—so you won’t need to scan through the entire table to find the most recent rows.

If you do not have, or cannot use, an external reference or any other data in the key to distribute the rows, then you will need to perform application-level sharding, as shown in the integer sequence example above.

Note, however, that using a simple hash will make queries by timestamp range extremely slow, since retrieving a range of timestamps will require a full table scan to cover all the hashes. Instead, we recommend generating a ShardId from the timestamp. So, for example,
TimestampShardId = CRC32(Timestamp) % 100
will return a pseudo-random value between 0 and 99 from the timestamp. Then, you can use this ShardId in the table key so that sequential timestamps are distributed across multiple splits, like so:
CREATE TABLE Events (
     TimestampShardId INT64 NOT NULL,
     Timestamp TIMESTAMP NOT NULL,
     event_info...
) PRIMARY KEY (TimestampShardId, Timestamp DESC)
For example, a table with dates of the first 10 days of 2018 (which without ShardId would be stored in the table in date order) will give the following ordering:

When a query is made, you must use a BETWEEN clause to be able to select across all shards without performing a table scan:
Select * from Events
WHERE
   TimestampShardId BETWEEN 0 AND 99
   AND Timestamp > @lower_bound
   AND Timestamp < @upper_bound;
Note that the ShardId is only a way of improving key distribution so that Cloud Spanner can use multiple splits to store sequential timestamps. It does not identify an actual database split, and rows in different tables with the same ShardId may well be in different splits.

Migration implications

When you’re migrating from an existing RDBMS that uses keys that are not optimal for Cloud Spanner, take the above considerations into account. If necessary, add key hashes to tables or change the key ordering.

Deciding on indexes in Cloud Spanner

In a traditional RDBMS, indexes are very efficient ways of looking up rows in a table by a value that is not the primary key. Under most circumstances, a row lookup via an index will take approximately the same time as a row lookup via its key. That’s because the table and the index are managed by a single node, so the index can point directly to the on-disk row of the table.

In Cloud Spanner, indexes are actually implemented using tables, which allows them to be distributed and enables the same degree of scalability and performance as normal tables.

However, because of this type of implementation, using indexes to read the data from the table row is less efficient than in a traditional RDBMS. It’s effectively an inner join with the original table, so reading from a table using an indexed key turns into this process:
  • Look up split for index key
  • Read index row from split to get table key
  • Look up split for table key
  • Read table row from split to get row values
  • Return row values
Note that there is no guarantee that the split for the index key is on the same node as the split for the table key, so a simple index query may require cross-node communication just to read one row.

Similarly, updating an indexed table will most likely require a multi-node write to update the table row and the index row. So using an index in Cloud Spanner is always a trade-off between improved read performance and reduced write performance.

Index keys and hotspots

Because indexes are implemented as tables in Cloud Spanner, you’ll encounter the same issues with the indexed columns as you did with the table keys: An index on a column with poorly distributed values (such as a timestamp) will lead to the creation of a hotspot, even if the underlying table is using well-distributed keys. That’s because when rows are appended to the table, the index will also have new rows appended, and writes for these new rows will always be sent to the same split.

Therefore, care must be taken when creating indexes, and we recommend that you create indexes only using columns which have a well-distributed set of values, just like when choosing a table key.

In some cases, you’ll need to do application-level sharding for the indexed columns in order to create a synthetic ShardId column, which can be used in the index to distribute values over the splits.

For example, this configuration below will create a hotspot when appending events due to the index, even if UserId is randomly distributed.
CREATE TABLE Events (
      UserId String(MAX),
      Timestamp TIMESTAMP,
      EventData)
PRIMARY KEY (UserId, Timestamp DESC);

CREATE INDEX EventsByTimestamp ON Events (Timestamp DESC);
As with a table keyed by timestamp only, a synthetic ShardId column will need to be added to the table, and then used as the first indexed column to help the distribution of the index among the splits.

A simple ShardId generator could be:
TimestampShardId = CRC32(Timestamp) % 100
which will give a hash value between 0 and 99 from the timestamp. You’ll need to add this to the original table as a new column, then use it as the first index key, like this:
CREATE TABLE Events (
     UserId String(MAX),
     Timestamp TIMESTAMP,
     TimestampShardId INT64, 
     EventData)
PRIMARY KEY (UserId, Timestamp DESC);

CREATE INDEX EventsByTimestamp ON Events (TimestampShardId,Timestamp);
This will remove the hotspot on index update, but will slow down range queries on timestamp, since you’ll have to run the query for each ShardId value (0-99) to get the timestamp range from all shards:
Select * from Events@{FORCE_INDEX=EventsByTimestamp}
WHERE
   TimestampShardId BETWEEN 0 AND 99
   AND Timestamp > @lower_bound
   AND Timestamp < @upper_bound;
With this type of index and sharding strategy, you must strike a balance between the additional complexity of reads and the performance gains of an indexed query.

Other indexes you should know

When you’re migrating to Cloud Spanner, you’ll also want to understand how these other index types function and when you might need to use them:

NULL_FILTERED indexes

By default, Cloud Spanner indexes rows even when the indexed column's value is NULL. A NULL is considered to be the smallest possible value, so rows with NULL values appear at the start of the index.

It’s also possible to disable this behavior by using the CREATE NULL_FILTERED INDEX syntax, which will create an index ignoring rows with NULL indexed column values.

This index will be smaller than the complete index, as it will effectively be a materialized filtered view on the table, and will be faster to query than the full table when a table scan is necessary.

UNIQUE indexes

You can use a UNIQUE index to enforce that a column of a table has unique values. This constraint will be applied at transaction commit time (and at index creation).

Covering Indexes and STORING clause

To optimize performance when reading from indexes, Cloud Spanner can store the column values of the table row in the index itself, removing the need to read the table. This is known as a Covering Index. This is achieved by using the STORING clause when defining the index. The values of the column can then be read directly from the index, so reading from the index performs as well as reading from the table. For example, this table contains employee data:
CREATE TABLE Employees (
      CompanyUUID INT64,
      EmployeeUUID INT64,
      FullName STRING(MAX)
            ...
) PRIMARY KEY (CompanyUUID,EmployeeUUID)
If you often need to look up an employee’s full name, for example, you can create an index on employeeUUID, storing the full name for rapid lookups:
CREATE INDEX EmployeesById 
      ON Employees (EmployeeUUID) 
      STORING (FullName);

Forcing index use

Cloud Spanner’s query engine will only automatically use indexes in rare circumstances (when the query is fully covered by the index), so it is important to use a FORCE_INDEX directive in the SQL SELECT statement to ensure that Cloud Spanner looks up values from the index. (You can find more details in the documentation.)
Select * 
from  Employees@{FORCE_INDEX=EmployeesById}
Where EmployeeUUID=xxx;
Note that when using the Cloud Spanner Read APIs, you can only perform fully covered queries, i.e., queries where the index stores all of the requested columns. To read columns from the original table using an index, you must use an SQL query. See the Use a Secondary Index section of the Getting Started docs for examples.

Continuing your Cloud Spanner education

There are some big conceptual differences when you’re using a cloud-built, horizontally scalable database like Cloud Spanner in contrast with the RDBMS you’ve been using for years. Once you’re familiar with the way keys and indexes work, you can start to take advantage of the benefits of Cloud Spanner for faster scalability.

In the next episode of this series, we will look at how to deal with business logic that would previously be implemented by triggers and stored procedures, neither of which exist in Cloud Spanner.

Want to learn more about Cloud Spanner in person? We’ll be discussing data migration and more in this session at Next 2018 in July. For more information, and to register, visit the Next ‘18 website.


Related content:

How we used Cloud Spanner to build our email personalization system—from “Soup” to nuts

A closer look at the HANA ecosystem on Google Cloud Platform



Since we announced our partnership with SAP in early 2017, we’ve rapidly expanded our support for SAP HANA, SAP’s in-memory, column-oriented, relational database management system. From the beginning, we knew we’d need to build tools that integrate SAP HANA with Google Cloud Platform (GCP) that make it faster and easier for developers and administrators to take advantage of the platform.

In this blog post, we’ll walk you through the evolution of SAP HANA on GCP and take a deeper look at the ecosystem we’ve built to support our customers.

The evolution of SAP HANA on GCP


For many enterprise customers running SAP HANA databases, instances with large amounts of memory are essential. That’s why we’ve been working to make virtual machines with larger memory configurations available for SAP HANA workloads.

Our initial work with SAP on the certification process for SAP HANA began in early 2017. In the 15 months since, we’ve rapidly evolved from instances with 208 GB of memory to 4 TB, allowing us to support larger single-node SAP HANA installations of up to 4 TB.

Smart Data Access — Google BigQuery

Google BigQuery, our serverless data warehouse, enables low cost, high performance analytics at petabyte scale. We’ve worked with SAP to natively integrate SAP HANA with BigQuery through smart data access which allows you to extend SAP HANA’s capabilities and query data stored within BigQuery by means of virtual tables. This support has been available since SAP HANA 2.0 SPS 03, and you can try it out by following this step-by-step codelabs tutorial.

Fully automated deployment of SAP HANA

In many cases, the manual deployment process can be time consuming, error prone and cumbersome. It’s very important to reduce or eliminate the margin of error and make the deployment conform to SAP’s best practices and standards.


To address this, we’ve launched deployment templates that fully automate the deployment of single node and scale-out SAP HANA configurations on GCP. In addition, we’ve also launched a new deployment automation template that creates a high availability SAP HANA configuration with automatic failover.

With these templates, you have access to fully configured, ready-to-go SAP HANA environments in a matter of minutes. You can also see the resources you created, and a complete catalog of all your deployments, in one location through the GCP console. We’ve also made the deployment process fully visible by providing deployment logs through Google Stackdriver.

Monitoring SAP HANA with Stackdriver

Visibility into what’s happening inside your SAP HANA database can help you identify factors impacting your database, and prepare accordingly. For example, a time series view of how resource utilization or latency within the SAP HANA database changes over time can help administrators plan in advance, and in many cases successfully troubleshoot issues.

Stackdriver provides monitoring, logging, and diagnostics to better understand the health, performance, and availability of cloud-powered applications. Stackdriver’s integration with SAP HANA helps administrators monitor their SAP HANA databases, notifying and alerting them so they can proactively fix issues.
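As a rough sketch of how the pieces fit together, the integration builds on the Stackdriver monitoring agent running on the database host, with an SAP HANA plugin configured on top of it (the plugin configuration is covered in the documentation below):
curl -sSO https://dl.google.com/cloudagents/install-monitoring-agent.sh
sudo bash install-monitoring-agent.sh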

More information on this integration is available in our documentation.

TensorFlow support in SAP HANA

SAP has offered support for TensorFlow Serving beginning with SAP HANA 2.0. This lets you build inference directly into SAP HANA through custom machine learning models hosted in TensorFlow Serving applications running on Google Compute Engine.

You can easily build a continuous training pipeline by exporting data in your SAP HANA database to Google Cloud Storage and then using Cloud TPUs to train deep learning models. These models can then be hosted with TensorFlow Serving to be used for inference within SAP HANA.

SAP Cloud Platform, SAP HANA as a Service

SAP HANA as a Service is now also deployable to Google Cloud Platform. The fully managed cloud service makes it possible for developers to leverage the power of SAP HANA without spending time on operational and administrative tasks. The service is especially well suited to customers whose goal is rapid innovation and reduced time to value.

SAP HANA, express edition on Google Cloud Launcher

SAP HANA, express edition is meant for developers and technologists who prefer a hands-on learning experience. A free license is included, giving users access to a large catalog of tutorial content, online courses and samples to get started with SAP HANA. Google Cloud Launcher provides a fast and effective provisioning experience for SAP HANA, express edition.

Conclusion

These developments are all part of our continuing goal to make Google Cloud the best place to run SAP applications. We’ll continue listening to your feedback, and we’ll have more updates to share in the coming months. In the meantime, you can learn more about SAP HANA on GCP by visiting our website. And if you’d like to learn about all our announcements at SAPPHIRE NOW, read our Google Cloud blog post.