Category Archives: Google Cloud Platform Blog

Product updates, customer stories, and tips and tricks on Google Cloud Platform

Add log statements to your application on the fly with Stackdriver Debugger Logpoints



In 2014 we launched Snapshots for Stackdriver Debugger, which gave developers the ability to examine their application’s call stack and variables in production with no impact on users. In the past year, developers have taken over three hundred thousand production snapshots across their services running on Google App Engine and on VMs and containers hosted anywhere.

Today we’re showing off Stackdriver Debugger Logpoints. With Logpoints, you can instantly add log statements to your production application without rebuilding or redeploying it. Like Snapshots, this is immensely useful when diagnosing tricky production issues that lack an obvious root cause. Even better, Logpoints fits into existing logs-based workflows.
Adding a logpoint is as simple as clicking a line in the Debugger source viewer and typing in your new log message (just make sure that you open the Logpoints tab in the right-hand pane first). If you haven’t synced your source code, you can add logpoints by specifying the target file and line number in the right-hand pane or via the gcloud command-line tools. Variables can be referenced by {variableName}. You can review the full documentation here.
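
For example, adding, listing and deleting a logpoint from the gcloud CLI looks roughly like this (a sketch; the file, line number and variable name here are hypothetical):

# Add a logpoint at line 45 of main.py that logs the current order ID
gcloud debug logpoints create main.py:45 "Processing order {orderId}"

# Review active logpoints, then remove one by its ID
gcloud debug logpoints list
gcloud debug logpoints delete <LOGPOINT_ID>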

Because Logpoints writes its output through your app’s existing logging mechanism, it's compatible with any logging aggregation and analysis system, including Splunk or Kibana, or you can read its output from locally stored logs. However, Stackdriver Logging customers benefit from being able to read their log output from within the Stackdriver Debugger UI.


Logpoints is already available for applications written in Java, Go, Node.js, Python and Ruby via the Stackdriver Debugger agents. As with Snapshots, this same set of languages is supported across VMs (including Google Compute Engine), containers (including Google Container Engine), and Google App Engine. Logpoints has been accessible through the gcloud command line interface for some time, and the process for using Logpoints in the CLI hasn’t changed.

Each logpoint lasts up to twenty-four hours, or until it's deleted or the application is redeployed, whichever comes first. Adding a logpoint incurs a performance cost on par with adding a log statement to your code directly. However, the Stackdriver Debugger agents automatically throttle any logpoints that negatively impact your application’s performance, as well as any logpoints or snapshots with conditions that take too long to evaluate.

At Google, we use technology like Snapshots and Logpoints to solve production problems every day to make our services more performant and reliable. We’ve heard from our customers how snapshots are the bread and butter of their problem-solving processes, and we’re excited to see how you use Logpoints to make your cloud applications better.

Partnering on open source: Google and Ansible engineers on managing GCP infrastructure



It's time for the third chapter in the Partnering on open source series. This time around, we cover some of the work we’ve done with Ansible, a popular open source IT automation engine, and how to use it to provision, manage and orchestrate Google Cloud Platform (GCP) resources.

Ansible, by Red Hat, is a simple automation language that can perfectly describe an IT application infrastructure on GCP, including virtual machines, disks, network load balancers, firewall rules and more. In this series, I'll walk you through my former life as a DevOps engineer at a satellite space-imaging company. You'll get a glimpse into how I used Ansible to update satellites in orbit, along with other critical infrastructure that serves imagery to interested viewers around the globe.
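
To give you a flavor of the language, here's a minimal playbook sketch using Ansible's gce module; the project, instance and file names are illustrative:

- name: Provision an imagery-processing VM on GCP
  hosts: localhost
  connection: local
  tasks:
    - name: Launch a Compute Engine instance
      gce:
        instance_names: imagery-worker-1
        machine_type: n1-standard-1
        image: debian-8
        zone: us-central1-a
        project_id: my-satellite-project
        credentials_file: /path/to/service-account.json
        service_account_email: ansible@my-satellite-project.iam.gserviceaccount.com
        state: present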

In this first video, we set the stage and talk about Ansible in general, before diving into hands-on walkthroughs in subsequent episodes.



Upcoming videos demonstrate how to use Ansible and GCP to:

  • Apply a camera-settings hotfix to a satellite orbiting Earth by spinning up a Google Compute Engine instance, testing the latest satellite image build and pushing the settings to the satellite.
  • Provision and manage GCP's advanced networking features like globally available load-balancers with L7 routing to serve satellite ground images on a public website.
  • Create a set of networks, routes and firewall rules with security rules to help isolate and protect the various systems involved in the imagery-processing pipeline. The raw images may contain sensitive data that must be appropriately screened and scrubbed before being added to the public image repository, so network security is critical.

The series wraps up with a demonstration of how to extend Ansible's capabilities by writing custom modules. The videos in this series make use of custom and publicly available modules for GCP.

Join us on YouTube to watch the upcoming videos, or go back and watch the other videos in the series. You can also follow Google Cloud on YouTube or @GoogleCloud on Twitter to find out when new videos are published. And stay tuned for more blog posts and videos about work we’re doing with open-source providers like Puppet, Chef, Cloud Foundry, Red Hat, SaltStack and others.

App Engine users, now you can configure custom domains from the API or CLI



As a developer, your job is to provide a professional branded experience for your users. If you’re developing web apps, that means you’ll need to host your application on its own custom domain accessed securely over HTTPS with an SSL certificate.

With App Engine, it’s always been easy to access applications from their own hostname, e.g., <YOUR_PROJECT_ID>.appspot.com, but custom domains and SSL certificates could only be configured through the App Engine component of the Cloud Platform Console.

Today, we’re happy to announce that you can now manage both your custom domains and SSL certificates using the new beta features of the Admin API and gcloud command-line tool. These new beta features provide improved management, including the ability to automate mapping domains and uploading SSL certificates.
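
For a sense of what the API side looks like, here's a hedged sketch of creating a domain mapping with the beta Admin API (the domain and project ID are placeholders; see the API reference for the authoritative request format):

curl -X POST \
    -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    -H "Content-Type: application/json" \
    -d '{"id": "www.example.com"}' \
    "https://appengine.googleapis.com/v1beta/apps/<PROJECT_ID>/domainMappings"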

We hope these new API and CLI commands will simplify managing App Engine applications, help your business scale, and ultimately, allow you to spend more time writing code.

Managing App Engine custom domains from the CLI


To get started with the CLI, first install the Google Cloud SDK.

To use the new beta commands, make sure you’ve installed the beta component:

gcloud components install beta

And if you’ve already installed that component, make sure that it's up to date:

gcloud components update

Now that you’ve installed the new beta command, verify your domain to register ownership:

gcloud beta domains verify <DOMAIN>
gcloud beta domains list-verified

After you've verified ownership, map that domain to your App Engine application:

gcloud beta app domain-mappings create <DOMAIN>

You can also map your subdomains this way. Note that as of today, only the verified owner can create mappings to a domain.

With the response from the last command, complete the mapping to your application by updating the DNS records of your domain.

To create an HTTPS connection, upload your SSL certificate:

gcloud beta app ssl-certificates create --display-name <CERT_DISPLAY_NAME> --certificate <CERT_DIRECTORY_PATH> --private-key <KEY_DIRECTORY_PATH>

Then update your domain mapping to include the certificate that you just uploaded:

gcloud beta app domain-mappings update <DOMAIN> --certificate-id <CERT_ID>

We're also excited to provide a single command that you can use to renew your certificate before it expires:

gcloud beta app ssl-certificates update <CERT_ID> --certificate <CERT_DIRECTORY_PATH> --private-key <KEY_DIRECTORY_PATH>

As with all beta releases, these commands should not yet be used in production environments. For complete details, please check out the full set of instructions, along with the API reference. If you have any questions or feedback, we’ll be watching the Google App Engine forum; you can also log a public issue or get in touch on the App Engine Slack channel (#app-engine).

Solutions guide: Preparing Container Engine environments for production



Many Google Cloud Platform (GCP) users are now migrating production workloads to Container Engine, our managed Kubernetes environment. You can spin up a Container Engine cluster for development and quickly start porting your applications. First and foremost, a production application must be resilient and fault-tolerant, and deployed using Kubernetes best practices. You also need to prepare the Kubernetes environment itself for production by hardening it. As part of the migration to production, you may need to lock down who or what has access to your clusters and applications, from both an administrative and a network perspective.

We recently created a guide to help you with the push toward production on Container Engine. The guide walks through various patterns and features that allow you to lock down your Container Engine workloads. The first half focuses on how to control administrative access to the cluster using IAM and Kubernetes RBAC. The second half dives into network access patterns, teaching you how to properly configure your environment and Kubernetes services. With the IAM and networking models locked down appropriately, you can rest assured that you're ready to start directing users to your new applications.
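
To give a flavor of the controls the guide covers, here's a sketch (the user, project and namespace names are illustrative) of pairing a read-only IAM role with a namespace-scoped Kubernetes RBAC binding:

# Project-level IAM: read-only visibility into clusters
gcloud projects add-iam-policy-binding <PROJECT_ID> \
    --member=user:dev@example.com --role=roles/container.viewer

# Kubernetes RBAC: read access to objects in a single namespace
kubectl create rolebinding dev-view --clusterrole=view \
    --user=dev@example.com --namespace=dev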

Read the full solution guide for using Container Engine for production workloads, or learn more about Container Engine from the documentation.

Getting started with Shared VPC



Large organizations with multiple cloud projects value the ability to share physical resources while maintaining logical separation between groups or departments. At Google Cloud Next '17, we announced Shared VPC, which allows you to configure and centrally manage one or more virtual networks across multiple projects in your Organization, the top-level Cloud Identity and Access Management (Cloud IAM) resource in the Google Cloud Platform (GCP) resource hierarchy.

With Shared VPC, you can centrally manage the creation of routes, firewalls, subnet IP ranges, VPN connections, etc. for the entire organization, and at the same time allow developers to own billing, quotas, IAM permissions and autonomously operate their development projects. Shared VPC is now generally available, so let’s look at how it works and how best to configure it.

How does Shared VPC work?

We implemented Shared VPC entirely in the management control plane, transparent to the data plane of the virtual network. In the control plane, the centrally managed project is enabled as a host project, allowing it to contain one or more shared virtual networks. After configuring the necessary Cloud IAM permissions, you can then create virtual machines in shared virtual networks by linking one or more service projects to the host project. The advantage of sharing virtual networks in this way is being able to control access to critical network resources such as firewalls and centrally manage them with less overhead.
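
In gcloud terms, a minimal setup looks something like this (project IDs are placeholders; depending on your SDK version, these commands may live under gcloud beta):

# Enable the centrally managed project as a Shared VPC host project
gcloud compute shared-vpc enable <HOST_PROJECT_ID>

# Link a service project to the host project
gcloud compute shared-vpc associated-projects add <SERVICE_PROJECT_ID> \
    --host-project <HOST_PROJECT_ID>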

Further, with shared virtual networks, virtual machines benefit from the same network throughput caps and VM-to-VM latency as when they're not on shared networks. This is also the case for VM-to-VPN and load balancer-to-VM communication.

To illustrate, consider a single externally facing web application server that uses services such as personalization, recommendation and analytics, all internally available, but built by different development teams.

Example topology of a Shared VPC setup.

Let’s look at the recommended patterns when designing such a virtual network in your organization.

Shared VPC administrator role

The network administrator of the shared host project should also have the XPN administrator role in the organization. This allows a single central group to configure new service projects that attach to the shared VPC host project, while also allowing them to set up individual subnetworks in the shared network and configure IP ranges, for use by administrators of specific service projects. Typically, these administrators would have the InstanceAdmin role on the service project.
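
Granting that role at the organization level looks roughly like this (the organization ID and user are placeholders):

gcloud organizations add-iam-policy-binding <ORG_ID> \
    --member=user:net-admin@example.com --role=roles/compute.xpnAdmin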

Subnetworks USE permission

When connecting a service project to the shared network, we recommend you grant the service project administrators the compute.subnetworks.use permission (through the NetworkUser role) on one or more subnetworks per region, such that each subnetwork is used by a single service project.

This will help ensure cleaner separation of usage of subnetworks by different teams in your organization. In the future, you may choose to associate specific network policies for each subnetwork based on which service project is using it.
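
As a sketch (the subnetwork, region and user are illustrative; at the time of writing this command is in beta), granting the NetworkUser role on a single subnetwork looks like:

gcloud beta compute networks subnets add-iam-policy-binding <SUBNET_NAME> \
    --region us-central1 \
    --member=user:service-dev@example.com \
    --role=roles/compute.networkUser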

Subnetwork IP ranges

When configuring subnetwork IP ranges in the same or different regions, allow sufficient IP space between subnetworks for future growth. GCP allows you to expand an existing subnetwork without affecting IP addresses owned by existing VMs in the virtual network and with zero downtime.
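
For example (the subnetwork name and prefix length are illustrative), expanding a subnetwork's primary range in place is a single command:

gcloud compute networks subnets expand-ip-range <SUBNET_NAME> \
    --region us-central1 --prefix-length 20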

Shared VPC and folders

When using folders to manage projects created in your organization, place all host and service projects for a given Shared VPC setup within the same folder. The host project's parent folder should sit in the parent hierarchy of the service projects, so that it contains all the projects in the Shared VPC setup. When associating service projects with a host project, ensure that these projects won't move to other folders while they're still linked to the host project.


Control external access

In order to control and restrict which VMs can have public IPs and thus access to the internet, you can now set up an organization policy that disables external IP access for VMs. Do this only for projects that should have only internal access, e.g. the personalization, recommendation and analytics services in the example above.
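
As a sketch of how this is configured (the policy file below is illustrative), you define the vmExternalIpAccess constraint in a policy file and apply it with Resource Manager:

# policy.yaml: deny external IP access for all VMs in the project
constraint: constraints/compute.vmExternalIpAccess
listPolicy:
  allValues: DENY

gcloud resource-manager org-policies set-policy policy.yaml --project=<PROJECT_ID>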

As you can see, Shared VPC is a powerful tool that can make GCP more flexible and manageable for your organization. To learn more about Shared VPC, check out the documentation.

Spinnaker 1.0: a continuous delivery platform for cloud



At Google we deploy a lot of code: tens of thousands of deployments a day, to thousands of services, seven of which have more than a billion users each around the globe. Along the way we’ve learned some best practices for deploying software at velocity -- things like automated releases, immutable infrastructure, gradual rollouts and fast rollbacks.

Back in 2014, we started working with the Netflix team that created Spinnaker, and saw in it a release management platform that embodied many of our first principles for safe, frequent and reliable releases. Excited by its potential, we partnered with Netflix to bring Spinnaker to the public, and they open-sourced it in November 2015. Since then, the Spinnaker community has grown to include dozens of organizations including Microsoft, Oracle, Target, Veritas, Schibsted, Armory and Kenzan, to name a few.

Today we’re happy to announce the release of Spinnaker 1.0, an open-source multi-cloud continuous delivery platform used in production at companies like Netflix, Waze, Target, and Cloudera, plus a new open-source command line interface (CLI) tool called halyard that makes it easy to deploy Spinnaker itself. Read on to learn what Spinnaker can do for your own software development processes.

Why Spinnaker?

Let’s look at a few of the features and new updates that make Spinnaker a great release management solution for enterprises:

Open-source, multi-cloud deployments
Here at Google Cloud Platform (GCP), we believe in an open cloud. Spinnaker, including its rich UI dashboard, is 100% open-source. You can install it locally, on-prem, or to any cloud platform, running either on a virtual machine (VM) or Kubernetes.

Spinnaker streamlines the deployment process by decoupling your release pipeline from your target cloud provider, reducing the complexity of moving from one platform to another or deploying the same application to multiple clouds.

It has built-in support for Google Compute Engine, Google Container Engine, Google App Engine, AWS EC2, Microsoft Azure, Kubernetes and OpenStack, with more added every year by the community; support for Oracle Bare Metal and DC/OS is coming soon.

Whether you’re releasing to multiple clouds or preventing vendor lock-in, Spinnaker helps you deploy your application based on what’s best for your business.

Automated releases
In Spinnaker, deployments are orchestrated using custom release pipelines, the stages of which can consist of almost anything you want -- integration or system tests, spinning a server group up or down, manual approvals, waiting a period of time, or running a custom script or Jenkins job.

Spinnaker integrates seamlessly with your existing continuous integration (CI) workflows. You can trigger pipelines from git, Jenkins, Travis CI or Docker registries, on a cron-like schedule, or even from other pipelines.

Best-practice deployment strategies
Out of the box, Spinnaker supports sophisticated deployment strategies like release canaries, multiple staging environments, red/black (a.k.a. blue/green) deployments, traffic splitting and easy rollbacks.

This is enabled in part by Spinnaker’s use of immutable infrastructure in the cloud, where changes to your application trigger a redeployment of your entire server fleet. Compare this to the traditional approach of configuring updates to running machines, which results in slower, riskier rollouts and hard-to-debug configuration-drift issues.

With Spinnaker, you simply choose the deployment strategy you want to use for each environment (e.g., red/black for staging, rolling red/black for production), and it orchestrates the dozens of steps necessary under the hood. You don’t have to write your own deployment tool or maintain a complex web of Jenkins scripts to get enterprise-grade rollouts.

Role-based authorizations and permissions
Large companies often adopt Spinnaker across multiple product areas managed by a central DevOps team. For admins who need role-based access control for a project or account, Spinnaker supports multiple authentication and authorization options, including OAuth, SAML, LDAP, X.509 certs, GitHub teams, Azure groups and Google Groups.

You can also apply permissions to manual judgments, a Spinnaker stage that requires a person’s approval before proceeding with the pipeline, ensuring that a release can’t happen without the right people signing off.

Simplified installation and management with halyard
With the release of Spinnaker 1.0, we’re also announcing the launch of a new CLI tool, halyard, that helps admins more easily install, configure and upgrade a production-ready instance of Spinnaker.

Prior to halyard and Spinnaker 1.0, admins had to manage each of the microservices that make up Spinnaker individually. Starting with 1.0, all new Spinnaker releases are individually versioned and follow semantic versioning. With halyard, upgrading to the latest Spinnaker release is as simple as running a CLI command.
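
For example, checking the available releases, pinning a version and applying it looks roughly like this (a sketch, assuming halyard is already installed and pointed at your deployment):

hal version list
hal config version edit --version 1.0.0
hal deploy apply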

Getting started

Try out Spinnaker and make your deployments fast, safe, and, dare we say, boring.

For more info on Spinnaker, visit the new spinnaker.io website and learn how to get started.

Or if you’re ready to try Spinnaker right now, click here to install and run Spinnaker with Google’s click-to-deploy option in the Cloud Launcher Marketplace.

For questions, feedback, or to engage more with the Spinnaker community, you can find us on the Spinnaker Slack channel, submit issues to the Spinnaker GitHub repository, or ask questions on Stack Overflow using the “spinnaker” tag.




Join the Intelligent App Challenge brought to you by SAP and Google Cloud



Does your organization use SAP? At SAP SAPPHIRE last month, Nan Boden, Google Cloud head of Global Technology Partners, announced the Intelligent App Challenge designed to encourage innovative integrations between the SAP and Google Cloud ecosystems, and we’re accepting submissions through August 1, 2017. Winning entries could receive up to US $20,000 in GCP credits, tickets to SAP TechEd '17 and SAP Sapphire '18, and on-stage presence at SAP TechEd '17.

Earlier this year, we announced a strategic partnership with SAP at Google Cloud Next '17 with a focus on developing and integrating Google’s best cloud and machine learning solutions with SAP enterprise applications. The partnership includes certification of the in-memory database SAP HANA on Google Cloud Platform (GCP), new G Suite integrations, Google’s machine learning capabilities and data governance collaboration. It also offers Google Cloud and SAP customers more scope, scalability and opportunities to create new products, and has already resulted in the certification of several SAP products on GCP.

The SAP + GCP collaboration allows developers to take advantage of SAP’s in-memory database running on GCP to store and index large amounts of transactional (OLTP) and analytical (OLAP) data in HANA, and combine it with GCP to use it in new ways. For example, you could build sophisticated and large-scale machine learning (ML) models without needing to transport or transform large subsets of data, or build out the ML infrastructure required to consume and analyze this information. Use Google Cloud Machine Learning tools and APIs along with SAP HANA, express edition to design intelligent business applications such as fraud detection, recommendation engines, talent engagement, intelligent campaign management, conversational interfaces, etc.

We're excited to see how the ecosystem of SAP and Google partners takes our platform and uses it to solve pressing business challenges. It’s our platform and your imagination: build solutions that solve customer problems in new and unique ways.

Entries to the Intelligent App Challenge must be built on GCP with SAP HANA, express edition. Extra consideration will be given to entries that use machine learning tools and capabilities.

Registered applicants for the Intelligent App Challenge will also have access to a number of resources and tutorials. Judges will include industry experts, developers, mentors and industry analysts.

Please visit the Intelligent App Challenge page to learn more, or register your company today.

Enhancing the Python experience on App Engine



Developers have always been at the heart of Google Cloud Platform (GCP). And with App Engine, developers can focus on writing code that powers their business and leave the infrastructure hassle to Google, freeing themselves from tasks such as server management and capacity planning. Earlier this year, we announced the general availability of App Engine flexible environment, and later announced the expansion of App Engine to the europe-west region. Today we're happy to announce additional upgrades for Python users for both App Engine flexible and standard environments.
Starting today, App Engine flexible environment users can deploy to the latest version of Python, 3.6. We first supported Python 3 on the App Engine flexible environment back in 2016, and we've continued to update the runtime as the community releases new versions of Python 3 and Python 2. We'll keep updating the runtimes as new versions become available. For a demo of deploying a simple “Hello World” Flask web application to Python 3 in under ten minutes, see the video in an earlier blog post.
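
Selecting the interpreter is just a matter of configuration; a minimal app.yaml for a Python 3.6 app on the flexible environment looks like this (the gunicorn entrypoint assumes a Flask app object named app in main.py):

runtime: python
env: flex
entrypoint: gunicorn -b :$PORT main:app

runtime_config:
  python_version: 3.6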

On App Engine standard environment, we’ve updated more than 2 million apps from Python 2.7.5 to Python 2.7.12 without any input needed from our users, and as of today, all new deployments run in this new runtime. For a demo of deploying a simple “Hello World” Flask web application to Python 2 in under a minute (it deploys in seconds and scales to millions of requests per second), see our Getting Started guide. We're committed to updating Python to the latest versions of Python 2 as they become available, and bringing the latest versions of Python 3 to the App Engine standard environment is on our roadmap. Stay tuned!

On the libraries side, App Engine flexible environment users can continue to pull in any library the application requires by simply providing a requirements.txt file during deployment. App Engine standard environment users also now have updated runtime-provided libraries. Refer to the App Engine standard documentation for the full list of built-in and third-party libraries. We'll continue updating these libraries as new versions become available.

As of today, Python developers have deployed more than 6,000,000 applications to App Engine, and companies large and small continue to innovate without having to worry about infrastructure. App Engine has built-in support for microservices, auto-scaling, load balancing, traffic splitting and much more. And with a commitment to open source and an open cloud, App Engine continues to welcome contributions from the developer community on both the runtimes and the libraries. To keep up to date with the latest runtime releases, bookmark the release notes pages for Python on App Engine standard and flexible.

Feel free to reach out to us on Twitter using the handle @googlecloud. We're also on the Google Cloud Slack community. To get in touch, request an invite to join the Slack Python channel.

Happy coding!

Google Cloud services are switching Certificate Authority



Earlier this year, Google announced that we had established Google Trust Services to operate our own Root Certificate Authority on behalf of Google and Alphabet. Preparations are proceeding apace and customers that rely on Google services—including Google Cloud services such as Compute Engine, Gmail and others—should be aware that Google will soon begin using a different Certificate Authority (CA). We expect this to have no impact for the vast majority of customers.

Google commonly uses TLS (previously known as SSL) to secure communications between Google services and our users. As part of TLS, a server is required to provide proof of its identity in the form of a certificate that's signed by a CA. Google has long used certificates ultimately issued by the CA “GeoTrust.”

In the coming months, Google will begin using the GlobalSign R2 CA (“GS Root R2”). As it's a well-established and commonly trusted root CA, we expect minimal disruption to clients. However, for TLS clients that operate with custom root stores, we recommend that customers and application vendors ensure that their applications trust at least our minimum root set (PEM file).

The Google Trust Services home page contains links for customers and application vendors to test support for Google-operated roots, including GS Root R2. However, because we may use other roots in the future, customers should use the aforementioned root set and not simply the specific roots currently listed there.

More generally, a reasonable root set is not the only factor in ensuring that TLS clients continue to function over time. TLS clients should also meet these requirements to ensure minimal disruption:
  1. Support for TLS 1.2.
  2. A Server Name Indication (SNI) extension that contains the domain that's being connected to.
  3. Support for the cipher suite TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 using the NIST P-256 curve (a.k.a. “secp256r1”) and uncompressed points.
  4. At a minimum, trust the certificates listed at https://pki.google.com/roots.pem.
  5. Support for DNS Subject Alternative Names (SANs) by the certificate verifier, where SANs may include a single wildcard as the left-most label in the domain name.
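
One quick way to sanity-check a client environment against these requirements (a sketch; the endpoint is just an example and output will vary) is to download the minimum root set and attempt a TLS 1.2 handshake with SNI:

curl -sO https://pki.google.com/roots.pem
openssl s_client -connect www.googleapis.com:443 -servername www.googleapis.com \
    -tls1_2 -CAfile roots.pem
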
We've been working hard to ensure that the transition to a new CA is as smooth as possible for users of our services. Feel free to reach out to us with questions or concerns: Google Cloud Platform | G Suite.

From NoSQL to new SQL: How Spanner became a global, mission-critical database



Now that Cloud Spanner is generally available for mission-critical production workloads, it’s time to tell the story of how Spanner evolved into a global, strongly consistent relational database service.
Recently the Spanner team presented a new paper at SIGMOD ‘17 that offers some fascinating insights into this aspect of Spanner’s “database DNA” and how it developed over time.

Spanner was originally designed to meet Google’s internal requirements for a global, fault-tolerant service to power massive business-critical applications. Today Spanner also embraces the SQL functionality, strong consistency and ACID transactions of a relational database. For critical use cases like financial transactions, inventory management, account authorization and ticketing/reservations, customers will accept no substitute for that functionality.

For example, there's no “spectrum” of less-than-strong consistency levels that will satisfy the mission-critical requirement for a single transaction state that's maintained worldwide; only strong consistency will do. Hence, few if any customers would choose to use an eventually-consistent database for critical OLTP. For Cloud Spanner customers like JDA, Snap and Quizlet, this unique feature set is already resonating.

Here are a few highlights from the paper:


  • Although Spanner was initially designed as a NoSQL key-value store, new requirements led to an embrace of the relational model as well. Spanner’s architects had a relatively specific goal: to provide a service that could support fault-tolerant, multi-row transactions and strong consistency across data centers (with significant influence and code from Bigtable). At the same time, internal customers building OLTP applications also needed a database schema, cross-row transactions and an expressive query language. Thus early in Spanner’s lifecycle, the team drew on Google’s experience building the F1 distributed relational database to bring robust relational semantics and SQL functionality into the Spanner architecture. “These changes have allowed us to preserve the massive scalability of Spanner, while offering customers a powerful platform for database applications,” the authors wrote, adding that, “From the perspective of many engineers working on the Google infrastructure, the SQL vs. NoSQL dichotomy may no longer be relevant.”
  • The Spanner SQL query processor, while recognizable as a standard implementation, has unique capabilities that contribute to low-latency queries. Features such as query range extraction (for runtime analysis of complex expressions that are not easily re-written) and query restarts (compensating for failures, resharding, and other anomalies without significant latency impact) mitigate the complexities of highly distributed queries that would otherwise contribute to latency. Furthermore, the query processor serves both transactional and analytical workloads for low-latency or long-running queries.
  • Long-term investments in SQL tooling have produced a familiar RDBMS-like user experience. As part of a companywide effort to standardize on common SQL functionality for all its relational services (Spanner, Dremel/BigQuery, F1, and so on), Spanner’s user experience emphasizes ANSI SQL constructs and support for nested data as a first-class citizen. “SQL has provided significant additional value in expressing more complex data access patterns and pushing computation to the data,” the authors wrote.
  • Spanner will soon rely on a new columnar format called Ressi designed for database-like access patterns (for hybrid OLAP/OLTP workloads). Ressi is optimized for time-versioned (rapidly changing) data, allowing queries to more efficiently find the most recent values. Later in 2017, Ressi will replace the SSTables format inherited from Bigtable, which, although highly robust, is not explicitly designed for performance.


All in all, “Our path to making Spanner a SQL system led us through the milestones of addressing scalability, manageability, ACID transactions, relational model, schema DDL with indexing of nested data, to SQL,” the authors wrote.

For more details, read the full paper here.