Category Archives: Google Cloud Platform Blog

Product updates, customer stories, and tips and tricks on Google Cloud Platform

Reliable releases and rollbacks – CRE life lessons



Editor’s note: One of the most common causes of service outages is releasing a new version of the service binaries; no matter how good your testing and QA might be, some bugs only surface when the affected code is running in production. Over the years, Google Site Reliability Engineering has seen many outages caused by releases, and now assumes that every new release may contain one or more bugs.

As software engineers, we all like to add new features to our services; but every release comes with the risk of something breaking. Even assuming that we are appropriately diligent in adding unit and functional tests to cover our changes, and undertaking load testing to determine if there are any material effects on system performance, live traffic has a way of surprising us. These are rarely pleasant surprises.

The release of a new binary is a common source of outages. From the point of view of the engineers responsible for the system’s reliability, that translates to three basic tasks:
  1. Detecting when a new release is actually broken;
  2. Moving users safely from a bad release to a “hopefully” fixed release; and
  3. Preventing too many clients from suffering through a bad release in the first place (“canarying”).
For the purpose of this analysis, we’ll assume that you are running many instances of your service on machines or VMs behind a load balancer such as nginx, and that upgrading your service to use a new binary will involve stopping and starting each service instance.

We’ll also assume that you monitor your system with something like Stackdriver, measuring internal traffic and error rates. If you don’t have this kind of monitoring in place, then it’s difficult to meaningfully discuss reliability; per the Hierarchy of Reliability described in the SRE Book, monitoring is the most fundamental requirement for a reliable system.
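Under these assumptions, the release-then-watch loop looks roughly like this. This is a simulation sketch: the instance model and the `error_rate` callable stand in for your real deployment and monitoring APIs.

```python
def rolling_upgrade(instances, new_version, error_rate, threshold=0.01):
    """Upgrade instances one at a time behind the load balancer; roll
    everything back if the overall error rate exceeds `threshold` at
    any point during the rollout."""
    touched = []
    for inst in instances:
        touched.append((inst, inst["version"]))
        inst["version"] = new_version        # stand-in for stop + restart
        if error_rate() > threshold:
            for i, prev in touched:          # bad release: undo our work
                i["version"] = prev
            return "rolled_back", len(touched)
    return "complete", len(touched)
```

The interesting property is the early exit: the fewer instances you upgrade before the monitoring signal fires, the fewer users ever see the bad release.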

Detection

The best case for a bad release is that, when a service instance is restarted with it, a significant fraction of requests are handled improperly, generating errors such as HTTP 502 or much higher response latencies than normal. In this case, your overall service error rate rises quickly as the rollout progresses through your service instances, and you realize that your release has a problem.

A more subtle case is when the new binary returns errors on only a small fraction of queries: say, user setting-change requests, or requests from users whose names contain an apostrophe. With this failure mode, the problem may only become apparent in your overall monitoring once the majority of your service instances have been upgraded. For this reason, it can be useful to have error and latency summaries for your service instances broken down by binary release version.
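A per-version breakdown of this kind can be computed from request logs; a minimal sketch, assuming each logged request carries the serving binary's version label:

```python
from collections import defaultdict

def error_rate_by_version(request_log):
    """Per-release error rates from an iterable of (version, is_error)
    pairs; in practice the version label would come from each serving
    instance's monitoring metadata."""
    totals, errors = defaultdict(int), defaultdict(int)
    for version, is_error in request_log:
        totals[version] += 1
        errors[version] += bool(is_error)
    return {v: errors[v] / totals[v] for v in totals}
```

In this shape, a release that fails 10% of its own queries stands out immediately, even while the fleet-wide average still looks healthy.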

Rollbacks

Before you plan to roll out a new binary or image to your service, ask yourself: “What will I do if I discover a catastrophic / debilitating / annoying bug in this release?” Not because it might happen, but because sooner or later it is going to happen, and it is better to have a well-thought-out plan in place than to improvise one while your service is on fire.

The temptation for many bugs, particularly if they are not show-stoppers, is to build a quick patch and then “roll forward,” i.e., make a new release that consists of the original release plus the minimal code change necessary to fix the bug (a “cherry-pick” of the fix). We don’t generally recommend this though, especially if the bug in question is user-visible or causing significant problems internally (e.g., doubling the resource cost of queries).

What’s wrong with rolling forward? Put yourself in the shoes of the software developer: your manager is bouncing up and down next to your desk, blood pressure visibly climbing, demanding to know when your fix is going to be released because she has your company’s product director bending her ear about all the negative user feedback he’s getting. You’re coding the fix as fast as humanly possible, because for every minute it’s down another thousand users will see errors in the service. Under this kind of pressure, coding, testing or deployment mistakes are almost inevitable.

We have seen this at Google any number of times, where a hastily deployed roll-forward fix either fails to fix the original problem, or indeed makes things worse. Even if it fixes the problem it may then uncover other latent bugs in the system; you’re taking yourself further from a known-good state, into the wilds of a release that hasn’t been subject to the regular strenuous QA testing.

At Google, our philosophy is that “rollbacks are normal.” When an error is found or reasonably suspected in a new release, the releasing team rolls back first and investigates the problem second. A request for a rollback is not interpreted as an attack on the releasing team, or even the person who wrote the code containing the bug; rather, it is understood as The Right Thing To Do to make the system as reliable as possible for the user. No one will ask “why did you roll back this change?” as long as the rollback changelist describes the problem that was seen.

Thus, for rollbacks to work, the implicit assumption is that they are:

  1. easy to perform; and
  2. trusted to be low-risk.

How do we make the latter true?

Testing rollbacks

If you haven’t rolled back in a few weeks, do a rollback “just because,” aiming to flush out traps such as incompatible versions or broken automation and testing. If the rollback works, roll forward again once you’ve checked your logs and monitoring. If it breaks, roll forward to remove the breakage, then focus all your efforts on diagnosing the cause of the rollback breakage. It is far better to discover such problems while your new release is working well than to be forced off a release that is on fire and have to fight your way back to your known-good original release.

Incompatible changes

Inevitably, there are going to be times when a rollback is not straightforward. One example is when the new release requires a schema change to an in-app database (such as a new column). The danger is that you release the new binary, upgrade the database schema, and then find a problem with the binary that necessitates rollback. This leaves you with a binary that doesn’t expect the new schema, and hasn’t been tested with it.

The approach we recommend here is a feature-free release; starting from version v of your binary, build a new version v+1 which is identical to v except that it can safely handle the new database schema. The new features that make use of the new schema are in version v+2. Your rollout plan is now:
  1. Release binary v+1
  2. Upgrade database schema
  3. Release binary v+2
Now, if there are any problems with either of the new binaries then you can roll back to a previous version without having to also roll back the schema.
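The property that makes this plan safe, namely that at every step some earlier binary is compatible with the live schema, can be captured in a small compatibility table (the version names and schema numbers are illustrative):

```python
# Which schema versions each binary release can safely run against.
# v+1 is the feature-free release that tolerates both schemas;
# v+2 is the feature release that requires the new schema.
COMPATIBLE_SCHEMAS = {
    "v":   {1},
    "v+1": {1, 2},
    "v+2": {2},
}

def rollback_is_safe(binary_version, live_schema):
    """True if rolling back to `binary_version` is safe while
    `live_schema` is the deployed database schema."""
    return live_schema in COMPATIBLE_SCHEMAS[binary_version]
```

Walking the three rollout steps through this table shows why the feature-free release earns its keep: there is always a safe binary to fall back to without touching the schema.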

This is a special case of a more general problem. When you build the dependency graph of your service and identify all its direct dependencies, you need to plan for the situation where any one of your dependencies is suddenly rolled back by its owners. If your launch is waiting for a dependency service S to move from release r to r+1, you have to be sure that S is going to “stick” at r+1. One approach here is to make an ecosystem assumption that any service could be rolled back by one version, in which case your service would wait for S to reach version r+2 before your service moved to a version depending on a feature in r+1.
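That ecosystem assumption reduces to a simple release-gating check; a sketch with releases modeled as integers:

```python
def launch_is_safe(current_releases, feature_releases, rollback_depth=1):
    """Under the assumption that any dependency may be rolled back by up
    to `rollback_depth` versions at any time, only depend on a feature
    once its service is that many releases past the one that introduced
    it. Both arguments map service name -> release number."""
    return all(
        current_releases[svc] >= introduced + rollback_depth
        for svc, introduced in feature_releases.items()
    )
```

With `rollback_depth=1`, a feature introduced in S's release r+1 is only safe to depend on once S has reached r+2, matching the rule above.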

Summary

We’ve learned that there’s no good rollout without a corresponding rollback ready to go, but how can we know when to roll back without having our entire service burned to the ground by a bad release?

In part 2 we’ll look at the strategy of “canarying” to detect real production problems without risking the bulk of your production traffic on a new release.

Solution guide: backing up Windows files using CloudBerry Backup with Google Cloud Storage



Modern businesses increasingly depend on their data as a foundation for their operation. The more critical the reliance is on that data, the more important it is to ensure that data is protected with backups. Unfortunately, even by taking regular backups, you're still susceptible to data loss from a local disaster or human error. Thus, many companies entrust their data to geographically distributed cloud storage providers like Google Cloud Platform (GCP). And when they do, they want convenient cloud backup automation tools that offer flexible backup options and quick on-demand restores.

One such tool is CloudBerry Backup (CBB), which has the following capabilities:

  • Creating incremental data copies with low impact on production workloads
  • Data encryption in transit on all transfer paths
  • Flexible retention policy, allowing you to balance the volume of data stored and storage space used
  • Ability to carry out hybrid restores with the use of local and cloud storage resources

CBB includes a broad range of features out of the box, allowing you to address most of your cloud backup needs, and is designed to have low impact on production servers and applications.

CBB has a low-footprint backup client that you install on the desired server. After you provision a Google Cloud Storage bucket, attach it to CBB and create a backup plan to immediately start protecting your files in the cloud.

To simplify your cloud backup onboarding, check out the step-by-step tutorial on how to use CloudBerry Backup with Google Cloud Storage and easily restore any files.

Cloud SQL for PostgreSQL: Managed PostgreSQL for your mobile and geospatial applications in Google Cloud



At Google Cloud Next ‘17, we announced support for PostgreSQL as part of Google Cloud SQL, our managed database service. With its extensibility, strong standards compliance and support from a vibrant open-source community, Postgres is the database of choice for many developers, especially for powering geospatial and mobile applications. Cloud SQL already supports MySQL, and now, PostgreSQL users can also let Google take care of mundane database administration tasks like applying patches and managing backups and storage capacity, and focus on developing great applications.

Feature highlights

Storage and data protection
  • Flexible backups: Schedule automatic daily backups or run them on-demand.
  • Automatic storage increase: Enable automatic storage increase and Cloud SQL will add storage capacity whenever you approach your limit.

Connections
  • Open standards: We embrace the PostgreSQL wire protocol (the standard connection protocol for PostgreSQL databases) and SSL, so you can access your database from nearly any application, running anywhere.
  • Security features: Our Cloud SQL Proxy creates a local socket and uses OAuth to help establish a secure connection with your application or PostgreSQL tool. It automatically creates the SSL certificate and makes more secure connections easier for both dynamic and static IP addresses.

Extensibility
  • Geospatial support: Easily enable the popular PostGIS extension for geospatial objects in Postgres.
  • Custom instance sizes: Create your Postgres instances with the optimal amount of CPU and memory for your workloads.


Create Cloud SQL for PostgreSQL instances customized to your needs.


More features coming soon

We’re continuing to improve Cloud SQL for PostgreSQL during beta. Watch for the following:

  • Automatic failover for high availability
  • Read replicas
  • Additional extensions
  • Precise restores with point-in-time recovery
  • Compliance certification as part of Google’s Cloud Platform BAA

Case study: Descartes Labs delves into Earth’s resources with Cloud SQL for PostgreSQL

Using deep learning to make sense of vast amounts of image data from Google Earth Engine, NASA, and other satellites, Descartes Labs delivers invaluable insights about natural resources and human population. They provide timely and accurate forecasts on such things as the growth and health of crops, urban development, the spread of forest fires and the availability of safe drinking water across the globe.

Cloud SQL for PostgreSQL integrates seamlessly with the open-source components that make up Descartes Labs’ environment. Google Earth Engine combines a multi-petabyte catalog of satellite imagery and geospatial datasets with planetary-scale analysis capabilities and makes it available for scientists, researchers and developers to detect changes, map trends and quantify differences on the Earth's surface. With ready-to-use data sets and an API, Earth Engine data is core to Descartes Labs’ product. Combining this with NASA data and the popular OpenStreetMap data, Descartes Labs takes full advantage of the open source community.

Descartes Labs’ first application tracks corn crops based on a 13-year historical backtest. It predicts the U.S. corn yield faster and more accurately than the U.S. Department of Agriculture.

Descartes adopted Cloud SQL for PostgreSQL early on because it allowed them to focus on developing applications rather than on mundane database management tasks. “Cloud SQL gives us more time to work on products that provide value to our customers,” said Tim Kelton, Descartes Labs Co-founder and Cloud Architect. “Our individual teams, who are building micro services, can quickly provision a database on Cloud SQL. They don't need to bother compiling Geos, Proj4, GDAL, and Lib2xml to leverage PostGIS. And when PostGIS isn’t needed, our teams use PostgreSQL without extensions or MySQL, also supported by Cloud SQL.”

According to Descartes Labs, Google Cloud Platform (GCP) is like having a virtual supercomputer on demand, without all the usual space, power, cooling and networking issues. Cloud SQL for PostgreSQL is a key piece of the architecture that backs the company’s satellite image analysis applications.
In developing their newest application, GeoVisual Search, the team benefited greatly from automatic storage increases in Cloud SQL for PostgreSQL. “Ever tried to estimate how a compressed 54GB XML file will expand in PostGIS?” Tim Kelton asked. “It’s not easy. We enabled Cloud SQL’s automatic storage increase, which allows the disk to start at 10GB and, in our case, automatically expanded to 387GB. With this feature, we don’t waste money or time by under- or over-allocating disk capacity as we would on a VM.”
Because the team was able to focus on data models rather than on database management, development of the GeoVisual Search application proceeded smoothly. Descartes’ customers can now find the geospatial equivalent of a needle in a haystack: specific objects of interest in map images.

The screenshot below shows a search through two billion map tiles to find wind turbines.
Tim’s parting advice for startups evaluating cloud solutions: “Make sure the solution you choose gives you the freedom to experiment, lets your team focus on product development rather than IT management and aligns with your company’s budget.”

See what GCP can do for you


Sign up for a $300 credit to try Cloud SQL and the rest of GCP. Start with inexpensive micro instances for testing and development. When you’re ready, you can easily scale them up to serve performance-intensive applications. As a bonus, everyone gets the 100% sustained use discount during beta, regardless of usage.

Our partner ecosystem can help you get started with Cloud SQL for PostgreSQL. To streamline data transfer, reach out to Alooma, Informatica, Segment, Stitch, Talend and Xplenty. For help with visualizing analytics data, try ChartIO, iCharts, Looker, Metabase and Zoomdata.
"PostgreSQL is one of Segment’s most popular database targets for our Warehouses product. Analysts and administrators appreciate its rich set of OLAP features and the portability they’re ensured by it being open source. In an increasingly “serverless” world, Google’s Cloud SQL for PostgreSQL offering allows our customers to eschew costly management and operations of their PostgreSQL instance in favor of effortless setup, and the NoOps cost and scaling model that GCP is known for across their product line."   Chris Sperandio, Product Lead, Segment
"At Xplenty, we see steady growth of prospects and customers seeking to establish their data and analytics infrastructure on Google Cloud Platform. Data integration is always a key challenge, and we're excited to support both Google Cloud Spanner and Cloud SQL for PostgreSQL both as data sources as well as targets, to continue helping companies integrate and prepare their data for analytics. With the robustness of Cloud Spanner and the popularity of PostgreSQL, Google continues to innovate and prove it is a world leader in cloud computing."   Saggi Neumann, CTO, Xplenty

No matter how far we take Cloud SQL, we still feel like we’re just getting started. We hope you’ll come along for the ride.


Crash exploitability analysis on Google Cloud Platform: security in plaintext




When an application or service crashes, do you wonder what caused the crash? Do you wonder if the crash poses a security risk?

An important element in platform hardening is properly handling server process crashes. When a process crashes unexpectedly, it suggests there may be a security problem an attacker could exploit to compromise a service. Even highly reliable user-facing services can depend on internal server processes that crash. At Google, we collect crashes for analysis and automatically flag and analyze those with potential security implications.

Security vulnerabilities in crashes

Analyzing crashes is a widespread security practice — this is why, when you run Google Chrome, you’re asked if it’s okay to send data about crashes back to the company.

At Google Cloud Platform (GCP), we monitor for crashes in the processes that manage customer VMs and across our services, using standard processes to protect customer data in GCP.

There are many different security issues that can cause a crash. One well-known example is a use-after-free vulnerability. A use-after-free vulnerability occurs when you attempt to use a region of memory that’s already been freed.

Most of the time, a use-after-free action simply causes the program to crash. However, if an attacker has the ability to properly manipulate memory, there’s the potential for them to exploit the vulnerability and gain arbitrary code execution capabilities.

One recent example of a use-after-free was CVE-2016-5177. In this instance, a use-after-free was found by an external researcher in the V8 JavaScript Engine used by Chrome. The issue was fixed in a September 2016 release of Chrome.

Analysis tactics

Debugging a single crash can be difficult. But how do you handle debugging crashes when you have to manage thousands of server jobs?

In order to help secure a set of rapidly evolving products such as Google Compute Engine, Google App Engine and the other services that comprise GCP, you need a way to automatically detect problems that can lead to crashes.

In Compute Engine’s early days, when we had a much smaller fleet of virtual machines running at any given time, it was feasible for security engineers to analyze crashes by hand.

We would load crash dumps into gdb and look at the thread that caused a crash. This provided detailed insight into the program state prior to a crash. For example, gdb allows you to see whether a program is executing from a region of memory marked executable. If it’s not, you may have a security issue.

Analyzing crashes in gdb worked well, but as Cloud grew to include more services and more users, it was no longer feasible for us to do as much of this analysis by hand.

Automating analysis

We needed a way to automate checking crashes for use-after-free vulnerabilities and other security issues. That meant integrating with the systems used to collect crash data across Google, and running an initial set of signals against each crash to either flag it as a security problem to be fixed or mark it as requiring further analysis.

Automating this triage was important, because crashes can occur for many reasons and may not pose a security threat. For instance, we expect to see many crashes just from routine stress testing. If, however, a security problem is found, we automatically file a bug that details the specific issue and assigns it an exploitability rating.
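One such signal is the executable-region check described above; a sketch of how it might be automated over Linux /proc-style memory maps (the rating labels here are illustrative, not Google's actual taxonomy):

```python
def parse_maps(maps_text):
    """Parse /proc/<pid>/maps-style lines into (start, end, perms)."""
    regions = []
    for line in maps_text.splitlines():
        addrs, perms = line.split()[:2]
        start, end = (int(a, 16) for a in addrs.split("-"))
        regions.append((start, end, perms))
    return regions

def crash_signal(pc, regions):
    """One triage signal: a program counter outside any executable
    mapping suggests corrupted control flow, so the crash gets flagged
    for security review rather than treated as routine."""
    executable = any(start <= pc < end and "x" in perms
                     for start, end, perms in regions)
    return "routine" if executable else "needs_security_review"
```

A real pipeline would combine many such signals before filing a bug with an exploitability rating; this shows only the shape of a single automated check.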

Always evolving

Maintaining a platform with high security standards means going up against attackers who are always evolving, and we're always working to improve in turn.

We're continually improving our crash analysis to automatically detect more potential security problems, better determine the root cause of a crash and even identify required fixes.

Digging deep on PHP 7.1 for Google App Engine



Developers love to build web applications and APIs with PHP, and we were delighted to announce last week at Google Cloud Next ‘17 that PHP 7.1 is available on Google App Engine. App Engine is our easy-to-use platform for building, deploying, managing and automatically scaling services on Google’s infrastructure. The PHP 7.1 runtime is available on the App Engine flexible environment, and is currently in beta.

Getting started


To help you get started with PHP on App Engine, we’ve built a collection of getting started guides, samples, codelabs and interactive tutorials that walk through creating your code, using our APIs and services, and deploying to production.

When running PHP on App Engine, you can use the tools and databases you already know and love, including Laravel, Symfony, Wordpress, or any other web framework. You can also use MongoDB, MySQL, or Cloud Datastore to store your data. And while the runtime is flexible enough to manage most applications and services, if you want more control over the underlying infrastructure, you can easily migrate to Google Container Engine or Google Compute Engine.

Deploying to App Engine on PHP 7.1


To deploy a simple application to App Engine on PHP 7.1, download and install the Google Cloud SDK. Once you’ve done this, run the following commands:

echo "<?php echo 'Hello, World';" > index.php
gcloud app deploy
gcloud app deploy

This generates an app.yaml with the following values:

env: flex
runtime: php
runtime_config:
  document_root: .

Once the application is deployed, you can view it in the browser, or go to the Cloud Console to view the running instances.

Installing dependencies


For dependency management, we recommend using Composer. With it, dependencies declared in composer.json are automatically installed when deployed to App Engine Flexible Environment. In addition, it uses the PHP version specified in composer.json in your deployment.

composer require "php:7.1.*" --ignore-platform-reqs

Using Google’s APIs and services


Using the Google Cloud client library, you can take advantage of our advanced APIs and services such as our scalable NoSQL database Google Cloud Datastore, Google Cloud Pub/Sub, and Google BigQuery. To use the Google Cloud client library, install the code using Composer (this example assumes composer is installed globally):

composer require google/cloud


This creates a composer.json file requiring the most recent version of Google Cloud PHP (currently 0.24.0):

{
    "require": {
        "google/cloud": "^0.24.0"
    }
}


App Engine detects the project ID of the instance and authenticates using the App Engine service account. That means you can run, say, a BigQuery query with a few lines of code, with no additional authentication! For example, add the following code to index.php to call BigQuery:

<?php
require_once __DIR__ . '/vendor/autoload.php';
$client = new Google\Cloud\BigQuery\BigQueryClient();
$query = 'SELECT TOP(corpus, 10) as title, COUNT(*) as unique_words ' .
         'FROM [publicdata:samples.shakespeare]';
$queryResults = $client->runQuery($query);
foreach ($queryResults->rows() as $result) {
    print($result['title'] . ': ' . $result['unique_words'] . PHP_EOL);
}


Add this to a directory with the above composer.json file, and deploy it to App Engine flexible environment:

gcloud app deploy 
gcloud app browse

The second command will open your browser window to your deployed project, and you will see a printed list of BigQuery results!

Use your favorite framework


The PHP community uses a myriad of frameworks. We have code samples for setting up applications in Laravel, Symfony, Drupal, Wordpress, and Silex, as well as a Wordpress plugin that integrates with Google Cloud Storage. Keep an eye on the tutorials page as we add more frameworks and libraries, and be sure to create an issue for any tutorials you’d like to see.

Commitment to PHP and open source


At Google, we’re committed to open source. As such, the new core PHP Docker runtime, the google-cloud composer package and the Google API client are all open source.


We’re thrilled to welcome PHP developers to Google Cloud Platform, and we’re committed to making further investments to help make you as productive as possible. This is just the start -- stay tuned to the blog and our GitHub repositories to catch the next wave of PHP support on GCP.

We can’t wait to hear from you. Feel free to reach out to us on Twitter @googlecloud, or request an invite to the Google Cloud Slack community and join the #PHP channel.

Discover and redact sensitive data with the Data Loss Prevention API



Last week at Google Cloud Next '17, we introduced a number of security enhancements across Google Cloud, including the Data Loss Prevention API. Like many Google Cloud Platform (GCP) products, the DLP API began its life as an internal product used in development and support workflows. It also uses the same codebase as DLP on Gmail and Drive.

Now in beta, the DLP API gives GCP users access to a classification engine that includes over 40 predefined content templates for credit card numbers, social security numbers, phone numbers and other sensitive data. Users send the API textual data or images and get back metadata such as likelihood and offsets (for text) and bounding boxes (for images).


Be smart with your data

The DLP API helps you minimize what data you collect, expose or copy. For example, it can be used to automatically classify or redact sensitive data from a text stream before you write it to disk, generate logs or perform analysis. Use it to alert users before they save sensitive data in an application or triage content to the right storage system or user based on the presence of sensitive content.
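Given the character offsets the API returns for text findings, redaction before writing to disk can be as simple as the following sketch (the `(start, end)` finding format is a simplification of the API's response):

```python
def redact(text, findings, mask="[REDACTED]"):
    """Mask each finding's (start, end) character span, working right
    to left so that earlier offsets stay valid as the text changes
    length. `findings` mimics the span data a classifier returns."""
    for start, end in sorted(findings, reverse=True):
        text = text[:start] + mask + text[end:]
    return text
```

The right-to-left order is the one subtlety: replacing a span changes the length of everything after it, so masking from the end keeps the remaining offsets correct.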


Your data is your most critical asset

The DLP API helps you to manage and run analytics on cloud data, without introducing additional risk to your organization. Pre-process with the DLP API, then analyze trends in Google BigQuery, understand context with Google Cloud Natural Language API and run predictive models with Cloud Machine Learning Engine, all on redacted textual content.

Try out the DLP API with our demo application. Watch as it detects credit card numbers based on pattern formatting, contextual information and a checksum.
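The checksum in question is the Luhn check; a small sketch of how such a validation step cuts false positives that pattern matching alone would produce:

```python
def luhn_valid(number):
    """Luhn checksum over the digits of `number`; non-digits such as
    spaces or dashes are ignored. A pattern match that fails this
    check is almost certainly not a real card number."""
    digits = [int(d) for d in number if d.isdigit()]
    if not digits:
        return False
    total = sum(digits[-1::-2])      # every other digit, from the right
    for d in digits[-2::-2]:         # the remaining digits, doubled
        total += sum(divmod(d * 2, 10))
    return total % 10 == 0
```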
To find out more and get started, visit the DLP API product page.

Cloud KMS GA, new partners expand encryption options



As you heard at Google Cloud Next ‘17, our Cloud Key Management Service (KMS) is now generally available. Cloud KMS makes it even easier for you to encrypt data at scale, manage secrets and protect your data the way you want, both in the cloud and on-premises. Today, we’re also announcing a number of partner options for using Customer-Supplied Encryption Keys.

Cloud KMS is now generally available.

With Cloud KMS, you can manage symmetric encryption keys in a cloud-hosted solution, whether they’re used to protect data stored in Google Cloud Platform (GCP) or another environment. You can create, use, rotate and destroy keys via our Cloud KMS API, including as part of a secret management or envelope encryption solution. Further, Cloud KMS is directly integrated with Cloud Identity Access Management and Cloud Audit Logging for greater control over your keys.

As we move out of beta, we’re introducing an availability SLA, so you can count on Cloud KMS for your production workloads. We’ve load tested Cloud KMS extensively, and reduced latency so that Cloud KMS can sit in the serving path of your requests.

Ravelin, a fraud detection provider, has continued to use Cloud KMS to encrypt locally stored secrets, including the configurations and authentication credentials used for both customer transactions and internal systems and processes. Cloud KMS allows Ravelin to easily encrypt these secrets for storage.
“Encryption is absolutely critical to any company managing their own systems, transmitting data over a network or storing sensitive data, including sensitive system configurations. Cloud KMS makes it easy to implement best practices for secret management, and its low latency allows us to use it for protecting frequently retrieved secrets. Cloud KMS gives us the cryptographic tools necessary to protect our secrets, and the features to keep encryption practical.”  Leonard Austin, CTO at Ravelin. 

Managing your secrets in Google Cloud


We’ve published recommendations on how to manage your secrets in Google Cloud. Most development teams have secrets that they need to manage at build or run time, such as API keys. Instead of storing those secrets in source code, or in metadata, for many cases we suggest you store secrets encrypted at rest in a Google Cloud Storage bucket, and use Cloud KMS to encrypt those secrets at rest.

Customer-Supplied Encryption Key partners


You now have several partner options for using Customer-Supplied Encryption Keys. Customer-Supplied Encryption Keys (or CSEK, available for Google Cloud Storage and Compute Engine) allow you to provide a 256-bit string, such as an AES encryption key, to protect your data at rest. Typically, customers use CSEK when they have stricter regulatory needs, or need to provide their own key material.
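Generating key material in the expected shape is straightforward: a CSEK is 256 bits of keying material, base64-encoded when supplied to the API. A sketch (not a substitute for your organization's key management practices, which is exactly what the partners below help with):

```python
import base64
import os

def generate_csek():
    """256 bits of random key material, base64-encoded as the Cloud
    Storage and Compute Engine APIs expect for customer-supplied keys."""
    return base64.b64encode(os.urandom(32)).decode("ascii")
```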

To simplify the use of this unique functionality, our partners Gemalto, Ionic, KeyNexus, Thales and Virtru, can generate CSEK keys in the appropriate format. These partners make it easier to generate an encryption key for use with CSEK, and to associate that key to an object in Cloud Storage or a persistent disk, image or instance in Compute Engine. Each partner brings differentiated features and value to the table, which they describe in their own words below.

Gemalto
“Gemalto is dedicated to multi-cloud enterprise key management by ensuring customers have the best choices to maintain high assurance key ownership and control as they migrate operations, workloads and data to the cloud. Gemalto KeySecure has supported Client-Side Encryption with Google Cloud Storage for years, and is now extending support for Customer Supplied Encryption Keys (CSEK)."  Todd Moore, SVP of Encryption Products at Gemalto

Ionic
"We are excited to announce the first of many powerful capabilities leveraging Google's Customer Supplied Encryption Keys (CSEK). Our new Ionic Protect for Cloud Storage solution enables developers to simply and seamlessly use their own encryption keys with the full capabilities of the Ionic platform while natively leveraging Google Cloud Storage.”  Adam Ghetti, Founder and CEO of Ionic

KeyNexus
"KeyNexus helps customers supply their own keys to encrypt their most sensitive data across Google Cloud Platform as well as hundreds of other bring-your-own-key (BYOK) use cases spanning SaaS, IaaS, mobile and on-premise, via secure REST APIs. Customers choose KeyNexus as a centralized, platform-agnostic, key management solution which they can deploy in numerous highly available, scalable and low latency cloud or on-premise configurations. Using KeyNexus, customers are able to supply keys to encrypt data server-side using Customer-Supplied Encryption Keys (CSEKs) in Google Cloud Storage and Google Compute Engine."  Jeff MacMillan, CEO of KeyNexus

Thales
“Protected by FIPS 140-2 Level 3 certified hardware, the Thales nShield HSM uses strong methods to generate encryption keys based on its high-entropy random number generator. Following generation, nShield exports customer keys into the cloud for one-time use via Google’s Customer-Supplied Encryption Key functionality. Customers using Thales nShield HSMs and leveraging Google Cloud Platform can manage their encryption keys from their own environments for use in the cloud, giving them greater control over key material.” — Sol Cates, Vice President of Technical Strategy at Thales e-Security

Virtru
“Virtru offers business privacy, encryption and data protection for Google Cloud. Virtru lets you choose where your keys are hosted and how your content is encrypted. Whether for Google Cloud Storage, Compute Engine or G Suite, you can upload Virtru-generated keys to Google’s CSEK or use Virtru’s client-side encryption to protect content before upload. Keys may be stored on premise or in any public or private cloud.” — John Ackerly, Founder and CEO of Virtru

Encryption by default, and more key management options


Recall that by default, GCP encrypts customer content stored at rest, without any action required from the customer, using one or more encryption mechanisms with keys managed server-side.

Google Cloud provides you with options to choose the approach that best suits your needs. If you prefer to manage your cloud-based keys yourself, select Cloud KMS; and if you’d like to manage keys with a partner or on-premise, select Customer-Supplied Encryption Keys.
Safe computing!

ASP.NET Core containers run great on GCP



With the recent release of ASP.NET Core, the .NET community has a cross-platform, open-source option that allows you to run Docker containers on Google App Engine and manage containerized ASP.NET Core apps with Kubernetes. In addition, last week at Google Cloud Next we announced beta support for ASP.NET Core on App Engine flexible environment. In this post, you’ll learn more about that, as well as support for Google Container Engine and how this support is integrated into Visual Studio and Stackdriver.

ASP.NET Core on App Engine Flexible Environment

Support for ASP.NET Core on App Engine means that you can publish your ASP.NET Core app to App Engine (running on Linux inside a Docker container). To do so, you’ll need an app.yaml that looks like this:

runtime: aspnetcore
env: flex

Use the “runtime” setting of “aspnetcore” to get a Google-maintained and supported ASP.NET Core base Docker image. The new ASP.NET Core runtime also provides Stackdriver Logging for any messages that are routed to standard error or standard output. You can use this runtime to deploy your ASP.NET Core apps to App Engine or to Google Container Engine.

Assuming you have your app.yaml file at the root of your project, you can publish to App Engine flexible environment with the following commands:

dotnet restore
dotnet publish -c Release
copy app.yaml .\bin\Release\netcoreapp1.0\publish\app.yaml
gcloud beta app deploy .\bin\Release\netcoreapp1.0\publish\app.yaml
gcloud app browse

In fact, you don’t even need that last command to publish the app; it simply opens the app in your browser once it’s been published.

ASP.NET Core on Container Engine

To publish this same app to Container Engine, you need a Kubernetes cluster and the corresponding credentials cached on your local machine:

gcloud container clusters create cluster-1
gcloud container clusters get-credentials cluster-1

To deploy your ASP.NET Core app to your cluster, you must first package it in a Docker container. You can do that with Google Cloud Container Builder, a service that builds container images in the cloud without having to have Docker installed. Instead, create a new file in the root of your project called cloudbuild.yaml with the following content:

steps:
- name: 'gcr.io/gcp-runtimes/aspnetcorebuild-1.0:latest'
- name: 'gcr.io/cloud-builders/docker:latest'
  args: ['build', '-t', 'gcr.io/<projectid>/app:0.0.1', '--no-cache', '--pull', '.']
images:
- 'gcr.io/<projectid>/app:0.0.1'

This file takes advantage of the same ASP.NET Core runtime that we used for App Engine. Replace each <projectid> with the ID of the project where you want to run your app. To build the Docker image for your published ASP.NET Core app, run the following commands:


dotnet restore
dotnet publish -c Release
gcloud container builds submit --config=cloudbuild.yaml .\bin\release\netcoreapp1.0\publish\

Once this is finished, you'll have an image called gcr.io/<projectid>/app:0.0.1 that you can deploy to Container Engine with the following commands:

kubectl run <MYSERVICE> --image=gcr.io/<projectid>/app:0.0.1 --replicas=2 --port=8080

kubectl expose deployment <MYSERVICE> --port=80 --target-port=8080 --type=LoadBalancer

kubectl get services

Replace <MYSERVICE> with the desired name for your service (and <projectid> with your project ID). The first two commands deploy the image to Container Engine, ensure that there are two running replicas of your service, and expose an internet-facing service that load-balances requests between the replicas. The final command shows the external IP address of your newly deployed ASP.NET Core service so that you can see it in action.
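If you prefer declarative configuration over the imperative commands above, the same deployment can be sketched as a manifest. The service name, image tag and API version below are placeholders chosen to match this post; adjust them for your cluster.

```shell
# Write a Deployment manifest roughly equivalent to the `kubectl run`
# command above. Name and image are placeholders.
cat > deployment.yaml <<'EOF'
apiVersion: apps/v1beta1    # Deployment API version at the time of writing
kind: Deployment
metadata:
  name: myservice
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: myservice
    spec:
      containers:
      - name: app
        image: gcr.io/<projectid>/app:0.0.1
        ports:
        - containerPort: 8080
EOF

# kubectl apply -f deployment.yaml
```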


GCP ASP.NET Core runtime in Visual Studio

Being able to deploy from the command line is great for automated CI/CD processes. For more interactive usage, we’ve also built full support for deploying to both App Engine and Container Engine from Visual Studio via the Cloud Tools for Visual Studio extension. Once it’s installed, simply right-click on your ASP.NET Core project in the Solution Explorer, choose Publish to Google Cloud, and choose where to run your code.
If you deploy to App Engine, you can choose App Engine-specific options without needing an app.yaml file.
Likewise, if you choose Container Engine, you receive Kubernetes-specific options that also don’t require any configuration files.
The same underlying commands are executed regardless of whether you deploy from the command line or from within Visual Studio (not counting differences between App Engine and Container Engine, of course). Choose the option that works best for you.

For more details about deploying from Visual Studio to App Engine and to Container Engine, check out the documentation. And if you’d like some help choosing between App Engine and Container Engine, the computing and hosting services section of the GCP overview provides some good guidance.


App Engine in Google Cloud Explorer

If you deploy to App Engine, the App Engine node in Cloud Explorer provides additional information about running services and versions inside Visual Studio.

The Google App Engine node lists all of the services running in your project. You can drill down into each service and see all of the versions deployed for that service, their traffic allocation and their serving status. You can perform the most common operations directly from Visual Studio by right-clicking on the service or version, including managing the service in the Cloud Console, browsing to the service, or splitting traffic between versions of the service.

For more information about App Engine support for ASP.NET Core, I recommend the App Engine documentation for .NET.


Client Libraries for ASP.NET Core

There are more than 100 Google APIs available for .NET on NuGet, which means that it’s easy to get to them from the command line or from Visual Studio.
These same libraries work for both ASP.NET and ASP.NET Core, so feel free to use them from your container-based apps on GCP.


Stackdriver support for ASP.NET Core

Some of the most important libraries to use in your app are those that help once it’s running in production. As mentioned above, simply using the ASP.NET Core runtime for GCP with your App Engine or Container Engine apps automatically routes standard output and standard error to Stackdriver Logging. For more structured log entries, however, you can also use the Stackdriver Logging API for ASP.NET Core directly:


using Google.Cloud.Diagnostics.AspNetCore;
...
public void Configure(ILoggerFactory loggerFactory) {
    loggerFactory.AddGoogle("<projectid>");
}
...
public void LogMessage(ILoggerFactory loggerFactory) {
    var logger = loggerFactory.CreateLogger("[My Logger Name]");
    logger.LogInformation("This is a log message.");
}


To see your log entries, go to the Stackdriver Logging page. If you want to track unhandled exceptions from your ASP.NET Core app so that they show up in Stackdriver Error Reporting, you can do that too:

public void Configure(IApplicationBuilder app) {
    string projectId = "<projectid>";
    string serviceName = "<your-service-name>";
    string version = "<your-service-version>";
    app.UseGoogleExceptionLogging(projectId, serviceName, version);
}


To see unhandled exceptions, go to Stackdriver Error Reporting. Finally, if you want to trace the performance of incoming HTTP requests to ASP.NET Core, you can set that up like so:

public void ConfigureServices(IServiceCollection services) {
    services.AddGoogleTrace("<projectid>");
}
...
public void Configure(IApplicationBuilder app) {
    app.UseGoogleTrace();
}

To see how your app performs, go to the Stackdriver Trace page for detailed reports. For example, a trace report can show a timeline of how a frontend interacted with a backend and how the backend interacted with Datastore.
Stackdriver integration into ASP.NET Core lets you use Logging, Error Reporting and Trace to monitor how well your app is doing in production quickly and easily. For more details, check out the documentation for Google.Cloud.Diagnostics.AspNetCore.

Where are we?

As containers become more central to app packaging and deployment, the GCP ASP.NET Core runtime lets you bring your ASP.NET skills, processes and assets to GCP. You get a Google-supported and maintained runtime and unstructured logging out of the box, as well as easy integration into Stackdriver Logging, Error Reporting and Trace. Further, you get all of the Google APIs in NuGet that support ASP.NET Core apps. And finally, you can choose between automated deployment processes from the command line, or interactive deployment and resource management from inside of Visual Studio.

Combine that with Google’s deep expertise in containers exposed via App Engine flexible environment and Google Container Engine (our hosted Kubernetes offering), and you get a great place to run your ASP.NET Core apps and services.

Google Cloud Functions: a serverless environment to build and connect cloud services



Developers rely on many cloud services to build their apps today: everything from storage and messaging services like Google Cloud Storage and Google Cloud Pub/Sub and mobile development platforms like Firebase, to data and analytics platforms like Google Cloud Dataflow and Google BigQuery. As developers consume more cloud services from their applications, it becomes increasingly complex to coordinate them and ensure they all work together seamlessly. Last week at Google Cloud Next '17, we announced the public beta of a new capability for Google Cloud Platform (GCP) called Google Cloud Functions that allows developers to connect services together and extend their behavior with code, or to build brand new services using a completely serverless approach.

With Cloud Functions you write simple, single-purpose functions that are attached to events emitted from cloud services. Your Cloud Function is triggered when an event being watched is fired. Your code executes in a fully managed environment and can effectively connect or extend services in Google’s cloud, or services in other clouds across the internet; no need to provision any infrastructure or worry about managing servers. A function can scale from a few invocations a day to many millions of invocations without any work from you, and you only pay while your function is executing.

Asynchronous workloads like lightweight ETL, or cloud automation tasks such as triggering an application build no longer require an always-on server that's manually connected to the event source. You simply deploy a Cloud Function bound to the event you want and you're done.
"Semios uses Google Cloud Functions as a critical part of our data ingestion pipeline, which asynchronously aggregates micro-climate telemetry data from our IoT network of 150,000 in-field sensors to give growers real-time insights about their orchards."
— Maysam Emadi, Data Scientist, Semios
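As a sketch of that event-driven pattern, the snippet below writes a minimal background function that logs each Pub/Sub message it receives. The function, topic, bucket and directory names are placeholders, and the commented deploy command follows the beta-era gcloud syntax.

```shell
# A background function (Node.js) that logs the payload of each
# Pub/Sub message that triggers it. All names below are placeholders.
mkdir -p on-message && cat > on-message/index.js <<'EOF'
// Triggered once per message published to the bound Pub/Sub topic.
exports.onMessage = (event, callback) => {
  const message = event.data;
  const payload = message.data
    ? Buffer.from(message.data, 'base64').toString()
    : '(empty message)';
  console.log(`Received: ${payload}`);
  callback();
};
EOF

# gcloud beta functions deploy onMessage --trigger-topic my-topic \
#     --stage-bucket <your-staging-bucket> --source ./on-message
```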
Cloud Functions’ fine-grained nature also makes it a perfect candidate for building lightweight APIs, microservices and webhooks. HTTP endpoints are automatically configured when you deploy a function you intend to trigger using HTTP — no complicated configuration (or integration with other products) required. Simply deploy your function with an HTTP trigger, and we'll give you back a secure URL you can curl immediately.
"At Vroom, we work with a number of partners to market our services and provide us with leads. Google Cloud Functions makes integration with these partners as simple as publishing a new webhook, which scales automatically with use, all without having to manage a single machine." — Benjamin Rothschild, Director of Analytics, Vroom
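To make the HTTP-trigger case concrete, here's a hedged sketch of a minimal HTTP function and its deploy command. The function, bucket and directory names are placeholders; once deployed, you curl the URL that gcloud returns.

```shell
# An HTTP-triggered function (Node.js) that answers every request.
mkdir -p hello-http && cat > hello-http/index.js <<'EOF'
// Respond to each HTTP request with a short message.
exports.helloHttp = (req, res) => {
  res.status(200).send('Hello from Cloud Functions!');
};
EOF

# gcloud beta functions deploy helloHttp --trigger-http \
#     --stage-bucket <your-staging-bucket> --source ./hello-http
# curl https://us-central1-<projectid>.cloudfunctions.net/helloHttp
```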
If you're a mobile developer using Firebase, you can now connect your Firebase app to one or more Cloud Functions by binding a Cloud Function to mutation events in the Firebase Realtime Database, events from Firebase Authentication, and even execute a Cloud Function in response to a conversion event in Firebase Analytics. You can find out more about this Firebase integration at https://firebase.google.com/features/functions.

Cloud Functions also empowers developers to quickly and easily build messaging bots and create custom actions for Google Assistant.
“At Meetup, we wanted to improve developer productivity by integrating task management with Slack. Google Cloud Functions made this integration as simple as publishing a new HTTP function. We’ve now rolled the tool out across the entire organization without ever touching a server or VM.” — Jose Rodriguez, Lead of Engineering Effectiveness, Meetup
In our commitment to openness, Cloud Functions uses only standard, off-the-shelf runtimes and doesn’t require any proprietary modules or libraries in your code: your functions will just work. In addition, the execution environment doesn't rely on a proprietary or forked operating system, which means your dependencies have native library compatibility. We currently support the Node.js runtime and have a set of open source Node.js client libraries for connecting to a wide range of GCP services.

As part of the built-in deployment pipeline we'll resolve all dependencies by running npm install for you (or npm rebuild if you provide packages that require compilation), so you don't have to worry about building for a specific environment. We also have an open source local emulator so you can build and quickly iterate on your Cloud Functions from your local machine.
"Node.js is continually growing across the cloud, especially when it comes to the container and serverless space. This new offering from Google, built in collaboration with the open source community, will provide even more options to the Node.js community going forward.” — Mikeal Rogers, Community Manager, Node.js Foundation
Head over to our quickstart guide to dive right in! Best of all, we've created a generous free tier to allow you to experiment, prototype and play with the product without spending a dime. You can find out more on our pricing page.

We look forward to seeing what you create with Cloud Functions. We’d love to hear your feedback on StackOverflow.

Your favorite languages, now on Google App Engine



Since 2008, Google App Engine has made it easy to build web applications, APIs and mobile backends at Google scale. Our core goal has always been to let developers focus on code, while we handle the rest. Liberated from the need to manage and patch servers, hand-hold rollouts and maintain infrastructure, organizations from startups to Fortune 500 companies have been able to achieve unprecedented time to market, scale and agility on our platform.

At Google Cloud Next last week, we delivered on the promise of Google App Engine while evolving the platform toward the openness and flexibility that developers demand. Any language. Any framework. Any library. The App Engine team is thrilled that the App Engine flexible environment is now generally available.



Your favorite languages, libraries and tools

General availability means support for Node.js, Ruby, Java 8, Python 2.7 or 3.5, and Go 1.8 on App Engine. All of these runtimes are containerized, and are of course available as open source on GitHub.

If we don’t have support for the language you want to use, bring your own. If it runs in a Docker container, you can run it on App Engine. Like Swift? Want to run Perl? Love Elixir? Need to migrate your Parse app? You can do all this and more in App Engine.
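As a sketch of what "bring your own" looks like in practice, a custom runtime needs only an app.yaml declaring `runtime: custom` plus a Dockerfile. The base image and start command below are placeholder assumptions; App Engine flexible environment expects the container to serve HTTP on port 8080.

```shell
# Minimal files for a custom runtime on App Engine flexible environment.
cat > app.yaml <<'EOF'
runtime: custom
env: flex
EOF

cat > Dockerfile <<'EOF'
# Placeholder base image; any image that can serve HTTP on port 8080 works.
FROM swift:3.1
COPY . /app
WORKDIR /app
# Placeholder start command for your app.
CMD ["./run-server.sh"]
EOF

# gcloud app deploy
```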

In addition to the GA supported runtimes, we’re also excited to announce two new beta runtimes today: ASP.NET Core and PHP 7.1.

ASP.NET Core on App Engine goes beta

With this release, we also announced beta support for ASP.NET Core on App Engine. This is a great choice for developers building web applications with C# and .NET Core who want to enjoy the benefits of running on App Engine. The Google Cloud .NET client libraries make it easy to use the full breadth of Google Cloud services from your application, and are currently available on NuGet.

To make developing applications for .NET core on GCP even better, we’ve added support for deploying your apps directly with the Cloud Tools for Visual Studio extension.


To get started, check out the App Engine for .NET getting started guide.

PHP 7.1 on App Engine goes beta

Along with .NET support, PHP 7.1 support on App Engine is now beta. This runtime allows you to choose between PHP 5.6, 7.0, or 7.1. There are step-by-step guides for running Symfony, Laravel or Drupal and our client libraries make it easy to take advantage of Google Cloud Platform’s advanced APIs and services.

To get started, check out the App Engine for PHP getting started guide.

Our commitment to open source

At Google, we’re committed to open source and open development. The Docker-based App Engine runtimes, the client libraries, the tooling: all open source, and available on GitHub.



The best part about these runtimes and libraries is that they run anywhere that supports a Docker-based environment. The code you write for App Engine works across App Engine, Google Container Engine or Google Compute Engine. You can even grab your Docker image and run it on your own infrastructure.

We’re excited to welcome developers of all languages to App Engine. We’d like to extend a warm welcome to Node.js, Ruby and .NET developers, and we’re committed to making further investments to help make you as productive as possible.

If you’re an App Engine developer who loves the unique features of the standard environment, we’ve got more coming for you too. Over the next few months, we’ll be rolling out support for Java 8, updated libraries and improved connectivity with other GCP services. Developers that sign up for the alpha release of Java 8 on the App Engine standard environment can get started today. You can expect multiple announcements on both the standard and flexible environments in App Engine in the coming months.

We can’t wait to hear what you think. If you’re new to Google Cloud Platform (GCP), make sure to sign up and give it a try. Feel free to reach out to us on Twitter @googlecloud, or request an invite to the Google Cloud Slack community and join the #appengine channel.