Category Archives: Google Cloud Platform Blog

Product updates, customer stories, and tips and tricks on Google Cloud Platform

Compute Engine machine types with up to 64 vCPUs now ready for your production workloads



Today, we're happy to announce general availability for our largest virtual machine shapes, including both predefined and custom machine types, with up to 64 virtual CPUs and 416 GB of memory.


64 vCPU machine types are available on host machines with our Haswell, Broadwell and Skylake (currently in Alpha) generations of Intel processors.
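If you have the Cloud SDK installed, spinning one up takes a single command. As a quick sketch (the instance names and zone are placeholders), you can use the predefined n1-standard-64 shape, or the custom machine type flags to pick your own combination of vCPUs and memory:

gcloud compute instances create my-64-vcpu-vm \
    --zone us-central1-a \
    --machine-type n1-standard-64

gcloud compute instances create my-custom-vm \
    --zone us-central1-a \
    --custom-cpu 64 \
    --custom-memory 416GB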

Tim Kelton, co-founder and Cloud Architect of Descartes Labs, an early adopter of our 64 vCPU machine types, had this to say:
"Recently we used the 64 vCPU instances during the building of both our global composite imagery layers and GeoVisual Search. In both cases, our parallel processing jobs needed tens of thousands of CPU hours to complete the task. The new 64 vCPU instances allow us to work across more satellite imagery scenes simultaneously on a single instance, dramatically speeding up our total processing times."
The new 64 vCPU machines are available for use today. If you're new to GCP and want to give these larger virtual machines a try, it's easy to get started with our $300 credit for 12 months.

Google Cloud Platform launches Northern Virginia region



Google Cloud Platform (GCP) continues to rapidly expand our global footprint, and we’re excited to announce the availability of our latest cloud region: Northern Virginia.
The launch of Northern Virginia (us-east4) brings the total number of regions serving the Americas to four, joining Oregon, Iowa and South Carolina. We'll continue to turn up new options for developers in this market with future regions in São Paulo, Montreal and California.

Google Cloud customers benefit from our commitment to large-scale infrastructure investments. Each region gives developers additional choice on how to run their applications closest to their customers, while Google’s networking backbone transforms compute and storage infrastructure into a global-scale computer, giving developers around the world access to the same cloud infrastructure that Google engineers use every day.

We've launched Northern Virginia with three zones and an initial set of GCP services.
Incredible user experiences hinge on incredibly performant infrastructure. Developers who want to serve the Northeastern and Mid-Atlantic regions of the United States will see significant reductions in latency when they run their workloads in the Northern Virginia region. Our performance testing shows 25% to 85% reductions in round-trip latency when serving customers in Washington DC, New York, Boston, Montreal and Toronto, compared to using our Iowa or South Carolina regions.
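Taking advantage of the new region is just a matter of choosing one of its zones when you create resources. For example, something like this (the instance name is a placeholder) puts a VM in the region's first zone:

gcloud compute instances create my-instance --zone us-east4-a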
"We are a latency-sensitive business and the addition of the Northern Virginia region will allow us to expand our coverage area and reduce latency to our current users. This will also allow us to significantly increase the capability of our Data Lake platform, which we are looking at as a competitive advantage" — Linh Chung, CIO at Viant, a Time Inc. Company
We want to help you build what’s Next for you. Our locations page provides updates on the availability of additional services, and for guidance on how to build and create highly available applications, take a look at our zones and regions page. Give us a shout to request early access to new regions and help us prioritize what we build next.

Windows on the rise at GCP



It’s been a little over three months since we made our no-charge VM migration tool available for GCP in the Google Cloud Console, and customers have jumped at the chance to move their enterprise workloads to Google Cloud. While customers are moving applications using a variety of source operating systems to Google Cloud, we've been especially excited to see that almost half of the VM migrations to Google Cloud via this new service have been of Microsoft Windows workloads.

Why is this significant to you? Because our goal is to make Google Cloud the best place to run any application, from Windows workloads to new cloud-native applications. We believe that the significant number of Windows applications migrating to Google Cloud through this new service is indicative of strong demand to give enterprise Windows applications the agility, scale and security advantages of Google Cloud.
“We are leveraging Google Cloud to deliver the experiences our customers demand, and we want to make sure that all our workloads can take advantage of Google Cloud’s unique infrastructure and services. Using the free Google Cloud migration tools, we’ve been able to easily move our Windows servers to Google Cloud with near-zero downtime.” — Rob Wilson, CTO at Smyths Toys
We're happy to see customers take advantage of our first-class support for Windows, SQL Server and both .NET and .NET Core on GCP. We've made sure that those applications are well supported: we added support for Windows Server 2016 within weeks of it reaching GA, added support for SQL Server Web, Standard and Enterprise editions (including support for High Availability), integrated with Visual Studio and PowerShell, made all of Google's APIs available via NuGet and joined the .NET Foundation's Technical Steering Committee. Further, with Stackdriver Logging, Error Reporting and Trace support for Windows and .NET, developers and administrators have the support they need to build, deploy and manage their applications. Finally, with the recent announcement of .NET Core support in all of our libraries and tooling, as well as in our App Engine and Container Engine products, you're covered into the future as well.

Internally, we’ve seen other signs of more Windows and .NET workloads running on GCP, including a 57% increase in Windows CPU usage in the second half of 2016. Further, we know that sometimes you need help to take advantage of the full capabilities of GCP, which is why we announced the Windows Partner Program. These top-notch systems integrators will help you to not just “lift & shift,” but rather to “move & improve,” with cutting-edge capabilities such as big data processing, data analytics, machine learning and container management.

Learn more about Windows, SQL Server and .NET on GCP, and don't hesitate to reach out with questions and suggestions. Lots of folks have already made the switch, and we'd love you to join them. Our migration service is offered at no charge, and you get $300 of GCP credits when you sign up, so you can migrate a few servers and see how easy it is to run your Windows apps on GCP. Click here to get started.

Use Google Cloud Client Libraries to store files, save entities, and log data



To develop a cloud application, you usually need access to online object storage, a scalable NoSQL database and a logging infrastructure. To that end, Google Cloud Platform (GCP) provides the Cloud Storage API, the Cloud Datastore API, and the Stackdriver Logging API. Better yet, you can now access those APIs via the latest Google Cloud Client Libraries, which, we're proud to announce, are now generally available (GA) in seven server-side languages: C#, Go, Java, Node.js, PHP, Python and Ruby.

Online object storage

For your object storage needs, the Cloud Storage API enables you, for instance, to upload blobs of data, such as pictures or movies, directly into buckets. To do so in Node.js, for example, you first need to install the Cloud Client Library:

npm install --save @google-cloud/storage

and then simply run the following code to upload a local file into a specific bucket:

const Storage = require('@google-cloud/storage');

// Instantiates a client
const storage = Storage();

// The name of an existing bucket, e.g. "my-bucket"
const bucketName = 'my-bucket';

// The path of the local file to upload, e.g. "./local/path/to/file.txt"
const fileName = './local/path/to/file.txt';

// References the existing bucket
const bucket = storage.bucket(bucketName);

// Uploads the local file to the bucket
bucket.upload(fileName)
  .then((results) => {
    const file = results[0];
    console.log(`File ${file.name} uploaded`);
  });


NoSQL Database

With Cloud Datastore, one of our NoSQL offerings, you can create entities, which are structured objects, and save them in GCP so that they can be retrieved or queried by your application at a later time. Here's an example in Java, where you specify the Maven dependency in the following manner:

<dependency>
  <groupId>com.google.cloud</groupId>
  <artifactId>google-cloud-datastore</artifactId>
  <version>1.0.0</version>
</dependency>

followed by executing this code to create a task entity:

// Imports the Google Cloud Client Library
import com.google.cloud.datastore.Datastore;
import com.google.cloud.datastore.DatastoreOptions;
import com.google.cloud.datastore.Entity;
import com.google.cloud.datastore.Key;

public class QuickstartSample {
  public static void main(String... args) throws Exception {
    // Instantiates a client
    Datastore datastore = DatastoreOptions.getDefaultInstance().getService();

    // The kind for the new entity
    String kind = "Task";

    // The name/ID for the new entity
    String name = "sampletask1";

    // The Cloud Datastore key for the new entity
    Key taskKey = datastore.newKeyFactory().setKind(kind).newKey(name);

    // Prepares the new entity
    Entity task = Entity.newBuilder(taskKey)
        .set("description", "Buy milk")
        .build();

    // Saves the entity
    datastore.put(task);
  }
}

Logging framework

Our libraries also allow you to send log data and events very easily to the Stackdriver Logging API. As a Python developer, for instance, the first step is to install the Cloud Client Library for Logging:

pip install --upgrade google-cloud-logging

Then add the following code to your project (e.g. your __init__.py file):

import logging
import google.cloud.logging
client = google.cloud.logging.Client()
# Attaches a Google Stackdriver logging handler to the root logger
client.setup_logging(logging.INFO)

Then, just use the standard Python logging module to directly report logs to Stackdriver Logging:

import logging
logging.error('This is an error')

We encourage you to visit the client libraries pages for Cloud Storage, Cloud Datastore and Stackdriver Logging to learn more about how to get started programmatically with these APIs across all of the supported languages. To see the full list of APIs covered by the Cloud Client Libraries, or to give us feedback, you can also visit the respective GitHub repositories in the Google Cloud Platform organization.

Putting gRPC multi-language support to the test



gRPC is an RPC framework developed and open-sourced by Google. There are many benefits to gRPC, such as efficient connectivity with HTTP/2, efficient data serialization with Protobuf, bi-directional streaming and more, but one of the biggest benefits is often overlooked: multi-language support.

Out of the box, gRPC supports multiple programming languages: C#, Java, Go, Node.js, Python, PHP, Ruby and more. In the new microservices world, this multi-language support gives you the flexibility to implement services in whatever language and framework you like, and lets gRPC handle the low-level connectivity and data transfer between microservices in an efficient and consistent way.

This all sounds nice in theory but does it work in reality? As a long-time Java and C# developer, I wanted to see how well gRPC delivered on its multi-language promise. The plan was to run a couple of Java gRPC samples to see how gRPC worked in Java. Then, I wanted to see how easy it would be to port those samples into C#. Finally, I wanted to mix and match Java and C# clients/servers and see how well they worked together.

gRPC Java support

First, I wanted to figure out how well gRPC supports individual languages. Getting started with Java is pretty straightforward: just add the Maven or Gradle dependencies and plugins. Ray Tsang, a colleague and Java expert, has written and published some gRPC samples in Java in his GitHub repository, so I started exploring those.

I tried a simple gRPC client and a simple gRPC server written in Java. These are "Hello World"-type samples, where a client sends a request to the server and the server echoes it back. The samples are Maven projects, so I used my favorite Java editor (Eclipse) to import the projects into a new workspace. First, I started the server:

Starting server on port 8080
Server started!

Then, I started the client. As expected, the client sent a request and received a response from the server:

Sending request
Received response: Hello there, Mete

Ray also has a more interesting sample that uses a bi-directional streaming feature. He built a chat server and a chat client based on JavaFX to talk to that server. I was able to get the two chat clients talking to each other through the chat server with little effort.



Two JavaFX clients talking to each other via Java server

gRPC C# support

So far so good. Next, I wanted to see how easy it was to rewrite the same samples in C#. With a little help from the gRPC documentation samples, I was able to create a GreeterClient and a GreeterServer for the Hello World sample. The code is very similar to Java but it looks a little nicer. (Ok, I'm biased in favor of C# :-) )

One minor difference: with Java, you can use Maven or Gradle plugins to generate gRPC stub classes automatically. In the case of C#, you need to bring in the gRPC Tools NuGet package and generate the stub classes with it. Take a look at generate_protos.bat to see how I did that. The good news is that you can rely on the same service definition file to generate Java and C# stub clients, which makes it easy to write client and server apps in different languages.
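To make that concrete, the shared contract for a Hello World-style service is a single .proto file along these lines (this mirrors the canonical Greeter definition from the gRPC documentation), and both the Java and C# stubs are generated from it:

syntax = "proto3";

// A simple greeting service: the client sends a name,
// the server responds with a greeting.
service Greeter {
  rpc SayHello (HelloRequest) returns (HelloReply);
}

message HelloRequest {
  string name = 1;
}

message HelloReply {
  string message = 1;
}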

I also implemented the bi-directional streaming chat example with ChatServer and ChatWindowsClient, but instead of JavaFX, I used Windows Forms. As before, the code is quite similar to the Java version, but gRPC takes advantage of each language to make sure developers are not missing out on language-specific features.

For example, ChatServerImpl.java creates and returns a StreamObserver as a handler for client messages. This works but felt a little unintuitive. On the other hand, ChatServerImpl.cs uses the async/await pattern of C# and writes to the response stream asynchronously, which yields a cleaner implementation.

gRPC multi-language test

The real test for multi-language support is how well Java and C# implementations work together. To test that, I started the Java chat server. Then, I started a Java chat client and a C# chat client both talking to the same Java chat server. It was nice to see the two clients talking to each other through the chat server with no special configuration or effort on my part.
One Windows Form client and one JavaFX client talking to each other via Java server


Conclusion

Designing a framework is hard. Designing a framework that works across different languages while maintaining the unique benefits of each language is even harder. Whether I worked with gRPC in Java or C#, it never felt alien. It's obvious that a lot of thought and effort went into making sure that gRPC was idiomatic for each language. That's great to see from a framework trying to cover a wide range of languages.

If you want to run the samples yourself, take a look at Ray's Java gRPC samples and my C# gRPC samples. You can also watch a recording of Ray’s talk on Java gRPC.

Happy gRPCing! :-)

Building lean containers using Google Cloud Container Builder



Building a Java application requires a lot of files — source code, application libraries, build systems, build system dependencies and of course, the JDK. When you containerize an application, these files sometimes get left in, causing bloat. Over time, this bloat costs you both time and money by storing and moving unnecessary bits between your Docker registry and your container runtime.

A better way to help ensure your container is as small as possible is to separate the building of the application (and the tools needed to do so) from the assembly of the runtime container. Using Google Cloud Container Builder, we can do just that, allowing us to build significantly leaner containers. These lean containers load faster and save on storage costs.

Container layers

Each line in a Dockerfile adds a new layer to a container. Let’s look at an example:

FROM busybox
 
COPY ./lots-of-data /data
 
RUN rm -rf /data
 
CMD ["/bin/sh"]


In this example, we copy the local directory, "lots-of-data", to the "data" directory in the container, and then immediately delete it. You might assume such an operation is harmless, but that's not the case.

The reason is Docker's "copy-on-write" strategy, which makes all previous layers read-only. If a command generates data that's not needed at container runtime and doesn't delete it in the same command, that space cannot be reclaimed.
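You can see this for yourself by building the image and inspecting its layers with docker history, which lists the size each layer contributes. Even though the final filesystem contains no data, the COPY layer still carries the full weight of lots-of-data (the image tag here is just an example):

docker build -t layer-demo .
docker history layer-demo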


Spinnaker containers

Spinnaker is an open source, cloud-focused continuous delivery tool created by Netflix. Spinnaker is actively maintained by a community of partners, including Netflix and Google. It has a microservice architecture, with each component written in Groovy and Java, and uses Gradle as its build tool.

Spinnaker publishes each microservice container on Quay.io. Each service has nearly identical Dockerfiles, so we’ll use the Gate service as the example. Previously, we had a Dockerfile that looked like this:

FROM java:8

COPY . workdir/

WORKDIR workdir

RUN GRADLE_USER_HOME=cache ./gradlew buildDeb -x test

RUN dpkg -i ./gate-web/build/distributions/*.deb

CMD ["/opt/gate/bin/gate"]

With Spinnaker, Gradle is used to do the build, which in this case builds a Debian package. Gradle is a great tool, but it downloads a large number of libraries in order to function. These libraries are essential to the building of the package, but aren’t needed at runtime. All of the runtime dependencies are bundled up in the package itself.

As discussed before, each command in the Dockerfile creates a new layer in the container. If data is generated in that layer and not deleted in the same command, that space cannot be recovered. In this case, Gradle is downloading hundreds of megabytes of libraries to the "cache" directory in order to perform the build, but we're not deleting those libraries.

A more efficient way to perform this build is to merge the two “RUN” commands, and remove all of the files (including the source code) when complete:

FROM java:8
 
COPY . workdir/
 
WORKDIR workdir
 
RUN GRADLE_USER_HOME=cache ./gradlew buildDeb -x test && \
  dpkg -i ./gate-web/build/distributions/*.deb && \
  cd .. && \
  rm -rf workdir
 
CMD ["/opt/gate/bin/gate"]

This took the final container size down from 652MB to 284MB, a savings of 56%. But can we do even better?

Enter Container Builder

Using Container Builder, we're able to further separate building the application from building its runtime container.

The Container Builder team publishes and maintains a series of Docker containers with common developer tools such as git, docker and the gcloud command line interface. Using these tools, we’ll define a "cloudbuild.yaml" file with one step to build the application, and another to assemble its final runtime environment.

Here's the "cloudbuild.yaml" file we'll use:
steps:
- name: 'java:8'
  env: ['GRADLE_USER_HOME=cache']
  entrypoint: 'bash'
  args: ['-c', './gradlew gate-web:installDist -x test']
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', 
         '-t', 'gcr.io/$PROJECT_ID/$REPO_NAME:$COMMIT_SHA', 
         '-t', 'gcr.io/$PROJECT_ID/$REPO_NAME:latest',
         '-f', 'Dockerfile.slim', '.']
 
images:
- 'gcr.io/$PROJECT_ID/$REPO_NAME:$COMMIT_SHA'
- 'gcr.io/$PROJECT_ID/$REPO_NAME:latest'

Let’s go through each step and explore what is happening.

Step 1: Build the application

- name: 'java:8'
  env: ['GRADLE_USER_HOME=cache']
  entrypoint: 'bash'
  args: ['-c', './gradlew gate-web:installDist -x test']

Our lean runtime container doesn’t contain "dpkg", so we won't use the "buildDeb" Gradle task. Instead, we use a different task, "installDist", which creates the same directory hierarchy for easy copying.

Step 2: Assemble the runtime container

- name: 'gcr.io/cloud-builders/docker'
  args: ['build', 
         '-t', 'gcr.io/$PROJECT_ID/$REPO_NAME:$COMMIT_SHA',
         '-t', 'gcr.io/$PROJECT_ID/$REPO_NAME:latest', 
         '-f', 'Dockerfile.slim', '.']


Next, we invoke the Docker build to assemble the runtime container. We'll use a different file to define the runtime container, named "Dockerfile.slim". Its contents are below:
FROM openjdk:8u111-jre-alpine
 
COPY ./gate-web/build/install/gate /opt/gate
 
RUN apk add --no-cache bash
 
CMD ["/opt/gate/bin/gate"]

The output of the "installDist" Gradle task from Step 1 already has the directory hierarchy we want (i.e. "gate/bin/", "gate/lib/", etc), so we can simply copy it into our target container.

One of the major savings is the choice of the Alpine Linux base layer, "openjdk:8u111-jre-alpine". Not only is this layer incredibly lean, but we also choose to only include the JRE, instead of the bulkier JDK that was necessary to build the application.


Step 3: Publish the image to the registry

images:
- 'gcr.io/$PROJECT_ID/$REPO_NAME:$COMMIT_SHA'
- 'gcr.io/$PROJECT_ID/$REPO_NAME:latest'

Lastly, we tag the container with the commit hash and crown it as the "latest" container. We then push this container to Google Container Registry (gcr.io) with these tags.
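To kick the whole build off, you'd submit the configuration from the root of the repository, along the lines of the following (assuming the Cloud SDK is installed and the Container Builder API is enabled on your project):

gcloud container builds submit --config cloudbuild.yaml .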

Conclusion

In the end, using Container Builder resulted in a final container size of 91.6MB, which is 85% smaller than our initial Dockerfile and even 68% smaller than our improved version.
The major savings come from separating the build and runtime environments, and from choosing a lean base layer for the final container.
Applying this approach across each microservice yielded similar results: our total container footprint shrank from almost 6GB to less than 1GB.

Google Cloud Natural Language API launches new features and Cloud Spanner graduating to GA



Today at Google Cloud Next London we're excited to announce product news that will help customers innovate and transform their businesses faster via the cloud: first, that Google Cloud Natural Language API is adding support for new languages and entity sentiment analysis, and second, that Google Cloud Spanner is graduating to general availability (GA).

Cloud Natural Language API beta


Since we launched Cloud Natural Language API, a fully managed service for extracting meaning from text via machine learning, we’ve seen customers such as Evernote and Ocado enhance their businesses in fascinating ways. For example, they use Cloud Natural Language API to analyze customer feedback and sentiment, extract key entities and metadata from unstructured text such as emails or web articles, and enable novel features (such as deriving action items from meeting notes).

These use cases, among many others, highlighted the need to expand language support and improve the quality of our base NLU technology. We've incorporated this feedback into the product and are pleased to announce the following new capabilities, now in beta:

  • Expanded language support for entity, document sentiment and syntax analysis for the following languages: Chinese (Simplified and Traditional), French, German, Italian, Korean and Portuguese. This is in addition to existing support for English, Spanish and Japanese.
  • Understand sentiment for specific entities, not just the whole document or sentence: We're introducing a new method that identifies entities in a block of text and also determines sentiment for those entities. Entity sentiment analysis is currently only available for the English language. For more information, see Analyzing Entity Sentiment.
  • Improved quality for sentiment and entity analysis: As part of the continuous effort to improve quality of our base models, we're also launching improved models for sentiment and entity analysis as part of this release.

Early access users of this new functionality such as Wootric are already using the expanded language support and new entity sentiment analysis feature to better understand customer sentiment around brands and products. For example, for customer feedback such as “the phone is expensive but has great battery life,” users can now parse that the sentiment for phone is negative while the sentiment for battery life is positive.
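As a sketch of what that looks like in practice, here's roughly how you'd call the new entity sentiment method from the Python client library (the text and output handling are just illustrative):

from google.cloud import language

# Instantiates a client
client = language.LanguageServiceClient()

# The text to analyze
document = language.types.Document(
    content='The phone is expensive but has great battery life.',
    type=language.enums.Document.Type.PLAIN_TEXT)

# Detects entities and the sentiment expressed toward each one
response = client.analyze_entity_sentiment(document=document)

for entity in response.entities:
    print('{}: {:+.1f}'.format(entity.name, entity.sentiment.score))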

As the API becomes more widely adopted, we're looking forward to seeing more interesting and useful applications of it.

Cloud Spanner enters GA

Announced in March at Google Cloud Next ‘17, Cloud Spanner is the world’s first fully managed, horizontally scalable relational database service for mission-critical online transaction processing (OLTP) applications. Cloud Spanner is specifically designed to meet customer requirements in this area for strong consistency, high availability and global scale, qualities that make it unique as a service.

During the beta period, we were thrilled to see customers unlock new use cases in the cloud with Cloud Spanner, including:

  • Powering mission-critical applications like customer authentication and provisioning for multi-national businesses
  • Building consistent systems for business transactions and inventory management in the financial services and retail industries
  • Supporting incredibly high-volume systems that need low latency and high throughput in the advertising and media industries

As with all our other services, GCP handles all the performance, scalability and availability needs automatically in a pay-as-you-go way.
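If you'd like a feel for the developer experience, a minimal query with the Python client library looks roughly like this (the instance and database IDs are placeholders):

from google.cloud import spanner

# Instantiates a client
client = spanner.Client()

# References an existing instance and database
instance = client.instance('my-instance')
database = instance.database('my-database')

# Runs a query inside a read-only snapshot
with database.snapshot() as snapshot:
    results = snapshot.execute_sql('SELECT 1')
    for row in results:
        print(row)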

On May 16, Cloud Spanner will reach a further milestone, becoming generally available. Currently we're offering regional instances, with multi-regional instances coming later this year. We've been Spanner users ourselves for more than five years to support a variety of mission-critical global apps, and we can't wait to see what new workloads you bring to the cloud, and which new ones you build next!

Shazam: Why cloud GPUs finally make sense



At Shazam, we've been heavy users of graphics processing units (GPUs) for our recognition services since 2012, starting with the NVIDIA Tesla M2090 and working our way up to the K80 today. We've traditionally used bare metal servers because GPUs in the cloud were not available, and when they were, they were far too expensive and not performant enough for our needs. Only recently have the economics of GPUs in the cloud really made sense for our business. This is what kicked off our journey to Google Cloud Platform (GCP).

For certain tasks, GPUs are a cost-effective and high-performance alternative to traditional CPUs. They work great with Shazam’s core music recognition workload, in which we match snippets of user-recorded audio fingerprints against our catalog of over 40 million songs. We do that by taking the audio signatures of each and every song, compiling them into a custom database format and loading them into GPU memory. Whenever a user Shazams a song, our algorithm uses GPUs to search that database until it finds a match. This happens successfully over 20 million times per day.

To meet that demand, we've been maintaining a fleet of GPUs on dedicated bare metal servers that we lease from a managed services provider. Because of the time it takes to source and provision a new physical server, we provision enough to meet peak demand and then run that capacity 24/7, 365 days a year. We kept costs under control by improving our algorithms and by taking advantage of ever-evolving GPU architectures and the performance improvements they brought. About six months ago, though, we began experimenting with GPUs running on Compute Engine. Thanks to the speed with which we can dial new instances up and down, we maintain GPU infrastructure to handle average use instead of the full capacity for our maximum peak load. Thus far, we’ve migrated about one-third of our infrastructure into Google Cloud.

In order to efficiently search our massive catalog of music, we maintain multiple levels of GPU server clusters that we call "tiers." A first tier searches against a database of the most popular songs’ audio signatures, while subsequent tiers search longer samples against progressively more and more obscure music databases. In this way, Shazam identifies, say, "Thinking Out Loud" by Ed Sheeran in a single pass from a short sample, but might need several passes and a much longer sample to identify a 1950s recording of a Lithuanian polka group (being able to match really obscure music in addition to popular music is what makes using Shazam such a magical experience for our users).

Increasing the hit rate on the first tier of servers depends on keeping the index files up to date with the latest popular music. That's hard to do given how quickly music falls in and out of favor. Some surges in traffic we can plan and pre-scale for, such as the Super Bowl, the Grammys, or even our first branded game show, "BEAT SHAZAM." Other surges we cannot predict: say, a local radio station in a large market reviving an old R&B hit, or a track that was never popular suddenly being featured in a TV advertisement. And that's not counting new music, which we add to our catalog every day through submissions from labels as well as by in-house music experts who are constantly searching for new music.

Of course, running on bare metal servers, we also need to provision extra capacity for the inevitable failure scenarios we all experience when operating services at scale. One of the amazing benefits of running in Google is that we can now replace a failed node in just minutes with a brand new one "off the shelf" instead of keeping a pool of nodes around just waiting for failures. At our managed services provider, we had to provision GPUs in groups of four cards per machine, with two dies per card. That meant that we could lose up to eight shards of our database when a node failed. Now, in Google, we provision one VM per shard, which localizes the impact of a node failure to a single shard instead of eight.

An unexpected benefit of using Google Cloud GPUs has been to increase how often we recalculate and update our audio signature database, which is actually quite computationally intense. On dedicated infrastructure, we update the index of popular songs daily. On Google Cloud, we can recompile the index and reimage the GPU instance in well under an hour, so the index files are always up-to-date.

This flexibility allows us to begin considering dynamic cluster configurations. For instance, because of the way our algorithm works, it's much easier for us to identify songs that were Shazamed in a car, which is a relatively quiet environment, than it is to identify songs Shazamed in a restaurant, where talking and clanging obscure the song's audio signature. With the flexibility that cloud-based GPUs afford us, we have many more options available for configuring our tiers to match the specific demands that our users throw at us at different times of day. For example, we may be able to reconfigure our clusters according to time of day: morning drive time versus happy hour at the bar.

It’s exciting to think about the possibilities that using GPUs in Google Cloud opens up, and we look forward to working with Google Cloud as it adds new GPU offerings to its lineup.

You can find out more details about our move to Google Cloud Platform here: https://blog.shazam.com/moving-gpus-to-google-cloud-36edb4983ce5

Google Cloud Launcher adds more container support



Containers are a repeatable and reliable way to deploy on Google Cloud Platform (GCP). With more and more customers adopting containers, there's a clear need for more pre-packaged, secure, maintained container offerings that customers can easily deploy into their environments. A few weeks ago at Google Cloud Next '17, we launched container runtime base images for Debian, Ruby, OpenJDK, Jetty, Node.js and ASP.NET Core. Today, we're pleased to add more Google-maintained containers.


Google container solutions are managed by Google engineers. Since we’re maintaining the images, the containers available on Google Cloud Launcher will be current with the latest application and security updates.

The recipes we use to build the containers are publicly available on GitHub so that you can see how they were created. You can use the scripts and tweak them to create your own flavor of the containers. We strive to ensure these containers are not bloated and are vanilla images.

There’s no additional cost to use these containers beyond the cost of infrastructure. They're compatible with Docker and Kubernetes. The container images can be used on GCP with Google App Engine, Google Container Engine, Docker, or even off GCP in an on-premises Kubernetes environment. They're production-grade, which means you can deploy them onto a compatible virtual machine and run your business on them.
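Using the images should feel familiar if you already use Docker. Pulling one, for example, looks something like this (the exact image path varies by solution, so check each solution's page on Cloud Launcher; the MySQL path below is illustrative):

docker pull launcher.gcr.io/google/mysql5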

Our team of engineers welcomes feedback, so feel free to post on the issue tracker in GitHub for the container solutions. We'd love to hear about any issues or feature requests. View our full list of open source container solutions managed by Googlers here.

If you're new to GCP, it's easy and free to get started with our $300 credit for 12 months.

Solution guide: Best practices for migrating Virtual Machines



Migrating to the cloud can be a challenging project for a company of any size. There are many different considerations in a migration, but one of the core tasks is migrating virtual machines. Given the variety of hardware, hypervisors and operating systems in use today, this can be a complex and daunting prospect.

The customer engineering team at Google Cloud has helped a number of customers migrate to GCP, customers like Spotify and Evernote. Based on those experiences and our understanding of how our own cloud works, we've released an article describing our recommended best practices for migrating virtual machines.

One of the tools that can help customers move to Google Cloud is CloudEndure. CloudEndure powers the Google VM Migration Service, and can simplify the process of moving virtual machines. CloudEndure joined us in this article with practical case studies of migrations that they've done for various customers.

We hope you find this article helpful while migrating to the cloud. If you decide to use the Migration Service, take a look at our tutorial to help guide you through the process.