Category Archives: Google Cloud Platform Blog

Product updates, customer stories, and tips and tricks on Google Cloud Platform

Field of dreams: this week on Google Cloud Platform



The jury’s still out on whether that rectangle in the Google Maps image identified by 15-year-old Canadian William Gadoury is a lost Mayan city . . . or merely an abandoned field.
Meanwhile, Google Cloud Platform customers have no doubts about the value of geospatial data. This week, Land O’Lakes announced its new WinField Data Silo tool, which runs on top of Google Compute Engine and Google Cloud Storage and integrates with the Google Maps API to display real-time agronomic data stored in the system to its users. The fact that those users can be anywhere, whether sitting at their desks or at the consoles of their combine harvesters, was cited as a unique differentiator for GCP.

Speaking of unique, cloud architect Janakiram MSV shares on Forbes the five unique things about GCE that no other IaaS provider can match. First on his list is Google Compute Engine’s sustained usage discount. No argument from us. The longer a VM runs on GCE, the greater the discount, up to 30% for instances that run an entire month. Further, customers don’t need to commit to the instance up front, and any discounts are automatically applied by Google on their bill.
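To make the discount concrete, here’s a small sketch of the tiered model as we understand the published pricing of the time: each successive quarter of the month is billed at a lower fraction of the base rate, which works out to a 30% discount for a VM that runs the whole month. Treat the code itself as illustrative, not a billing tool:

```ruby
# Sketch of GCE's sustained use discount: each quarter of the month is
# billed at a decreasing fraction of the base rate (100%, 80%, 60%, 40%).
TIER_RATES = [1.0, 0.8, 0.6, 0.4]

# Returns the effective fraction of the base price paid, given the
# fraction of the month (0.0..1.0) the instance was running.
def effective_rate(usage_fraction)
  billed = 0.0
  TIER_RATES.each_with_index do |rate, i|
    tier_start = i * 0.25
    break if usage_fraction <= tier_start
    tier_usage = [usage_fraction - tier_start, 0.25].min
    billed += tier_usage * rate
  end
  billed / usage_fraction
end

# A VM that runs the full month pays 70% of the base rate: a 30% discount.
puts (1.0 - effective_rate(1.0)).round(2)   # => 0.3
```

Note that the discount kicks in automatically and incrementally: a VM running half the month already pays an effective 90% of the base rate, with no reservation required.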

No argument from GCP customer Geofeedia either. According to the market intelligence provider, reserved instances have no place in provisioning cloud compute resources. “In the world of agile software, making a one-year, let alone a three-year, prediction about your hardware needs is extremely difficult,” writes Charlie Moad, Geofeedia director of production engineering. Moad also gives shout-outs to GCP networking, firewall rules and its project-centric approach to building multi-region applications.

That’s it for this week on Google Cloud Platform. If you happen to be at Google I/O 2016 next week, check out the cloud sessions. And be sure to come back next Friday for the latest on the lost Mayan city/abandoned field debate.

Feeding 10 billion people with cloud computing: Land O’Lakes, Inc. moves to Google Cloud Platform



In 2050, the world's population will require farms to feed upwards of 10 billion people. This means one farmer will need to feed 250 people, 61 percent more than a farmer feeds today. To meet this exploding demand, Land O’Lakes, Inc. is turning to the cloud to revolutionize modern farming. This effort is coming to life through Land O’Lakes’ WinField brand, a leading provider of agricultural solutions. Using Google Cloud Platform, WinField is launching the WinField Data Silo™, a cloud-based application that helps farmers make better, data-driven decisions.

Farmers make dozens of important decisions throughout their crop’s growing cycle, from when to plant seeds to where and how much to water and fertilize. Traditionally, these decisions have largely been based on intuition. But with cloud-based, big data tools that can capture, ingest and analyze data from multiple sources simultaneously, farmers can access precise information to optimize their yields. To improve decision-making tools for farmers, Land O’Lakes built the Data Silo, a cloud-native system that connects farmers with data.

Data Silo is a data collection application that ingests, stores and shares information between farmers, retailers, and third-party providers. It connects previously disparate systems, letting users quickly share information about crops and farm operations. With it, farmers can easily upload data to the platform, build dashboards and search for information. In return, they receive guidance on agronomic best practices, such as which crop to grow in a particular field, while maintaining control over who owns and accesses the data.

Land O’Lakes worked with Google Cloud Platform Technology Partner, Cloud Technology Partners, to develop Data Silo from the ground up. In the first phase, CTP used Google App Engine and Google Cloud SQL to build a working prototype within weeks of starting the project. Eventually, Land O’Lakes migrated Data Silo to Google Compute Engine to run its web-based PHP application, power the mobile and web-based interfaces and integrate with existing monitoring and security systems. In addition, it implemented a PostgreSQL database with the PostGIS extension to run complex GIS functions.
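To see the kind of GIS work PostGIS handles, here’s a pure-Ruby sketch of the point-in-polygon test behind functions like ST_Contains (e.g. “is this sensor reading inside this field?”). In Data Silo, queries like this would run inside PostGIS itself; the ray-casting code and field coordinates below are purely illustrative:

```ruby
# Ray-casting point-in-polygon test: cast a horizontal ray from the
# point and count how many polygon edges it crosses. An odd count
# means the point is inside.
def inside?(polygon, point)
  px, py = point
  inside = false
  polygon.each_with_index do |(x1, y1), i|
    x2, y2 = polygon[(i + 1) % polygon.length]
    next unless (y1 > py) != (y2 > py)   # does this edge straddle the ray?
    x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1).to_f
    inside = !inside if px < x_cross
  end
  inside
end

field = [[0, 0], [10, 0], [10, 8], [0, 8]]   # a made-up rectangular field
puts inside?(field, [5, 4])    # => true
puts inside?(field, [12, 4])   # => false
```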

A key differentiator for Cloud Platform is its geospatial sophistication. By integrating with the Google Maps API, Land O’Lakes is able to present Data Silo users with geospatial data overlaid with labels that are meaningful to them, on their mobile devices or desktops. Users can sort and view the data according to views they define, such as type of crop, growing periods and yields. Maps update in real time as users upload new data to the system.

Google Cloud Platform features also present unique possibilities for Land O’Lakes and Data Silo. Today, it functions primarily as a place for growers to store and share data about their farming operations. Tomorrow, Data Silo could evolve into a data hub for a variety of agricultural applications, for example, using Google Pub/Sub for data integration, or Google BigQuery and Google Cloud Bigtable to perform analytics that further drive crop yields.

Over the past 50 years, Land O’Lakes has grown and adapted to the changing needs of more than 300,000 farmers. To help them produce more food, with fewer resources and less environmental impact, the company is investing millions of dollars in new technology. Having a flexible, secure cloud that can easily scale is critical to Land O’Lakes’ ability to launch today’s Data Silo technology and future innovations.

To hear more about how Land O’Lakes implemented Google Cloud Platform, watch their technical session at GCP NEXT.

How to get your ASP.NET app up on Google Cloud the easy way


Don’t let anyone tell you that Google Cloud Platform doesn’t support a wide range of platforms and programming languages. We kicked things off with Python and Java on Google App Engine, then PHP and Go. Now, we support the .NET Framework on Google Compute Engine.

Google recently published a .NET client library for services like Google Cloud Datastore and Windows virtual machines running on Compute Engine. With those pieces in place, it’s now possible to run an ASP.NET application directly on Cloud Platform.

To get you up and running fast, we published two new tutorials that show you how to build and deploy ASP.NET applications to Cloud Platform.

The Hello World tutorial shows you how to deploy an ASP.NET application to Compute Engine.
The Bookshelf tutorial shows you how to build an ASP.NET MVC application that uses a variety of Cloud Platform services to make your application reliable, scalable and easy to maintain. First, it shows you how to store structured data with .NET. Do you love SQL? Use Entity Framework to store structured data in Cloud SQL. Tired of connection strings and running ALTER TABLE statements? Use Cloud Datastore instead. The tutorial also shows you how to store binary data and run background worker tasks.
Give the tutorials a try, and please share your feedback! And don’t think we’re done yet; this is just the beginning. Among many efforts, we're hand-coding open source libraries so that calling Google APIs feels familiar to .NET programmers. Stay tuned for more on running ASP.NET applications on Google Cloud Platform.

Stackdriver Trace for App Engine is GA; app latency has nowhere to hide



At Google we're always obsessed with speed, in our products and on the web. Faster sites create happy users and improve engagement. Faster sites also reduce operating costs. Like us, Google Cloud Platform customers place a lot of value in speed — that's why we decided to externalize some of the tools that Google engineers use to optimize sites, including Stackdriver Trace.

A member of the Google Stackdriver family, Stackdriver Trace is now generally available for Google App Engine, which receives over 100 billion requests per day. Stackdriver Trace automatically analyzes each of your applications running on App Engine to identify performance bottlenecks and emergent issues.


Impact of latency on application


Stackdriver Trace provides detailed insight into your application’s runtime performance and latency in near real time. The service continuously evaluates data from each traced request and checks for patterns that indicate performance bottlenecks. To remove the operational overhead of performance analysis, Stackdriver Trace automatically analyzes your application’s performance over time. You can also create reports to evaluate your application’s latency across versions or releases. With the latency shift detection feature, the service analyzes each report to determine whether there has been a significant shift in latency over time.
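Trace’s actual statistical tests aren’t described here, but a minimal sketch conveys what latency shift detection amounts to: compare a latency summary across two release windows and flag a significant change. The median metric, threshold and samples below are all made-up illustrations, not the service’s real algorithm:

```ruby
# Median of a list of latency samples (ms).
def median(samples)
  sorted = samples.sort
  mid = sorted.length / 2
  sorted.length.odd? ? sorted[mid] : (sorted[mid - 1] + sorted[mid]) / 2.0
end

# Flag a "latency shift" when the median moves by more than the given
# relative threshold between two windows (e.g. two app versions).
def latency_shift?(before_ms, after_ms, threshold: 0.25)
  shift = (median(after_ms) - median(before_ms)) / median(before_ms).to_f
  shift.abs > threshold
end

v1 = [100, 110, 95, 105, 102]    # request latencies (ms) before a release
v2 = [150, 160, 148, 155, 152]   # latencies after the release
puts latency_shift?(v1, v2)      # => true (roughly a 50% regression)
```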
The Stackdriver Trace API can be used to add custom spans to a trace. A span represents a unit of work within a trace, such as an RPC request or a section of code. For custom workloads, you can define your custom start and end of a span using the Stackdriver Trace SDK. This data is uploaded to Stackdriver Trace, where you can leverage all the Trace Insights and Analytics features mentioned above.
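As a rough illustration of the span concept, a span is just a named interval recorded around a unit of work. The helper below is hypothetical Ruby, not the real Trace SDK API:

```ruby
# A span: a named unit of work with start and end timestamps.
Span = Struct.new(:name, :start_time, :end_time) do
  def duration_ms
    ((end_time - start_time) * 1000).round(1)
  end
end

# Record a custom span around a block of work. In real code, the
# finished span would be uploaded to Stackdriver Trace.
def traced(name)
  span = Span.new(name, Time.now)
  yield                 # the work being measured, e.g. an RPC or render
  span.end_time = Time.now
  span
end

span = traced("render-template") { sleep 0.05 }
puts "#{span.name}: #{span.duration_ms} ms"
```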

Trace is already integrated with other Stackdriver tools, such as Monitoring, Logging, Error Reporting and Debugger.

Today Trace works seamlessly across your distributed environment, and supports all language runtimes on the App Engine platform. Stay tuned for Trace coverage for other GCP platforms.

Get started today

We’re looking forward to this next step for Google Cloud Platform as we continue to help developers and businesses everywhere benefit from Google’s technical and operational expertise in application performance. Please visit Stackdriver Trace to learn more and contact us with your feedback and ideas.

How Kubernetes takes container workload portability to the next level


Developers love application containers and the Docker and Rocket package formats, because of the package-once, run-anywhere experience that simplifies their jobs. But even the easiest-to-use technologies can spiral out of control and become victims of their own success. Google knows this all too well. With our own internal systems, we realized a long time ago that the most efficient way to share compute resources was containers, and the only way to run containers at scale is to use automation and orchestration. And so we developed cgroups, which we contributed to the Linux kernel to help establish the container ecosystem, and what we affectionately call Borg, our cluster management system.

Flash forward to the recent rise of containers, and it occurred to us that developers at large could benefit from the service discovery, configuration and orchestration that Borg provides, to simplify building and running multi-node container-based applications. Thus Kubernetes was born, an open-source derivative of Borg that anyone can use to manage their container environment.

Earlier this year, we transferred the Kubernetes IP to the Cloud Native Computing Foundation. Under the auspices of the CNCF, members such as IBM, Docker, CoreOS, Mesosphere, Red Hat and VMware work alongside Google to ensure that Kubernetes works not just in Google environments, but in whatever public or private cloud an organization may choose.

What does that mean for container-centric shops? Kubernetes builds on the workload portability that containers provide by helping organizations avoid getting locked into any one cloud provider. Today, you may be running on Google Container Engine, but there may come a time when you wish you could take advantage of IBM’s middleware. Or you may be a longtime AWS shop, but would love to use Google Cloud Platform’s advanced big data and machine learning services. Or you’re on Microsoft Azure today for its ability to run .NET applications, but would like to take advantage of existing in-house resources running OpenStack. By providing an application-centric API on top of compute resources, Kubernetes helps realize the promise of multi-cloud scenarios.

If running across more than one cloud is in your future, choosing Kubernetes as the basis of your container orchestration strategy makes sense. Today, most of the major public cloud providers offer container orchestration and scheduling as a service. Our offering, Google Container Engine (GKE), is based on Kubernetes, and by placing Kubernetes in the hands of the CNCF, our goal is to ensure that your applications will run on any Kubernetes implementation that a cloud provider may offer, or that you run yourself.
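At the heart of that application-centric API is a declarative reconciliation loop: you state the desired state of your application, and the system continuously compares it with observed state and acts to close the gap. A toy Ruby sketch of the idea follows; all the names here are illustrative, not Kubernetes API objects:

```ruby
# Toy reconciliation step: given a desired replica count and the pods
# actually running, compute the actions that close the gap. Kubernetes
# controllers run loops like this continuously.
def reconcile(desired_replicas, running_pods)
  diff = desired_replicas - running_pods.length
  if diff > 0
    # Too few pods: start more.
    Array.new(diff) { |i| [:start_pod, "pod-#{running_pods.length + i}"] }
  elsif diff < 0
    # Too many pods: stop the surplus.
    running_pods.last(-diff).map { |pod| [:stop_pod, pod] }
  else
    []      # observed state already matches desired state
  end
end

puts reconcile(3, ["pod-0"]).inspect
# => [[:start_pod, "pod-1"], [:start_pod, "pod-2"]]
```

Because the API talks about applications and desired state rather than specific machines, the same declaration can be reconciled against any conforming cluster, which is what makes the multi-cloud story credible.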

Even today, it’s possible to run Kubernetes on any cloud environment of your choosing. Don’t believe us? Just look at CoreOS Tectonic, which runs on AWS, or Kubernetes for Microsoft Azure.

Stay tuned for a tutorial about how to set up and run Kubernetes to run multi-cloud applications, or get started right away with a free trial.

Big data, big differentiators: this week on Google Cloud Platform




Sometimes, when doing a roundup of the week’s news, no clear theme emerges, and you’re left with a disjointed list of unrelated tidbits. That wasn’t a problem this week; both on this blog and in the Google Cloud Platform world at large, people had big data and analytics on the brain.

The week started out with a bang, with big data consultancy Mammoth Data releasing the results of a benchmark test comparing Google Cloud Dataflow with Apache Spark. Google’s data processing service did really well, outperforming Spark by two to five times, depending on the number of cores in the test.

Cloud Dataflow is a paid service, of course, but the platform’s API was recently accepted as an incubator project with the Apache Software Foundation, under the name Apache Beam. The rationale, according to Tyler Akidau, Google staff software engineer for Apache Beam, is to “provide the world with an easy-to-use, but powerful model for data-parallel processing, both streaming and batch, portable across a variety of runtime platforms.” You can read Tyler’s full post here. Data Artisans’ Kostas Tzoumas also provides his organization’s take, and the relationship of Apache Beam to Apache Flink.

We were also treated to the next installment of big data guru Mark Litwintschik’s "A billion taxi rides" series, in which he analyzes data about 1.1 billion taxi and Uber rides in NYC with different data analytics tools. Up this week: how he got 33x faster queries on Google Cloud Dataproc; the performance impact of file sizes on Presto query times; and how to build a 50-node Presto cluster on Google Cloud Dataproc.

If that’s not enough for you, be sure to register for a joint webinar with Bitnami, "Visualizing Big Data with Big Money" that uses election data from the Center for Responsive Politics. Using Google BigQuery and the open-source Re:Dash data visualization tool, citizens will be able to grok the enormity of this country’s campaign finance problems depressingly fast.

Ruby on Google App Engine goes beta



We’re excited to announce that the Ruby runtime on Google App Engine is going beta. Frameworks such as Ruby on Rails and Sinatra make it easy for developers to rapidly build web applications and APIs for the cloud. App Engine provides an easy-to-use platform for developers to build, deploy, manage and automatically scale services on Google’s infrastructure.

Getting started


To help you get started with Ruby on App Engine, we’ve built a collection of getting started guides, samples, and interactive tutorials that walk through creating your code, using our APIs and services, and deploying to production.

When running Ruby on App Engine, you can use the tools and databases you already know and love. Use Rails, Sinatra, or any other web framework to build your app. Use PostgreSQL, MySQL, or Cloud Datastore to store your data. The runtime is flexible enough to manage most applications and services, but if you want more control over the underlying infrastructure, you can easily migrate to Google Container Engine or Google Compute Engine.
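Rails and Sinatra both sit on top of the Rack interface, which is ultimately what the runtime serves: an object whose `call` method takes the request environment and returns a status, headers and body. A minimal, gem-free sketch of that contract (the query-string handling here is purely illustrative):

```ruby
# A bare Rack-style application: no framework, just the interface that
# Rails and Sinatra are built on. #call receives the request env hash
# and returns [status, headers, body].
app = lambda do |env|
  name = env["QUERY_STRING"].to_s[/name=(\w+)/, 1] || "world"
  [200, { "Content-Type" => "text/plain" }, ["Hello, #{name}!"]]
end

status, headers, body = app.call("QUERY_STRING" => "name=Ruby")
puts body.first   # => "Hello, Ruby!"
```

In a real deployment you’d wrap an app like this (or a Sinatra/Rails app) in a `config.ru` and deploy it with the gcloud tooling described in the getting started guides.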

Using Google’s APIs & services


Using the gcloud Ruby gem, you can take advantage of Google’s advanced APIs and services, like our scalable NoSQL database Google Cloud Datastore, Google Cloud Pub/Sub and Google BigQuery:

require "gcloud"

# Create an authenticated client and a BigQuery service object
gcloud = Gcloud.new
bigquery = gcloud.bigquery

# Find the 50 most frequent words in the public Shakespeare sample dataset
sql = "SELECT TOP(word, 50) as word, COUNT(*) as count " +
      "FROM publicdata:samples.shakespeare"
job = bigquery.query_job sql

# Block until the query completes, then print each word
job.wait_until_done!
if !job.failed?
  job.query_results.each do |row|
    puts row["word"]
  end
end

Services like BigQuery allow you to take advantage of Google’s unique technology in the cloud to bring life to your applications.

Commitment to Ruby and open source


At Google, we’re committed to open source. The new core Ruby Docker runtime, the gcloud gem, the Google API client: all of it is open source.


We’re thrilled to welcome Ruby developers to the Google Cloud Platform, and we’re committed to making further investments to help make you as productive as possible. This is just the start; stay tuned to the blog and our GitHub repositories to catch the next wave of Ruby support on GCP.

We can’t wait to hear what you think. Feel free to reach out to us on Twitter @googlecloud, or request an invite to the Google Cloud Slack community and join the #ruby channel.

Leveraging the synergies: Synergyse, Google Apps and GCP



Since 2013, over four million people who needed a bit of help with Gmail, Calendar, Drive or Docs have turned to Synergyse, a virtual training coach for the Google Apps suite. Today, we announced that Synergyse is joining the Google family, and that all Google Apps users will be able to install the extension for free while the integration is underway. Run, don’t walk, to get your copy.

Free stuff aside, the Synergyse architecture serves as a powerful reminder of what’s possible when you choose Google as your application design partner: Synergyse reaches millions of consumer and business users who rely on the easy-to-use and integrated Google Apps suite. The Synergyse training modules make their way to users through a simple Google Chrome extension.

Moreover, the Synergyse back-end is powered by Google Cloud Platform, which takes care of delivering interactive context-aware training on-demand, as new users come online. Because Synergyse is hosted in the cloud, the team can easily add or update its training modules, and because it is built on GCP, it’s easy to weave in advanced Google functionality like search and speech recognition. It’s a testament to the cool things you can do when you use openness, collaboration and integration as your guiding product development principles.

While doves were crying: this week on Google Cloud Platform



It was a sad week for music lovers with the news of Prince’s passing, but we take comfort in the fact that things are rocking and rolling on GCP.

The cloud blogosphere was dominated by things that set GCP apart. Take Google App Engine: first released in 2008, it’s taken a while for the world to fully grok its value, even internally. In “Why Google App Engine rocks: A Google engineer’s take,” Google Cloud director of technical support Luke Stone gives a full recounting of his team’s experience with App Engine and other managed services like Google BigQuery. He describes how his team was blown away by its productivity gains, and urges all developers to try the platform-as-a-service route.

Digital gaming store Humble Bundle corroborates this sentiment. In the weekly GCP Podcast, Humble Bundle engineering manager Andy Oxfeld describes how the video game retailer relies on App Engine to scale its website up and down to meet fluctuating demand for its limited-time games. He also describes how the team uses Task Queues, dedicated memcache for faster load times, Google Cloud Storage and BigQuery, to name a few. Check it out.

Storage was another hot topic this week. One of the week’s most talked-about posts comes from Mosha Pasumansky, database luminary and technical lead for Dremel/BigQuery, in which he discusses Capacitor, BigQuery’s columnar storage format. Long story short, Capacitor advances the state of the art in columnar data encoding, and when combined with Google Cloud Platform’s Colossus distributed file system, provides super-fast and secure queries with little to no effort on the part of BigQuery users. Woohoo!
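Capacitor’s actual encodings aren’t public, but run-length encoding, a classic columnar technique, illustrates why storing a column’s values together pays off: long runs of repeated values collapse into (value, count) pairs that are cheap to store and fast to scan. A minimal Ruby sketch:

```ruby
# Run-length encode a column: collapse each run of equal values into a
# [value, count] pair.
def rle_encode(column)
  column.chunk_while { |a, b| a == b }.map { |run| [run.first, run.length] }
end

# Expand [value, count] pairs back into the original column.
def rle_decode(pairs)
  pairs.flat_map { |value, count| [value] * count }
end

country = %w[US US US CA CA US]
encoded = rle_encode(country)
puts encoded.inspect   # => [["US", 3], ["CA", 2], ["US", 1]]
```

Columnar engines like Dremel can often evaluate predicates directly on such encoded runs without decompressing them, which is part of why column stores scan so much faster than row stores for analytics.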

But sometimes you need to do something a little less flashy, like resize a persistent disk. If you’ve been wondering how to do that on Google Compute Engine, wonder no more: GCP developer advocate Mete Atamel has put together a one-minute video tutorial on YouTube that walks you through the basic steps. Best of all, you don’t even need to reboot the associated VM!

Finally, Google was at OpenStack Summit in Austin this week, where Google partner CoreOS demonstrated “Stackanetes,” running OpenStack as a managed Kubernetes service. You can also hear Google product manager Craig McLuckie discuss the benefits of this approach on The Cube with Wikibon analysts Stu Miniman and Brian Gracely. McLuckie also shares his thoughts on working with the open source community, and Google’s evolution from an Internet company to a cloud company.

Now playing: New ISO security and privacy certifications for Google Cloud Platform




Today, Google reiterated its commitment to the security needs of its enterprise customers with the addition of two new certifications for Google Cloud Platform: ISO 27017 for cloud security and ISO 27018 for privacy. We also renewed our ISO 27001 certification for the fourth year in a row.

Google Cloud Platform services covered by these ISO certifications now include Cloud Dataflow, Cloud Bigtable, Container Engine, Cloud Dataproc and Container Registry. These join Compute Engine, App Engine, Cloud SQL, Cloud Storage, Cloud Datastore, BigQuery and Genomics on the list of services that will be regularly audited for these certificates.

Certifications such as these provide independent third-party validations of our ongoing commitment to world-class security and privacy, while also helping our customers with their own compliance efforts. Google has spent years building one of the world’s most advanced infrastructures, and as we make it available to enterprises worldwide, we want to offer more transparency on how we protect their data in the cloud.

More information on Google Cloud Platform Compliance is available here.