Category Archives: Google Cloud Platform Blog

Product updates, customer stories, and tips and tricks on Google Cloud Platform

Google Cloud Shell will be free through 2016!

In October, we announced the launch of Google Cloud Shell, a Google Cloud Platform feature that lets you manage your infrastructure and applications from the command line in any browser. At that time we committed that Cloud Shell beta would be free through 2015, and today we have extended this to the end of 2016!

With the holiday season upon us, you might not always have access to the computer you use to manage your application day to day. With Cloud Shell, it takes just one click in the console to get quick, temporary access to a VM hosted and managed by Google, with the most common tools needed to manage GCP pre-installed. If you need to store something between sessions, you'll have 5GB of storage space.

Image 1: Cloud Shell in the GCP Cloud Console

We’ve seen strong enthusiasm around these new capabilities from the community:
“Cloud shell, the new UI, and the depth of each service and it’s documentation puts @googlecloud on top for me. Quality over quantity” - @SageProgramming

“Cloud shell + container engine from @googlecloud make quick work of configuring @kubernetesio projects. Nothing to install but a browser!” - @nissyen

But you also told us that a free beta period through the end of 2015 was too short. With that in mind, we’re excited to extend the free beta period for another year, until the end of 2016.

Here are just a few of the things you can try out in Cloud Shell during this period:


We hope you'll give it a try, and we welcome your feedback. If you're interested in volunteering for a user experience research study, please email us at [email protected].

- Posted by Cody Bratt, Product Manager

BigQuery cost controls now let you set a daily maximum for query costs

Today we’re giving you better cost controls in BigQuery to help you manage your spend, along with improvements to the streaming API, a performance diagnostic tool, and a new way to capture detailed usage logs.

BigQuery is a Google-powered supercomputer that lets you derive meaningful analytics in SQL while paying only for what you use. This makes BigQuery an analytics data warehouse that’s both powerful and flexible. Those accustomed to a traditional fixed-size cluster – where cost is fixed, performance degrades with increased load, and scaling is complex – may find granular cost controls helpful in budgeting their BigQuery usage.

In addition, we’re announcing availability of BigQuery access logs in Audit Logs Beta, improvements to the Streaming API, and a number of UI enhancements. We’re also launching Query Explain to provide insight on how BigQuery executes your queries, how to optimize your queries and how to troubleshoot them.

Custom Quotas: No fear of surprise when the bill comes


Custom quotas allow you to set daily limits that help prevent runaway query costs. There are two ways you can set the quota:

  • Project wide: an entire BigQuery project cannot exceed the daily custom quota.
  • Per user: each individual user within a BigQuery project is subject to the daily custom quota.


Query Explain: understand and optimize your queries

Query Explain shows, stage by stage, how BigQuery executes your queries. You can now see whether your queries are write-, read- or compute-heavy, and where any performance bottlenecks might be. You can use Query Explain to optimize queries, troubleshoot errors or understand whether BigQuery Slots might benefit you.

In the BigQuery Web UI, use the “Explanation” button next to “Results” to see this information.
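Beyond the Web UI, the stage-by-stage breakdown is also available programmatically as a query plan. Below is a minimal sketch that reads it with the google-cloud-bigquery Python client; the client library and field names are shown for illustration and are an assumption, not part of this announcement.

```python
# Minimal sketch: reading a query's execution stages with the
# google-cloud-bigquery client (library and field names are illustrative,
# not part of the original announcement).
from google.cloud import bigquery

client = bigquery.Client()
job = client.query(
    "SELECT corpus, COUNT(*) AS n "
    "FROM `bigquery-public-data.samples.shakespeare` "
    "GROUP BY corpus"
)
job.result()  # wait for the query to finish

# Each entry corresponds to one stage in the "Explanation" view.
for stage in job.query_plan:
    print(stage.name, stage.records_read, stage.records_written)
```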

Improvements to the Streaming API

Data is most valuable when it’s fresh, but loading data into an analytics data warehouse usually takes time. BigQuery is unique among warehouses in that it can easily ingest a stream of up to 100,000 rows per second per table, available for immediate analysis. Some customers even stream 4.5 million rows per second by sharding ingest across tables. Today we’re bringing several improvements to the BigQuery Streaming API.

  • Streaming API in EU locations. It’s not just for the US anymore: you may now use the Streaming API to load data into your BigQuery datasets residing in the EU.
  • Template tables are a new way to manage related tables used for streaming. They allow an existing table to serve as a template for a streaming insert request. The generated table will have the same schema, and be created in the same dataset and project as the template table. Better yet, when the schema of the template table is updated, the schema of the tables generated from it will also be updated (see the sketch after this list).
  • No more “warm-up” delay. After streaming the first row into a table, we no longer require a warm-up period of a couple of minutes before the table becomes available for analysis. Your data is available immediately after the first insertion.
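A streaming insert against a template table might look like the sketch below, using the google-cloud-bigquery Python client; the project, dataset and table names are hypothetical, and the client library is a more recent interface than the raw Streaming API described in this post.

```python
# Illustrative sketch: streaming rows through a template table with the
# google-cloud-bigquery client (project, dataset, and table names are
# hypothetical).
from google.cloud import bigquery

client = bigquery.Client()
template = client.get_table("my_project.my_dataset.events_template")

rows = [{"user": "alice", "action": "login"}]

# template_suffix tells BigQuery to stream into events_template_20151216,
# creating it with the template's schema if it doesn't exist yet.
errors = client.insert_rows_json(template, rows, template_suffix="_20151216")
if errors:
    print("Insert errors:", errors)
```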

Create a paper trail of queries with Audit Logs Beta


BigQuery Audit Logs form an audit trail of every query, every job and every action taken in your project, helping you analyze BigQuery usage and access at the project level, or down to individual users or jobs. Please note that Audit Logs is currently in Beta.

Audit Logs can be filtered in Cloud Logging, or exported back to BigQuery with one click, allowing you to analyze your usage and spend in real time in SQL, as in the sketch below.
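For example, once the logs are exported into a BigQuery dataset, a query like the following can summarize activity per user. The dataset, table wildcard and field path here are illustrative assumptions that depend on how your export is configured, not a documented schema.

```python
# Illustrative only: summarizing exported audit logs per user. The table
# wildcard and field path are assumptions that depend on your export
# configuration.
from google.cloud import bigquery

client = bigquery.Client()
sql = """
SELECT
  protopayload_auditlog.authenticationInfo.principalEmail AS principal,
  COUNT(*) AS actions
FROM `my_project.audit_logs.cloudaudit_googleapis_com_data_access_*`
GROUP BY principal
ORDER BY actions DESC
"""
for row in client.query(sql).result():
    print(row.principal, row.actions)
```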

With today’s announcements, BigQuery gives you more control and visibility. BigQuery is already very easy to use, and with recently launched products like Datalab (a data science notebook integrated with BigQuery), just about anyone in your organization can become a big data expert. If you’re new to BigQuery, take a look at the Quickstart Guide, and the first 1TB of data processed per month is on us. To fully understand the power of BigQuery, check out the documentation and feel free to ask your questions using the “google-bigquery” tag on Stack Overflow.

- Posted by Tino Tereshko, Technical Program Manager

The next generation of managed MySQL offerings on Cloud SQL

Google Cloud SQL is an easy-to-use service that delivers fully managed MySQL databases. It lets you hand off to Google the mundane, but necessary and often time-consuming, tasks — like applying patches and updates, managing backups and configuring replication — so you can focus on building great applications. And because we use vanilla MySQL, it’s easy to connect from just about any application, anywhere.

The first generation of Cloud SQL launched in October 2011 and has helped thousands of developers and companies build applications. Compute Engine and Persistent Disk have made great advancements since their launch, and the second generation of Cloud SQL builds on that innovation to deliver an even better, more performant MySQL solution at a better price/performance ratio. We’re excited to announce the beta availability of the second generation of Cloud SQL — a new and improved Cloud SQL for Google Cloud Platform.

Speed, more speed and scalability


The two principal goals of the second generation of Cloud SQL are better performance and more scalability per dollar. The performance graph below speaks for itself: second generation Cloud SQL is more than seven times faster than the first generation of Cloud SQL. And it scales to 10TB of data, 15,000 IOPS and 104GB of RAM per instance — well beyond the first generation.

Source: Google internal testing



Yoga for your database (Cloud SQL is flexible)


Cloud users appreciate flexibility. And while flexibility is not a word frequently associated with relational databases, with Cloud SQL we’ve changed that. Flexibility means easily scaling a database up and down. For example, a database that’s growing in size and number of queries per day might require more CPU cores and RAM. A Cloud SQL instance can be changed to allocate additional resources to the database with minimal downtime. Scaling down is just as easy.
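As a rough sketch of what that scaling looks like when scripted, here is how an instance might be moved to a larger tier through the Cloud SQL Admin API using the google-api-python-client library; the project, instance and tier names are placeholders, and the exact request fields may differ by API version.

```python
# Rough sketch: resizing a Cloud SQL instance via the Cloud SQL Admin API
# (google-api-python-client). Project, instance, and tier names are
# placeholders; credentials come from Application Default Credentials.
from googleapiclient import discovery

sqladmin = discovery.build("sqladmin", "v1beta4")

operation = sqladmin.instances().patch(
    project="my-project",
    instance="my-database",
    body={"settings": {"tier": "db-n1-standard-4"}},  # more CPU and RAM
).execute()
print(operation["status"])
```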

Flexibility means easily connecting to your database from any client with Internet access, including Compute Engine, Managed VMs, Container Engine and your workstation. Connectivity from App Engine is only offered for Cloud SQL First Generation right now, but that will change soon. Because we embrace open standards by supporting MySQL Wire Protocol, the standard connection protocol for MySQL databases, you can access your managed Cloud SQL database from just about any application, running anywhere. For example:

  • Use all your favorite tools, such as MySQL Workbench, Toad and the MySQL command-line tool to manage your Cloud SQL instances
  • Get low latency connections from applications running on Compute Engine and Managed VMs
  • Use standard drivers, such as Connector/J, Connector/ODBC, and Connector/NET, making it exceptionally easy to access Cloud SQL from most applications (see the connection sketch below)
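For instance, connecting from Python with a standard MySQL driver is nothing special. The sketch below uses PyMySQL; the IP address, credentials and database name are placeholders.

```python
# Connection sketch using PyMySQL, a standard driver that speaks the MySQL
# Wire Protocol (host, credentials, and database name are placeholders).
import pymysql

connection = pymysql.connect(
    host="173.194.0.1",   # your Cloud SQL instance's IP address
    user="app_user",
    password="app_password",
    database="inventory",
)
try:
    with connection.cursor() as cursor:
        cursor.execute("SELECT VERSION()")
        print(cursor.fetchone())
finally:
    connection.close()
```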


Flexibility also means easily starting and stopping databases. Many databases must run 24x7, but some are used only occasionally for brief or infrequent tasks. Cloud SQL can be managed using the Cloud Console (our browser-based administration console), the command line (part of our Cloud SDK) or a RESTful API. The command line interface (CLI) and API make Cloud SQL administration scriptable and help users maximize their budgets by running their databases only when they’re needed.
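To give a flavor of that scripting, the short sketch below stops an instance by switching its activation policy through the Cloud SQL Admin API; the names are placeholders and the exact fields may vary by API version.

```python
# Sketch: stopping a Cloud SQL instance from a script by setting its
# activation policy (names are placeholders; use "ALWAYS" to start it again).
from googleapiclient import discovery

sqladmin = discovery.build("sqladmin", "v1beta4")
sqladmin.instances().patch(
    project="my-project",
    instance="my-database",
    body={"settings": {"activationPolicy": "NEVER"}},
).execute()
```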

The graph below shows the number of active Cloud SQL database instances running over time. Notice the clusters of five sawtooth-like ridges and then a drop for two additional ridges. These clusters show an increased number of databases running during business hours on Monday through Friday each week. Database activity, measured by the number of active databases, falls outside of business hours, especially on the weekends. This repeated rise and fall of database instances is a great example of flexibility. Its magnitude is helped significantly by first generation Cloud SQL’s ability to automatically sleep when it is not being accessed. While this is not a design goal of the second generation of Cloud SQL, users can quickly create and delete, or start and stop databases that only need to run on occasion. Cloud SQL users get the most from their budget because of the service’s flexibility.



What is a "managed" MySQL database?


Cloud SQL delivers fully managed MySQL databases, but what does that really mean? It means Google will apply patches and updates to MySQL, manage your backups, configure replication and provide automatic failover for High Availability (HA) in the event of a zone outage. It also means that you get Google’s operational expertise for your MySQL database. Google’s team of MySQL experts makes configuring replication and automatic failover a breeze, so your data is protected and available. They also patch your database when important security updates are delivered. You choose when (day and time of week) the updates should be applied, and Google’s team takes care of the rest. This, combined with Cloud SQL’s automatic encryption of database tables, temporary files and backups, ensures your data is secure.

High Availability, replication and backups are configurable, so you can choose what's appropriate for each of your database instances. For development instances, you can choose to opt out of replication and automatic failover, while your production instances are fully protected. Even though we manage the database, you’re still in control.

Pricing: commitment issues


Getting the best Cloud SQL price doesn’t require you to commit to a one- or three-year contract. To get the best price, just run your database 24x7 for the month. That’s it. If you use a database infrequently, you’ll be charged by the minute at the standard price. But there’s no need to decide upfront; Google helps find the savings for you. No commitment, no strings attached. As a bonus, everyone gets the 100% sustained use discount during Beta, regardless of usage.

Ready to get started?


If you haven’t signed up for Google Cloud Platform, do so now and get a $300 credit to test drive Cloud SQL. The second generation Cloud SQL has inexpensive micro instances for small applications, and easily scales up and out to serve performance-intensive applications.

You can also take advantage of our growing partner ecosystem and tools to make working in Cloud SQL even easier. We’ve partnered with Talend, Attunity, Dbvisit and xPlenty to help you streamline the process of loading your data into Cloud SQL and with analytics products Tableau, Looker, YellowFin and Bime so you can easily create rich visualizations for meaningful insights. We’ve also integrated with ScaleArc and WebYog to help you monitor and manage your database and have partnered with service providers like Pythian, so you can have expert support during your Cloud SQL implementations. Reach out to any of our partners if you need help getting up and running.

Bottom Line


Cloud SQL Second Generation makes what customers love about Cloud SQL First Generation faster and more scalable, at a better price per performance.



- Posted by Brett Hesterberg, Product Manager, Google Cloud Platform

Measuring cloud performance just got easier and better

In February 2015, Google Cloud Platform and 30+ industry leaders and researchers launched PerfKit Benchmarker (PKB). PKB is an open source cloud benchmarking tool with more than 500 contributors from across the industry, including major cloud providers, hardware vendors and academia.

Today we're proud to announce our version 1.0 release. PKB supports 9 cloud providers: AliCloud, Amazon Web Services, CloudStack, DigitalOcean, Google Cloud Platform, Kubernetes, Microsoft Azure, OpenStack and Rackspace, as well as the machine under your desk or in your datacenter. It fully automates 26 benchmarks covering compute, network and storage primitives, common applications like Tomcat and Cassandra, and cloud-specific services like object storage and managed MySQL. It also offers popular benchmarks such as EPFL EcoCloud Web Search and EPFL EcoCloud Web Serving.
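To give a sense of how a run is kicked off, the sketch below drives pkb.py from Python. PKB is normally invoked directly from a shell, and the flag values shown here (provider, benchmarks, machine type) are illustrative.

```python
# Rough sketch: launching a PerfKit Benchmarker run from Python. PKB is
# normally run straight from the shell; the flag values are illustrative.
import subprocess

subprocess.run(
    [
        "./pkb.py",
        "--cloud=GCP",
        "--benchmarks=iperf,fio",
        "--machine_type=n1-standard-4",
    ],
    check=True,
)
```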

Since we first released PKB, we've seen strong engagement from researchers, industry partners and universities, making it a real community effort. PKB is being used today to measure performance across public clouds, bare-metal hardware and hardware simulations.
Fig. 1 PerfKit Benchmarker architecture



We're now declaring PerfKit Benchmarker V1 because the community believes we have the right set of benchmarks to cover the most common usage scenarios, the framework provides the right abstractions to make it easy to extend and maintain, and we've achieved the right balance between variance and runtime. PKB will continue to evolve and improve, covering new workloads and scenarios to keep pace with ever-changing cloud development design patterns.

Javier Picorel, a researcher from EPFL EcoCloud, explained why CloudSuite chose to integrate with PKB:
“Cloud computing has become the dominant computing platform for delivering scalable online services to a global user base all over the world. The constant emergence of new services, growing user bases and data deluge result in ever-increasing computational requirements for the service providers. Popular online services, such as web search, social networks and video streaming, are hosted by private or public cloud providers in large cloud-server systems, which comprise thousands of servers. Since its inception, CloudSuite has emerged as a popular suite of benchmarks, both in industry and among academics, to benchmark the performance of cloud servers. CloudSuite facilitates research in the field of servers specialized for cloud workloads (e.g., Scale-out Processors) and the development of real products in the server industry (e.g., Cavium Thunderx Processor).
We believe that PerfKit Benchmarker (PKB) is a step towards the standardization of cloud benchmarking. In essence, we envision PKB as the “SPEC for cloud-server systems.” Naturally, our goals match with PKB's and the strong consortium put together by Google convinced us to team up. On the technical side, we are excited about the standard APIs that PKB provides, enabling the automated deployment of our benchmarks into the most well-known cloud-server providers. We believe that PKB has all the potential to be established as the de-facto standard for cloud benchmarking. Therefore, we expect it to grab the attention of cloud providers, academics and industry, while integrating more and the most recent online services.”

Carlos Torres is a Performance Engineer at Rackspace. At Rackspace, Carlos and his team help other developers performance test their products. They identify critical performance scenarios, develop benchmarks and tools to facilitate the execution and analysis of those scenarios, and provide guidance to other teams to develop their performance tests. Here's what he said:
“There are two main cases where I use PKB. One is to provide data for comparative analysis of hardware/software configurations to understand their performance characteristics, and the other is for measuring and tracking performance across software releases. PKB has brought me multiple benefits, but if I had to choose three, I'd say, speed, reproducibility and flexibility.
Speed: Before PKB, configuring and executing a complex benchmark that made use of a multi-node distributed system, such as a 9-node Hadoop cluster with HDFS, took hours of tedious setup, scripting and validation. Maintenance of those scripts, and knowing the current best practices for deploying such systems was a nightmare. Once you executed a benchmark, gathering the data from the tests usually involved manually executing scripts to scrape, parse and copy the data from multiple machines. Now, with PKB, it is very easy to execute, not one, but even multiple of these benchmarks, against every major cloud, usually with just one command. I can rely on the community's expertise, and for the most part, trust the configurations provided with each of the benchmarks. Finally, PKB makes it really easy to consume the data gathered from the tests, since it produces JSON output of the results.
Reproducibility: Just like in science, reproducibility is a very important aspect of performance engineering. To confirm that either a bottleneck exists, or that it has been fixed, it is necessary to reliably reproduce a workload. Previous to PKB, it was tedious to keep track of all the software versions and configuration settings needed to replicate a benchmark, which sometimes were not documented and hence forgotten. This made reproducibility hard, and error prone. By using the same PKB version, with a single command, I can easily recreate complex benchmarks, and know that I'm executing the same set of software since PKB explicitly tracks versions of the applications, benchmarks and tools used. Also by just sharing the command I used for a test, other users can recreate the same test in no time. 
Flexibility: One of the best features of PKB is the ability to execute the same benchmarks across different cloud providers, machine types and compatible operating system images. While PKB ships with great defaults for most benchmarks, it makes it very easy to execute the benchmarks using custom settings, via command switches or configuration files that a benchmark might optionally accept. PKB doesn't just make executing benchmarks easy; contributing new benchmarks is simple as well. It provides a set of libraries that benchmark writers can use to write, for the most part, OS-agnostic benchmark installations.”

Marcin Karkocha and Mateusz Blaszkowski from Intel are working in the software-defined infrastructure (SDI) business to make reference implementations of private clouds.
“We try to determine how specific cloud configuration options impact the workloads running inside the instances. Based on this, we want to create reference architectures for different clouds. We also run PerfKit benchmarks in order to compare and calculate the capabilities of reference architectures from different providers. In our case, PKB is used in a private cloud; because of that, we have slightly different requirements and problems to solve. We do not focus on comparing public cloud offerings. Instead we try to find out what the most efficient HW/SW configuration is for a specific cloud.
PKB as a framework gives us a possibility to create new plugins for providers and benchmarks. Thanks to this, we are able to easily build a custom benchmarking solution which meets most of our requirements.”

Daniel Sanchez, an Assistant Professor at MIT EECS, told us:
“We are using PKB to investigate new multicore systems. In particular, we are designing new hardware and software techniques that allow servers to provide low, predictable response latencies efficiently. 
PerfKit has made it much easier for us to simulate new hardware techniques on a broad array of cloud computing benchmarks. This is crucial for our work, because traditional benchmark suites are more focused on batch applications, which have quite different needs from cloud computing workloads.”

Our goal has always been to help customers make sense of the various cloud products and providers in a simple, transparent way. We want to be innovative, accountable and inclusive in our approach. We're happy to see this effort being welcomed by the industry and academia, and we welcome new partners and feedback.

We invite you to try PerfKit Benchmarker V1 to measure the cloud, and to join the Open Source Benchmarking effort on GitHub.

Production troubleshooting with Cloud Debugger now available for Python

Do you love Python but hate tracking down bugs in production when time is of the essence? Cloud Debugger can help you identify the root cause in a few clicks. With our lightning-fast, low-overhead debugger agent, you simply select the line of code and the debugger returns the local variables and a full stack trace when that line is next executed on any of your instances – all without halting your application or slowing down any requests.

Throughout this year, we expanded support for Java projects on Google App Engine and Google Compute Engine. Recently, we enabled support for Go projects on Compute Engine. Now Python developers can get in on the fun on App Engine and Compute Engine.
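On Compute Engine, enabling the Python agent is a small change to your application's startup code. The sketch below follows the pattern documented for the google-python-cloud-debugger agent; the module and version labels are arbitrary identifiers you pick for your deployment.

```python
# Sketch: enabling the Cloud Debugger agent in a Python application on
# Compute Engine. The module and version labels are arbitrary identifiers
# used to group debug targets in the console.
try:
    import googleclouddebugger
    googleclouddebugger.enable(
        module="my-service",
        version="1.0",
    )
except ImportError:
    pass  # agent not installed; the application runs normally without it
```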

Cloud Debugger adds zero overhead to projects that are not being actively debugged. It also adds less than 10ms to request latency when capturing application state, without blocking requests to your application.

With this release, Cloud Debugger is now available for Java, Python and Go projects. Try it out today. Support for additional programming languages and frameworks is in the works.

As always, we’d love direct feedback and will be monitoring Stack Overflow for issues and suggestions.

Posted by Keith Smith, Product Manager

2015: the year in cloud

For over a decade, we’ve helped evolve the landscape of cloud computing. In that time, we’ve seen plenty of changes — and the past 12 months have been no exception. From widespread adoption of containers to multi-cloud applications, 2015 was truly transformational.

Here, we’ve put together our top moments and themes of the year. Take a look, then tell us on G+ and Twitter what story or trend you’d add, using #CloudTop10.

1. Enterprise, meet Cloud.

For most organizations, Cloud is no longer a question of “if,” but “when”— and according to new estimates, it’ll be sooner than you might think: 34% of enterprises report plans to host over 60% of their apps on a cloud platform in the next two years. In anticipation, most vendors have taken steps to support enterprise workloads. Just look at Microsoft Azure’s partnership with HP and Google’s custom machine types.

2. Containers rush into the mainstream.

Even a year ago, many developers hadn’t yet given containers a try. Fast-forward to 2015 and we saw containers used not just in testing — but widely adopted in production. In fact, container adoption grew 5X in 2015 according to a recent survey. How did this happen so quickly? Part of the answer lies in the availability of robust open-source technologies like Docker and the Kubernetes project. With these technologies, vendors have been able to accelerate container adoption — from VMware’s vSphere integration to Microsoft’s Docker Client for Windows and our own Container Engine.

3. Big Data needs big insights.

In 2015, Big Data didn't live up to the hype. In a May survey, 77% of organizations felt their big data and analytics deployments were failing or had failed to meet expectations. Yet while the finding is clear, the cause is complex. Siloed teams, high maintenance gear and the need for better tools certainly play a part in the problem. What’s the solution? Most likely, it lies in making tooling and data more accessible to citizen data scientists — whose deep domain knowledge can unlock its true value.

4. Machine learning for all.

The potential benefits of machine learning have been evident for a while. Now, thanks to the increased processing power of computers and data centers, that potential is finally being realized. To help spur this evolution on, software libraries (like TensorFlow) are being open-sourced. This’ll allow ideas and insights to be rapidly exchanged through working code, not just research papers.

5. The Future of IoT.

When most of us hear “Internet of Things” (IoT), we think of the consumer: connecting the thermostat to the watch to the TV and so on. Yet surprisingly, the greatest adoption of IoT is happening in the enterprise. By 2019, it’s estimated that the enterprise market will account for 9.1 billion of 23.3 billion connected devices. That means scale of ingestion and stream-based data processing will become a critical part of every IT strategy—and interest in technologies like Google Cloud Dataflow and Apache Spark is spiking accordingly.

6. API as a business gets big.

Providing application services-on-demand to developers is now a validated business model — as evidenced by the presence of “unicorn” businesses, such as Twilio and Okta. Both companies closed rounds in 2015 at valuations north of $1 billion, and both provide services that developers can incorporate in their applications.

7. Hybrid clouds on the horizon.

Multi-cloud architecture isn’t new: it’s been used for years as a backup and disaster recovery solution. What is new is the rate at which we’re now seeing multi-cloud orchestration tools, like Kubernetes and Netflix’s Spinnaker, being widely deployed. This choice helps prevent lock-in to any one vendor — and with estimates that 50% of enterprises will have hybrid clouds by 2017, this trend shows no signs of slowing down.

8. Shifts and shut-downs.

As the Cloud Platform landscape evolves, we’re seeing increasing consolidation in the market. In part, this is likely due to the cloud’s tremendous hardware and engineering demands. Still, one of the biggest announcements of the year came when Rackspace confirmed it will shift focus from their own cloud offering to supporting third-party cloud infrastructures. With the news that HP will officially shut down Helion in January, this is one trend that’s sure to continue through 2016.

9. Going green.

Customers have spoken and they want their cloud green. What’s still up for debate, however, is how to bring the environmental efficiency of larger, pan-regional data centers to local ones — which may not have the scale to be environmentally efficient.

10. What’s yours?

What cloud story or trend would you add to our list? We want to hear from you: submit your idea on G+ and Twitter, using the hashtag #CloudTop10.

We’ll review all the entries, then select a story — and author — to be featured on our blog.

More ways to discover and manage services on Google Cloud Platform

The cloud console that our customers use to configure and manage Google Cloud Platform resources provides a single, comprehensive location for all GCP services, from App Engine instances to logs to data processing. But which parts of the platform do you use?

Last month, all GCP customers were invited to start using the new console, with features such as pinning and search. With overwhelmingly positive feedback, we’re pleased to announce its release to general availability.

“Wow! Looks great. I love the way you can pin stuff to the top menu. It makes switching between components much easier (notably App Engine and Datastore). I also like the way you can drill into components, so the UI is less cluttered.” - Gwyn Howell, Appogee HR

“The different areas are very well organized now. Very clean. I love that even the side menu can be searched. That is very useful since there are quite a lot of services.” - Noble Ackerson, LYNXFIT

Thank you for helping us improve the console by providing continual feedback during the beta. After some of you reported page loading latency, we discovered and fixed a bug in Angular Material. We also realigned the color palette to improve the experience after several of you noted that the original red palette for Storage could be misinterpreted as a warning bar.

To quickly review, the updated console now enables you to:

  • Pin each of your commonly-used services to the top of the console for fast access
  • Use the search box and its autocomplete options to easily locate the service you wish to manage
  • Access features in several different ways using the new global navigation options 
    • Open the hamburger menu to see all Cloud Platform offerings in one consolidated place
    • Use the keyboard shortcut (‘/’) to quickly enter into search based navigation
  • Focus solely on a single service and view all content within that service in one place
  • Identify and address issues from a configurable dashboard of your resources and services

When you click in the search bar and start typing, you'll see a dynamically populated set of results:

Figure 1: GCP cloud console search box
Clicking on the menu in the top left will expand the full list of available services and allow you to pin your commonly used items (by clicking on the Pin):
Figure 2: GCP console now allows you to pin favorite services.

We encourage you to give the new console a try and use the feedback button to let us know what you think.

- Posted by Stewart Fife, Product Manager, Google Cloud Platform

Improved Compute Engine Quota experience

As part of our constant improvements to the Google Cloud Platform console, we’ve recently updated our Google Compute Engine quotas page. Now you can easily see quota consumption levels and sort to find your most-used resources. This gives you a head start on determining and procuring any additional capacity you need so you hit fewer speed bumps on your road to growth and success.
We’ve also improved the process of requesting more quota, which can be initiated directly from the quotas page by clicking on the “Request increase” button. We’ve added additional checks to the request form that help speed up our response processing time; now most requests are completed in minutes. With these changes, we’re making it even easier to do more with Cloud Platform.

You can access your console at https://console.cloud.google.com and learn more about how GCP can help you build better applications faster at https://cloud.google.com.

Posted by Roy Peterkofsky, Product Manager

Google Cloud Vision API changes the way applications understand images

Have you ever wondered how Google Photos helps you find all your favorite dog photos? With today’s release of Google Cloud Vision API, developers can now build powerful applications that can see, and more importantly understand, the content of images. The uses of Cloud Vision API are game-changing for developers of all types of applications, and we are very excited to see what happens next!

Advances in machine learning, powered by platforms like TensorFlow, have enabled models that can learn and predict the content of an image. Our limited preview of Cloud Vision API encapsulates these sophisticated models as an easy-to-use REST API. Cloud Vision API quickly classifies images into thousands of categories (e.g., "boat", "lion", "Eiffel Tower"), detects faces with associated emotions, and recognizes printed words in many languages. With Cloud Vision API, you can build metadata on your image catalog, moderate offensive content, or enable new marketing scenarios through image sentiment analysis. The REST API can analyze images stored anywhere, or integrate with your image storage on Google Cloud Storage.

The following set of Google Cloud Vision API features can be applied in any combination on an image:

  • Label/Entity Detection picks out the dominant entity (e.g., a car, a cat) within an image, from a broad set of object categories. You can use the API to easily build metadata on your image catalog, enabling new scenarios like image based searches or recommendations.
  • Optical Character Recognition to retrieve text from an image. Cloud Vision API provides automatic language identification, and supports a wide variety of languages.
  • Safe Search Detection to detect inappropriate content within your image. Powered by Google SafeSearch, the feature enables you to easily moderate crowd-sourced content.
  • Facial Detection can detect when a face appears in photos, along with associated facial features such as eye, nose and mouth placement, and likelihood of over 8 attributes like joy and sorrow. We don't support facial recognition and we don’t store facial detection information on any Google server.
  • Landmark Detection to identify popular natural and manmade structures, along with the associated latitude and longitude of the landmark.
  • Logo Detection to identify product logos within an image. Cloud Vision API returns the identified product brand logo, with the associated bounding polygon.


You can currently call the API by embedding an image as part of the request. In future phases, we will add support for integrating with Google Cloud Storage. The Vision API enables you to request one or more annotation types per image.
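A minimal request with the image embedded in the body might look like the sketch below. The endpoint and request shape follow the public Vision API documentation; the API key and file name are placeholders, and details of the limited preview may differ.

```python
# Sketch: a Cloud Vision API annotate request with the image content
# embedded in the request body (the API key and file name are placeholders).
import base64
import json
import urllib.request

with open("dog.jpg", "rb") as f:
    image_content = base64.b64encode(f.read()).decode("utf-8")

body = {
    "requests": [{
        "image": {"content": image_content},
        "features": [
            {"type": "LABEL_DETECTION", "maxResults": 5},
            {"type": "SAFE_SEARCH_DETECTION"},
        ],
    }]
}

request = urllib.request.Request(
    "https://vision.googleapis.com/v1/images:annotate?key=YOUR_API_KEY",
    data=json.dumps(body).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(request) as response:
    print(json.loads(response.read()))
```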



To show a simple example of the Vision API, we have built a fun Raspberry Pi-based platform with just a few hundred lines of Python code calling the Vision API. Our demo robot can roam and identify objects, including smiling faces. This is just one simple example of what can be done with Cloud Vision API:


Aerosense, a subsidiary of Sony Mobile Communications Inc, was among the first early testers to use Cloud Vision API and had some initial feedback to share:


To join the Limited Preview, please sign up here. We cannot wait to see what amazing applications you build with Vision API, and we look forward to hearing from you!

- Posted by Ram Ramanathan, Product Manager, Google Cloud Platform

Containerizing in the real world . . . of Minecraft

Containers are all the rage right now. There are scores of best practices papers and tutorials out there, and "Intro to Containers" sessions at just about every conference even tangentially related to cloud computing. You may have read through the Docker docs, launched an NGINX Docker container, and read through Miles Ward’s Introduction to containers and Kubernetes piece. Still, containers can be a hard concept to internalize, especially if you have an existing application that you’re considering containerizing.

To help you through this conceptual hurdle, I’ve written a four-part series of blog posts that gives you a hands-on introduction to building, updating, and using containers for something familiar: running a Minecraft server. You can check them out here:


In the first part of the series, you’ll learn how to create a container image that includes everything a Minecraft server needs, use that image on Google Compute Engine to run the server, and make it accessible from your Minecraft client. You’ll use the Docker command-line tools to build, test, and run the container, as well as to push the image up into the Google Container Registry for use with a container-optimized instance.


Next, you'll work through the steps needed to separate out storage from the container and learn how to make regular backups of your game. If you’ve ever made a mistake in Minecraft, you know how critical being able to restore world state can be! As Minecraft is always more fun when it’s customized, you'll also learn how to update the container image with modifications you make to the server.properties file.

Finally, you’ll take the skills that you’ve learned and apply them to making something fun and slightly absurd: Minecraft Roulette. This application allows you to randomly connect to one of several different Minecraft worlds using a single IP as your entry point. As you work through this tutorial, you’ll learn the basics of Kubernetes, an open source container orchestrator.

By the end of the series, you’ll have grasped the basics of containers and Kubernetes, and will be set to go out and containerize your own application. Plus, you’ll have had the excuse to play a little Minecraft. Enjoy!

This blog post is not approved by or associated with Mojang or Minecraft.

Posted by Julia Ferraioli, Senior Developer Advocate, Google Cloud Platform