Tag Archives: Containers & Kubernetes

Last month today: July on GCP

The month of July saw our Google Cloud Next ‘18 conference come and go, and there was plenty of exciting news, updates and demos to share from the show. Here’s a look at some of the most-read blog posts from July.

What caught your attention this month: Creating the open cloud
  • One of the most-read posts this month covered the launch of our Cloud Services Platform, which allows you to build a true hybrid cloud infrastructure. Some of the key components of Cloud Services Platform include the managed Istio service mesh, Google Kubernetes Engine (GKE) On-Prem and GKE Policy Management, Cloud Build for fully managed CI/CD, and several serverless offerings (more on that below). Combined, these technologies can help you gain the consistency, security, speed and flexibility of the cloud in your local data center, along with the freedom of workload portability to the environment of your choice.
  • Another popular read was a rundown of Google Cloud’s new serverless offerings. These include core serverless compute announcements such as new App Engine runtimes and the general availability of Cloud Functions. The post also covers serverless containers, so you can run serverless workloads in a fully managed container environment; the GKE Serverless add-on, to easily run serverless workloads on Kubernetes Engine; and Knative, the open-source project on which that add-on is built. There are even more features included in this post, too, like Cloud Build, Stackdriver monitoring and Cloud Firestore integration with GCP.
Bringing detailed metrics and Kubernetes apps to the forefront
  • Another must-read post this month for many of you was Transparent SLIs: See Google Cloud the way your application experiences it, announcing the availability of detailed data insights on the GCP services that your workloads use—helping you see like a Google site reliability engineer (SRE). These new service-level indicators (SLIs) go way beyond basic uptime and downtime to delve into response codes, latency and more. You can then separate out metrics by GCP service to see things like API version, location and protocol. The result is that you can filter and sort to get extremely fine-grained information on your software and the GCP services you use, which helps cut resolution times and improve the support experience; a sketch of reading these metrics programmatically follows this list. Transparent SLIs are available now through the Stackdriver monitoring console. Learn more here about the basics of using SLIs and other SRE tools to measure and manage availability.
  • It’s also now faster and easier to find production-ready commercial Kubernetes apps in the GCP Marketplace. These apps are prepackaged and configured to get up and running easily, whether on Kubernetes Engine or other Kubernetes clusters, and run the gamut from security, data analytics and developer tools to storage, machine learning and monitoring.
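For those who want to poke at these metrics programmatically, here is a minimal sketch using the Stackdriver Monitoring API’s Python client. It reads one of the Transparent SLI metric types (the consumed-API request count); the project ID is a placeholder, and the exact labels available depend on the service.

from google.cloud import monitoring_v3
import time

client = monitoring_v3.MetricServiceClient()
now = int(time.time())
interval = monitoring_v3.TimeInterval(
    {"start_time": {"seconds": now - 3600}, "end_time": {"seconds": now}}
)
# Request counts for every GCP service the project consumed in the last hour.
results = client.list_time_series(
    request={
        "name": "projects/my-project",  # placeholder project ID
        "filter": 'metric.type="serviceruntime.googleapis.com/api/request_count"',
        "interval": interval,
        "view": monitoring_v3.ListTimeSeriesRequest.TimeSeriesView.FULL,
    }
)
for series in results:
    print(series.resource.labels["service"], len(series.points))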
There was obviously a lot to talk about at the show, and you can get even more detail on what happened at Next ‘18 here.

Building the cloud back-end
  • For all of you developing cloud apps with Java, the availability of Jib was an exciting announcement last month. This open-source container image builder, available as Gradle and Maven plugins, cuts out several steps from the Docker build flow. Jib does all the work required to package your app into a container image—you don’t need to write a Dockerfile or even have Docker installed. You end up with faster builds and reproducible container images.
  • And on that topic, this best practices for building containers post was a hit, too, giving you tips that will set you up to run your environment more smoothly. The tips in this blog post cover graceful application shutdowns, how to simplify containers and how to choose and tag the container images you’ll use. 
It’s been a busy month at GCP, and we’re glad to share lots of new tools with you. Till next time, build away!

Istio reaches 1.0: ready for prod



Today, Google Cloud is proud to announce, together with our collaborators, that the Istio open-source project has reached the 1.0 milestone. This is a key step toward delivering the Cloud Services Platform that we discussed last week, helping you manage your services in a hybrid world where some of your infrastructure runs on VMs and some in Kubernetes, some services run in the cloud and some on-premises.

Istio: a service mesh

Istio is at its heart a service mesh—software that layers transparently onto an existing distributed application. It collects logs, traces and telemetry, and adds security and policy without embedding client libraries. Istio is also a platform, complete with APIs that let you integrate it with systems for logging, telemetry and policy.

Istio delivers a service-based view of the service interactions across the mesh. Whereas traditional monitoring gives you low-level metrics such as nodes’ CPU consumption, Istio measures the actual traffic between services: requests per second, error rates and latency. It also generates a dependency graph so you can see how services affect one another.

With Istio, your DevOps team gets the tools it needs to run distributed apps smoothly. Istio does canary rollouts, letting you smoke-test a new build to make sure it’s performing well before ramping up. It also offers fault-injection, retry logic and circuit breaking so DevOps teams can do more testing and change network behavior at runtime to keep applications up and running.

And finally, Istio adds security. It can be used to layer mTLS on every call, adding encryption-in-flight and giving you the ability to authorize every single call on your cluster and in your mesh.

Istio in action

Istio provides foundational capabilities for your infrastructure, freeing developers to work on code that is critical to your business. But there’s only one way to prove that Istio is ready for the enterprise: by running real workloads on it in production. Already, there are at least a dozen companies running Istio in production, including several on GCP. We worked with them through early hurdles, incorporated their feedback, and they’re reaping the benefits of Istio already. A great example is Auto Trader UK, which used Istio to help accelerate their move to containers and the public cloud.

“Auto Trader UK is not only migrating from private cloud to public cloud, but also moving from virtual machines to Kubernetes. The level of control and visibility that Istio provides has enabled us to significantly de-risk this ambitious work, and in several cases has actually helped surface issues we were previously unaware of. We’ve been able to accelerate the delivery of capabilities such as mutual TLS that previously would have taken significant engineering effort, allowing us to focus on our market differentiators.”
- Karl Stoney, Delivery Infrastructure Lead, Auto Trader UK

A true joint effort

We first released Istio as open source last year, and what a year it’s been. Since that first 0.1 release, Istio has improved and matured significantly, with eight versions, 200+ contributors, and 4,000+ check-ins adding an ever-growing set of functionality.

Getting to version 1.0 was truly a community-driven effort. IBM was a key collaborator and co-founder, and Lyft’s Envoy proxy is a key component of the project. Since then, the number of companies involved in Istio has skyrocketed; Cisco, Red Hat and VMware have joined, consolidating industry support with the goal of accelerating adoption and meeting the service mesh needs of their customers.

“The growth of Istio since its launch last year has been tremendous, and it’s quickly taking its place as the standard way to manage microservices in the cloud. Our mission since Istio’s launch has been to enable everyone to succeed with microservices, especially in the enterprise. This is why we’ve focused the community around improving security and scale, and heavily leaned our contributions on what we’ve learned from building agile cloud architectures for companies of all sizes.”
- Jason McGee, IBM Fellow and VP, IBM Cloud
"We see Istio's potential to be able to solve some of the most complex aspects of application development and deployment. It brings a control plane for service mesh, cluster orchestration, and network control that will support and enable developers to focus on the more important aspects of their application development. We are looking forward to leveraging Istio in Red Hat OpenShift to enable developers to deploy their applications in a more secure and efficient manner." 
- Brian 'Redbeard' Harrington, product manager, Istio, Red Hat
“VMware has been an integral part of the community developing Istio service mesh. We see great potential in Istio’s service-based approach to connectivity, security, and observability. We believe it will become an infrastructure cornerstone, spanning across vSphere and Kubernetes platforms and multiple private and public clouds, and helping our enterprise customers improve development efficiencies and deliver on their SLAs / SLOs in a secure manner. Istio’s application layer complements the network virtualization layer, and together allow enterprises to achieve defense in depth, improve performance and scalability, and speed time to application value.” 
- Pere Monclus, CTO Network and Security, VMware

We’re also thrilled with the number of companies writing adapters for Istio—from observability software by SolarWinds and Datadog, to deployment tools from Weaveworks and CodeFresh, to policy and security offerings from Aspenmesh and Octarine. While Istio is transparent to application developers, it provides a standard integration interface for anyone writing observability tools or policy engines.

Working and integrating with other open source projects in the community drives our success, as well. Integrations with SPIFFE, the Open Policy Agent and OpenTracing all improve the state of open source and the lives of developers.

Istio on GCP

While the open-source Istio project is a major undertaking, we’re also intent on making it especially easy to use on Google Cloud Platform. Last week at Google Cloud Next we announced the alpha release of Managed Istio: open-source Istio that’s automatically installed and upgraded on your Kubernetes Engine clusters as a part of the Cloud Services Platform. Managed Istio will help provide the visibility, security and control you need over services running in hybrid environments, and it integrates with other Google products like Stackdriver and Apigee.

Achieving 1.0 is just a first step, both for the project and for us at Google Cloud. We have ambitious plans for adding features and improving Istio’s usability, with the ultimate goal of delivering a complete set of tools to manage all of your services, so that you can focus on writing software and running a business.

To find out more about Istio and how to get started using it on GCP, please visit cloud.google.com/istio.

Cloud Services Platform: bringing the best of the cloud to you



In the decade since cloud computing became mainstream, it’s captured the hearts and minds of developers and enterprises everywhere. But for most IT organizations, cloud is still but a glimmer of what it could be—or what it should be. Today, we’re excited to share our vision for Cloud Services Platform, an integrated family of cloud services that lets you increase speed and reliability, improve security and governance, and build once to run anywhere, across GCP and on-premises environments. With Cloud Services Platform, we bring the benefits of the cloud to you, no matter where you deploy your IT infrastructure today—or tomorrow.

Cloud Services Platform puts all your IT resources into a consistent development, management and control framework, automating away low-value and insecure tasks across your on-premises and Google Cloud infrastructure. Specifically, we’re announcing:
  • Service mesh: Availability of Istio 1.0 in open source, Managed Istio, and Apigee API Management for Istio
  • Hybrid computing: GKE On-Prem with multi-cluster management
  • Policy enforcement: GKE Policy Management, to take control of Kubernetes workloads
  • Ops tooling: Stackdriver Service Monitoring
  • Serverless computing: GKE Serverless add-on and Knative, an open source serverless framework
  • Developer tools: Cloud Build, a fully managed CI/CD platform
The Cloud Services Platform family

“We needed a consistent platform to deploy and manage containers on-premise and in the cloud. As Kubernetes has become the industry standard, it was natural for us to adopt Kubernetes Engine on GCP to reduce the risk and cost of our deployments.”
- Dinesh KESWANI, Global Chief Technology Officer at HSBC
Cloud Services Platform is technologically and architecturally aligned with the joint hybrid cloud products we've been developing and bringing to market with our partner Cisco. Our joint solution, Cisco Hybrid Cloud Platform for Google Cloud, will be generally available next month and is now certified as consistent with Kubernetes Engine, enabling GCP out of the box.

Today, let’s take a look at aspects of the Cloud Services Platform, and how it lays a foundation for a fully realized cloud infrastructure.

Modernizing application architecture with Istio

Last year, we took a step toward helping organizations move from reactive IT management to proactive service operations—the idea of managing at a higher layer of the stack, enabling greater application awareness and control. In collaboration with several industry partners, we announced Istio, an open-source service mesh that gives operators the controls they need to manage microservices at scale. We are excited to say that open-source Istio will move to version 1.0 shortly, making it ready for production deployments.

Building on that open-source foundation, we are announcing a managed Istio service that you can use to manage services within a Kubernetes Engine cluster. Managed Istio, in alpha, is an Istio-powered service mesh available in Kubernetes Engine, complete with enterprise support. Managed Istio accelerates your journey to service operations with three high-level capabilities:
  • Service discovery and intelligent traffic management—Managed Istio surfaces all the services running in your cluster and manages network traffic between them. Using application-level load balancing and sophisticated traffic routing for container and VM workloads, it also provides health checks, plus canary and blue/green deployments, enabling fault-tolerant applications with circuit breaking and timeouts.
  • Secure, authenticated communications—Managed Istio offers segmentation and granular policy for endpoints, compliance and detecting anomalous behavior, and traffic encryption by default using mTLS.
  • Monitoring and management—Understand and troubleshoot the system of services running across Managed Istio, including integration with Stackdriver, our suite of monitoring and management tools.
It's still early days, but we are very excited about Istio and Managed Istio, foundational technologies that will drive the use of containers and microservices, while helping to make your environment much more manageable, scalable and available.

Enterprise-grade Kubernetes, wherever you go

A great path to well-managed applications is undoubtedly containers and microservices, and having a common Kubernetes management layer can help get you there that much faster. Four years ago, we released Kubernetes, and the resulting Kubernetes Engine managed service is battle-tested and growing by leaps and bounds: In 2017 Kubernetes Engine core-hours grew 9X year over year.

Today, we are excited to bring that same managed Kubernetes Engine experience to your on-premises infrastructure. GKE On-Prem, soon to be in alpha, is Google-configured Kubernetes that you can deploy in the environment of your choice. GKE On-Prem makes it easy to install and upgrade Kubernetes and provides access to the following capabilities across GCP and on-premises:
  • Unified multi-cluster registration and upgrade management
  • Centralized monitoring and logging with Stackdriver integration
  • Hybrid Identity and Access Management
  • GCP Marketplace for Kubernetes applications
  • Unified cluster management for GCP and on-premises
  • Professional services and enterprise-grade support
Now, with GKE On-Prem, you can begin to modernize existing applications on-premises, without necessarily moving to the cloud. You gain control of your journey to the cloud at your own pace.

Automatically take control of your Kubernetes workloads

When it comes to managing clusters at scale, it’s imperative to have the right security controls in place and ensure your policies can be easily managed and enforced. Today, we’re pleased to announce GKE Policy Management, which delivers centralized capabilities that make it far easier for administrators to configure Kubernetes (wherever it may be running).

With GKE Policy Management, Kubernetes administrators create a single source of truth for their policies that automatically syncs with any enrolled cluster. GKE Policy Management supports policies stored as definitions in a repository, and can also use your existing Google Cloud IAM policies to make it simple to secure your clusters. GKE Policy Management is coming soon to alpha; sign up here to express interest.

A service-centric view of your environment

More than simply making it easier to migrate workloads to the cloud, the technologies found in Cloud Services Platform lay the groundwork for improving service operations, by providing administrators with a service-centric view of their infrastructure, rather than infrastructure views of services. Today, we are announcing Stackdriver Service Monitoring, which provides the following new views:
  • Service graph: A real-time bird’s-eye visualization of the entire environment—see all your microservices, how they communicate, and their dependencies.
  • Service level objective (SLO) monitoring: Monitor and alert in the same customer-centric, low-toil manner as Google Site Reliability Engineers (SRE) do for our own services.
  • Service dashboard: All your signals for a given service are in a single place so that you can debug faster and easier than ever before and lower your mean-time-to-resolution (MTTR).
Stackdriver Service Monitoring is designed for workloads running on opinionated Istio infrastructure, as well as App Engine.

When microservices become APIs

Microservices provide a simple, compelling way for organizations to accelerate moving workloads to the cloud, serving as a path towards a larger cloud strategy. Istio enables service discovery, connection and management for microservices. But as soon as those services are needed for internal groups, partners or developers outside of the enterprise, they quickly cross the line and become APIs.

Just as organizations need services management for microservices, they need API management for their APIs. Apigee API Management complements Istio with the robust features of Google Cloud's Apigee API management platform, Apigee Edge, by extending API management natively into the microservices stack. Apigee Edge features include API usage, access, productization, catalog and discovery, plus a developer portal to create a smooth experience for developers and increase API consumption.

Making cloud all it could be

Here at Google, we could never have done what we do today without containers and Kubernetes, but taking a service-oriented view of our operations has been equally critical. In addition to the core capabilities mentioned above, Cloud Services Platform provides access to other new areas of functionality:
  • GKE serverless add-on lets you run serverless workloads on Kubernetes Engine with a one-step deploy. You can go from source to containers amazingly fast, auto-scale your stateless container-based workloads, and even scale down to zero. Sign up for an early preview for the GKE serverless add-on here.
  • Knative (pronounced kay-nay-tiv), open-source serverless components built from the same technology that enables the GKE serverless add-on. Knative lets you create modern, container-based and cloud-native applications by providing the building blocks you need to build and deploy container-based serverless applications anywhere on Kubernetes.
  • Cloud Build is a fully-managed Continuous Integration/Continuous Delivery (CI/CD) platform that lets you build, test, and deploy software quickly, at scale.
Now, with Cloud Services Platform, we’re excited to bring the full potential of the cloud to you, wherever your workloads may be. For more on Cloud Services Platform, you can read about how it relates to serverless computing.

Bringing the best of serverless to you



Every business wants to innovate—and deliver—great software, faster. In recent years, serverless computing has changed application development, bringing the focus on the application logic instead of infrastructure. With zero server management, auto-scaling to meet any traffic demands, and managed integrated security, developers can move faster, stay agile and focus on what matters most—building great applications.

Google helped pioneer the notion of serverless more than 10 years ago with the introduction of App Engine. Making developers more productive is just as important today as it was then. Over the past few years, we have been working hard to bring the benefits of serverless that we learned from App Engine to our compute, storage, database, messaging services, data analytics, and machine learning offerings.

Today, in tandem with the launch of our Cloud Services Platform, we are sharing several important developments to our serverless compute stack:
  • New App Engine runtimes
  • Cloud Functions general availability, support for additional languages, plus performance, networking and security features
  • Serverless containers on Cloud Functions
  • GKE serverless add-on
  • Knative, Kubernetes-based building blocks for serverless workloads
  • Integration of Cloud Firestore with GCP services

Expanding serverless compute

Today we are announcing support for new second-generation App Engine standard runtimes such as Python 3.7 and PHP 7.2 in addition to recent support for Node.js 8. Second generation runtimes provide developers idiomatic, open-source language runtimes capable of running any framework, library, or binary. Based on gVisor technology, these new runtimes enable faster deployments and increased application performance.
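To give a feel for the new runtimes, here is a minimal sketch of an app targeting the Python 3.7 standard runtime. Because second-generation runtimes run any framework or library, a plain Flask app is enough; an accompanying app.yaml would declare runtime: python37. File and route names are illustrative.

# main.py - minimal app for the Python 3.7 standard runtime (sketch)
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "Hello from the Python 3.7 runtime!"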

Also, Cloud Functions, our event-driven compute service, is generally available starting today, complete with predictable service guaranteed by an SLA, and a global footprint with new regions in Europe and Asia. In addition, we are bolstering Cloud Functions with a range of new and heavily requested features including support for Python 3.7 and Node.js 8, networking and security controls, and performance improvements across the board. Cloud Functions also lets you seamlessly connect and extend more than 20 GCP services such as BigQuery, Cloud Pub/Sub, machine learning APIs, G Suite, Google Assistant and many more.
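To illustrate, a Cloud Function on the new Python 3.7 runtime is just a function that receives a request object. The function name below is illustrative; it could be deployed with something like gcloud functions deploy hello_gcp --runtime python37 --trigger-http.

# A minimal HTTP-triggered Cloud Function for the Python 3.7 runtime (sketch).
def hello_gcp(request):
    """Responds to an HTTP request; `request` is a Flask request object."""
    name = request.args.get("name", "World")
    return "Hello, {}!".format(name)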

Serverless and containers: the best of both worlds

Whether you’re using App Engine or Cloud Functions, Google’s serverless platform offers a complete mix of tools and services. However, many customers tell us they have custom requirements like specific runtimes, custom binaries, or workload portability. More often than not, they turn to containers for an answer. At Google Cloud, we want to bring the best of both serverless and containers together.

Today, we’re also introducing serverless containers, which allow you to run container-based workloads in a fully managed environment and still only pay for what you use. Sign up for an early preview of serverless containers on Cloud Functions to run your own containerized functions on GCP with all the benefits of serverless.

And what if you are already using Kubernetes Engine? A new GKE serverless add-on lets you run serverless workloads on Kubernetes Engine with a one-step deploy. You can go from source to containers instantaneously, auto-scale your stateless container-based workloads, and even scale down to zero. Here’s what T-Mobile had to say about running their serverless workloads on Kubernetes Engine:
"The technology behind the GKE serverless add-on enabled us to focus on just the business logic, as opposed to worrying about overhead tasks such as build/deploy, autoscaling, monitoring and observability."
- Ram Gopinathan, Principal Technology Architect, T-Mobile

With Knative, run your serverless workloads anywhere

While we believe Google Cloud is a great place to run all types of workloads, some customers need to run on-premises or across multiple clouds. Based on this feedback, we’re excited to announce Knative (pronounced kay-nay-tiv), which is an open-source set of components from the same technology that enables the GKE serverless add-on.

Developed in close partnership with Pivotal, IBM, Red Hat, and SAP, Knative pushes Kubernetes-based computing forward by providing the building blocks you need to build and deploy modern, container-based serverless applications.

Knative focuses on the common but challenging parts of running apps, such as orchestrating source-to-container builds, routing and managing traffic during deployment, auto-scaling workloads, and binding services to event ecosystems. Knative provides you with familiar, idiomatic language support and standardized patterns you need to deploy any workload, whether it’s a traditional application, function, or container.

Knative provides reusable implementations of common patterns and codified best practices, shared by successful, real-world Kubernetes-based frameworks and applications. For instance, Knative comes with a build component that provides powerful abstraction and flexible workflow for building, testing, or deploying container images or non-container artifacts on a Kubernetes cluster. By integrating Knative into your own platform, you don’t have to choose between the portability and familiarity of containers and the automation and efficiency of serverless computing. And you can enjoy the benefits of Google Cloud’s extensive experience delivering serverless computing whether you run on GCP, on-premises or in any other cloud. Get started today with Knative or join the conversation.

A comprehensive serverless ecosystem

Of course, serverless computing is a non-starter if you can’t easily build and deploy the code, store your data, and manage your applications in production as part of your overall IT environment. At Google Cloud, we’re committed to enabling the comprehensive ecosystem of serverless offerings.

Cloud Build, for instance, lets you create a continuous integration and delivery (CI/CD) pipeline for your serverless applications. You can define custom workflows for building, testing, and deploying across multiple serverless environments such as Cloud Functions, App Engine and even Knative.

Cloud Firestore, one of the most recent additions to our serverless stack, lets you store and sync your app data at global scale. Soon, app developers will be able to easily access Cloud Firestore within the GCP Console, and it will also be compatible with Cloud Datastore.

Finally, our Stackdriver suite has four core capabilities—monitoring, logging, application performance management (APM) and the newly released Service Monitoring—and lets you operate and rapidly diagnose your serverless applications in production.

Toward ubiquitous serverless computing

We’re firm believers in finding ways to simplify operations and bring solutions to market faster. Last week’s launch of commercial Kubernetes applications in GCP Marketplace demonstrates how third-party solutions providers are adopting new technologies rapidly to support enterprise demand for extensible solutions. Now, with these new offerings, we’ll help more developers adopt serverless computing in the languages and platforms of their choice.

Click here to learn about the full breadth of Google Cloud serverless technologies.

Kubernetes wins OSCON Most Impact Award



Today at the Open Source Awards at OSCON 2018, Kubernetes won the inaugural Most Impact Award, which recognizes a project that has had a ‘significant impact on how software is being written and applications are built’ in the past year. Thank you O’Reilly OSCON for the recognition, and more importantly, thank you to the vast Kubernetes community that has driven the project to where it is today.

When we released Kubernetes just four years ago, we never quite imagined how successful the project would be. We designed Kubernetes from a decade of experience running production workloads at Google, but we didn’t know whether the outside world would adopt it. However, we believed that if we remained open to new ideas and new voices, the community would provide feedback and contributions to move the project forward to meet the needs of users everywhere.

This openness led to Kubernetes’ rapid adoption—and it’s also one of the core pillars of Google Cloud: our belief in an open cloud, so that you can pick up and move your app wherever you want. Whether it’s TensorFlow, an open-source library for machine learning, Asylo, a framework for confidential computing, or Istio, an open platform to connect microservices, openness remains a core value here at Google Cloud.

To everyone who has helped make Kubernetes the success it is today, many thanks again.

If you haven’t tried Kubernetes, it’s easy to get started with Google Kubernetes Engine. If you’re interested in learning more about Kubernetes and the ecosystem it spawned, subscribe to the Kubernetes Podcast from Google to hear weekly insights from leaders in the community.

Introducing commercial Kubernetes applications in GCP Marketplace



Building, deploying and managing applications with Kubernetes comes with its own set of unique challenges. Today, we are excited to be the first major cloud provider to offer production-ready commercial Kubernetes apps right from our marketplace, bringing you simplified deployment, billing, and third-party licensing.

Now you can find the solution you need in Google Cloud Platform Marketplace (formerly Cloud Launcher) and deploy quickly on Kubernetes clusters running on Google Cloud Platform (GCP), Kubernetes Engine, on-prem, or even other public clouds.

Enterprise-ready containerized applications - We are on a mission to make containers accessible to everyone, especially the enterprise. When we released Kubernetes as open source, one of the first challenges that the industry tackled was management. Our hosted Kubernetes Engine takes care of cluster orchestration and management, but getting apps running on a Kubernetes cluster can still be a manual, time-consuming process. With GCP Marketplace, you can now easily find prepackaged apps and deploy them onto the cluster of your choice.

Simplified deployments - Kubernetes apps are configured to get up and running fast. Enjoy click-to-deploy to Kubernetes Engine, or deploy them to other Kubernetes clusters off-GCP. Now, deploying from Kubernetes Engine is even easier, with a Marketplace window directly in the Kubernetes Engine console.

Production-ready security and reliability - All Kubernetes apps listed on GCP Marketplace are tested and vetted by Google, including vulnerability scanning and partner agreements for maintenance and support. Additionally, we work with open-source Special Interest Groups (SIGs) to create standards for Kubernetes apps, bringing the knowledge of the open-source community to your enterprise.

Supporting hybrid environments - One of the great things about containers is their portability across environments. While Kubernetes Engine makes it easy to click-to-deploy these apps, you can also deploy them in your other Kubernetes clusters—even if they’re on-premises. This lets you use the cloud for development and then move your workloads to your production environment, wherever it may be.

Commercial Kubernetes applications available now

Our commercial Kubernetes apps, developed by third-party partners, support usage-based billing on many parameters (API calls, number of hosts, storage per month), simplifying license usage and giving you more consumption options. Further, the usage charges for your apps are consolidated and billed through GCP, no matter where they are deployed (not including any non-GCP resources they need to run on).


“Cloud deployment and manageability are core to Aerospike's strategy. GCP Marketplace makes it simpler for our customers to buy, deploy and manage Aerospike through Kubernetes Engine with one-click deployment. This provides a seamless experience for customers by allowing them to procure both Aerospike solutions and Kubernetes Engine on a single, unified Google bill and providing them with the flexibility to pay as they go.”
- Bharath Yadla, VP-Product Strategy, EcoSystems, Aerospike

"As an organization focused on supporting enterprises with security for their container-based applications, we are delighted that we can now offer our solutions as commercial Kubernetes application more simply to customers through the GCP Marketplace commercial Kubernetes application option. GCP Marketplace helps us reach GCP customers, and the one-click deployment of our applications to Google Kubernetes Engine makes it easier for enterprises to use our solution. We are also excited about GCP’s commitment to enterprise agility by allowing our solution to be deployed on-premises, letting us reach enterprises where they are today."
- Upesh Patel, VP Business Development, Aqua Security

“Couchbase is excited to see GCP Marketplace continue the legacy of GCP by bringing new technologies to market. We've seen GCP Marketplace as a key part of our strategy in reaching customers, and the new commercial Kubernetes application option differentiates us as innovators for both prospects and customers."
- Matt McDonough, VP of Business Development, Couchbase

"With the support for commercial Kubernetes applications, GCP Marketplace allows us to reach a wider range of customers looking to deploy our graph database both to Google Kubernetes Engine and hybrid environments. We're excited to announce our new offering on GCP Marketplace as a testament to both Neo4j and Google's innovation in integrations to Kubernetes."
- David Allen, Partner Solution Architect, Neo4j

Popular open-source Kubernetes apps available now

In addition to our new commercial offerings, GCP Marketplace already features popular open-source projects that are ready to deploy into Kubernetes. These apps are packaged and maintained by Google Cloud and implement best practices for running on Kubernetes Engine and GCP. Each app includes clustered images and documented upgrade steps, so it’s ready to run in production.

One-stop shopping on GCP Marketplace

As you may have noticed, Google Cloud Launcher has been renamed to GCP Marketplace, a more intuitive name for the place to discover the latest partner and open source solutions. Like Kubernetes apps, we test and vet all solutions available through the GCP Marketplace, which include virtual machines, managed services, data sets, APIs, SaaS, and more. In most instances, we also recommend Marketplace solutions for your projects.
With GCP Marketplace, you can verify that a solution will work for your environment with free trials from select partners. You can also combine those free trials with our $300 sign-up credit. Once you’re up and running, GCP Marketplace supports existing relationships between you and your partners with private pricing. Private pricing is currently available for managed services, and support for more solution types will be rolling out in the coming months.

Get started today

We’re excited to bring support for Kubernetes apps to you and our partners, featuring the extensibility of Kubernetes, commercial solutions, usage-based pricing, and discoverability on the newly revamped GCP Marketplace.
If you are a partner and want to learn more about selling your solution on GCP Marketplace, please visit our sign-up page.

Google Home meets .NET containers using Dialogflow



I use my Google Home all the time to check the weather before leaving home, set alarms and listen to music, but I never considered writing an app for it. What does it take to write an app for the Google Home assistant? And can we make it smarter by leveraging Google Cloud? Those were the questions that my colleague Chris Bacon and I were thinking about when we decided to build a demo for a conference talk.

My initial instinct was that building an app for Google Home would be quite complicated. After all, we’re talking about real humans talking to a device that triggers some service running in the cloud. There are many details to figure out and many things that could potentially go wrong.

Turns out, it is much easier than I thought, and a lot of fun as well. In this post, I want to give you a glimpse of what we built. If you want to set up and run the demo yourself, instructions and code are hosted here on GitHub.

Overview

Our main goal with the app was to showcase Google Cloud .NET libraries in a fun and engaging way while highlighting Google’s unique strengths. After some brainstorming, we decided to build a voice-driven app using Dialogflow where we asked some random questions and let Google Home answer by harnessing the power of the cloud.

In our app, you can ask Google Home to search for images of a city. Once it finds the images, they are displayed on a web frontend. You can select an image and ask more questions such as “Can you describe the image?” or “Does the image contain landmarks?” You can also ask questions about global temperatures such as “What was the hottest temperature in France in 2015?” or about Hacker News, for example “What was the top Hacker News story on May 1, 2018?” A picture is worth a thousand words. Here’s how the app ended up looking at a high level.

The voice command is first captured by the Google Home device and passed to Google Assistant. We use Dialogflow to handle inputs to Google Assistant. Some inputs are handled directly in Dialogflow and some are passed to a predefined external webhook (in this case, an HTTPS endpoint running on Google Cloud).

I should also mention that the app works anywhere Google Assistant is supported, as long as you’re logged in with the same Google account with which you created your Dialogflow app. If you don’t have a Google Home, you can simply use your Google Assistant-enabled phone to interact with the app.

Let’s take a look at the implementation in more detail.

Dialogflow

Dialogflow is a developer platform for building natural and rich conversational experiences. When we started thinking about this implementation, we quickly realized that Dialogflow would be a good starting point for the voice-driven part of the app. There are editions of Dialogflow (standard and enterprise) with different limits and SLAs. For our demo, the standard edition was more than enough.

You start by creating an agent for your app in the Dialogflow console. Within the agent, you create intents. An intent represents a mapping between what a user says and what action should be taken by your app. You don’t have to list all the phrases that can trigger a certain intent. Instead, you provide some training phrases and Dialogflow uses machine learning to learn what to expect. It can also pick up entities from those phrases, such as a city name or a date. If the app requires an entity, Dialogflow makes sure that the user provides it. All these small features greatly simplify the work of creating a conversational app.

Some intents can be handled directly in Dialogflow; simply provide the text response for Dialogflow to say. In our app, you can say “Say hi to everyone,” which Dialogflow handles directly with a simple response.

You can also enable an external endpoint to handle intents via a webhook. When an intent is triggered, Dialogflow passes the request to the defined endpoint. The only requirement is that the endpoint supports HTTPS. This is where the power of cloud comes in. In our app, we hosted an endpoint on Google Cloud to handle more complicated questions about images or global temperatures.
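To make the webhook contract concrete, here is a minimal sketch of a fulfillment endpoint in Python with Flask (the demo’s real endpoint is ASP.NET Core). A Dialogflow (v2) request carries the matched intent and parameters under queryResult, and the fulfillmentText in the response is what the Assistant speaks; the intent and parameter names below are illustrative.

from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/webhook", methods=["POST"])
def webhook():
    body = request.get_json()
    intent = body["queryResult"]["intent"]["displayName"]
    params = body["queryResult"]["parameters"]
    # Route on the intent name, roughly as the demo does with its handler classes.
    if intent == "vision.search":
        reply = "Searching for images of {}".format(params.get("city", "that city"))
    else:
        reply = "Sorry, I can't handle that request yet."
    return jsonify({"fulfillmentText": reply})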

ASP.NET Core on App Engine (Flex)

For the endpoint, we decided to host a containerized ASP.NET Core web app on Google Cloud Platform (GCP). Since it’s a container running on Linux (yes, .NET runs on Linux!), we could have hosted it on Google Kubernetes Engine or App Engine. We decided to go with App Engine, as it provides an HTTPS endpoint by default with minimal hassle. It also gives us versioning, so we can host multiple versions of our endpoint to do A/B testing or easy rollbacks.

The web app serves two purposes. First, it’s the visual frontend to show images or queries (handled by HomeController). Second, it handles webhook calls from Dialogflow for more complicated queries about images or global temperatures (handled by ConversationController).

ConversationController delegates to DialogflowApp to handle the request. DialogflowApp picks up the session id of the request and either creates a new Conversation or finds the existing one. Then, Conversation picks up the intent name and matches that to a subclass of BaseHandler using IntentAttribute at the beginning of handler classes.

Searching for images

When the user says “Search for images of Paris”, that triggers the webhook-enabled “vision.search” intent in Dialogflow. This intent picks up “Paris” as an entity and passes it to the webhook as the search term. The call is then routed to VisionSearchHandler running on App Engine. This class uses the Google Custom Search API to search for images matching the search term. In the end, you see a list of images in the web frontend of the app.
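For reference, here is roughly what that search looks like using the Custom Search JSON API from Python (the demo makes the equivalent call from C#; the API key and search engine ID are placeholders):

from googleapiclient.discovery import build

service = build("customsearch", "v1", developerKey="YOUR_API_KEY")
results = (
    service.cse()
    .list(q="Paris", cx="YOUR_SEARCH_ENGINE_ID", searchType="image", num=5)
    .execute()
)
for item in results.get("items", []):
    print(item["link"])  # image URLs, shown in the web frontend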

Vision API

Once you have a list of images, you can say “Select first picture” to select one. Now it gets interesting. For example, saying something like “Describe the image” triggers VisionDescribeHandler, which makes a call to the Vision API using our Vision API .NET library and gets labels back. We pass these labels back to Dialogflow, which in turn passes them to Google Home to say out loud. You can also say “Does the image contain landmarks?”, which uses the Vision API’s landmark detection feature (handled by VisionLandmarksHandler). Or you can say “Is the image safe?” to check that the image doesn’t contain unsafe content (handled by VisionSafeHandler).
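Here is a sketch of the equivalent Vision API calls using the Python client (the demo uses the .NET library; the image URL is a placeholder):

from google.cloud import vision

client = vision.ImageAnnotatorClient()
image = vision.Image()
image.source.image_uri = "https://example.com/paris.jpg"  # placeholder URL

# "Describe the image": label detection returns descriptions to read aloud.
labels = client.label_detection(image=image).label_annotations
print(", ".join(label.description for label in labels))

# "Does the image contain landmarks?" and "Is the image safe?"
landmarks = client.landmark_detection(image=image).landmark_annotations
safe = client.safe_search_detection(image=image).safe_search_annotation
print([l.description for l in landmarks], safe.adult)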

BigQuery

BigQuery is Google's serverless data warehousing solution. It has many public datasets available for anyone to search and analyze. We decided to use two of those: Hacker News Data and NOAA Global Weather Data.

For example, if you were to say “What was the top Hacker News story on May 1, 2018?”, it would be picked up by the “bigquery.hackernews” intent and eventually routed to BigQueryHackerNewsHandler along with the date entity. This class uses the BigQuery .NET library to run a query against the Hacker News data and pick up the top 10 Hacker News articles from that day.

Similarly, if you say “What was the hottest temperature in France in 2015?”, this triggers BigQueryNoaaextremeHandler to run a query against the global weather data and display the top 10 temperatures and locations for that country in that year in the web frontend.
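As an illustration, here is the sort of Hacker News query the handler runs, expressed with the BigQuery Python client against the public dataset (the demo itself uses the .NET library; the exact SQL is an assumption):

from google.cloud import bigquery

client = bigquery.Client()
query = """
    SELECT title, score
    FROM `bigquery-public-data.hacker_news.full`
    WHERE type = 'story' AND DATE(timestamp) = '2018-05-01'
    ORDER BY score DESC
    LIMIT 10
"""
for row in client.query(query).result():
    print(row.score, row.title)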

All this is done by scanning gigabytes of data in a few seconds and made possible by BigQuery’s massively parallel infrastructure.

Logging and monitoring

This was all fun, but we wanted to make sure that we could maintain our app going forward. Stackdriver is Google Cloud’s logging, monitoring, tracing and debugging tool. Enabling Stackdriver entailed a single API call (UseGoogleDiagnostics in Program) and a slight modification to our Dockerfile. All of a sudden, we got application logs, tracing for all HTTP calls, monitoring and, last but not least, the ability to do live production debugging.

With Stackdriver Debugger, we can point to our code on GitHub and then take snapshots from anywhere in the code. Currently supported languages are Java, Python, Node.js, Go and C# (alpha). A snapshot can be captured on live production code without stopping or delaying the app. The snapshot can also be conditional, and contains local variables and stack traces, which are invaluable for production debugging.

Conclusion

In software development, something that should be easy usually ends up being much more complicated when you get into details. In this case, it was quite the opposite. Dialogflow made the voice recognition and routing of requests in our Google Home app very simple and straightforward. We deployed a containerized ASP.NET Core app on App Engine with a single command, and our Google Cloud .NET libraries for Vision API and BigQuery were straightforward and consistent to use.

In the end, I had a lot of fun writing this demo with Chris! If you want to try this out yourself, the code and instructions are on GitHub.

7 best practices for building containers



Kubernetes Engine is a great place to run your workloads at scale. But before being able to use Kubernetes, you need to containerize your applications. You can run most applications in a Docker container without too much hassle. However, effectively running those containers in production and streamlining the build process is another story. There are a number of things to watch out for that will make your security and operations teams happier. This post provides tips and best practices to help you effectively build containers.

1. Package a single application per container

A container works best when a single application runs inside it. This application should have a single parent process. For example, do not run PHP and MySQL in the same container: it’s harder to debug, Linux signals will not be properly handled, and you can’t horizontally scale the PHP containers. Running a single application per container lets you tie the application’s lifecycle to that of the container.
The container on the left follows the best practice. The container on the right does not.


2. Properly handle PID 1, signal handling, and zombie processes

Kubernetes and Docker send Linux signals to your application inside the container to stop it. They send those signals to the process with the process identifier (PID) 1. If you want your application to stop gracefully when needed, you need to properly handle those signals.

Google Developer Advocate Sandeep Dinesh’s article, “Kubernetes best practices: terminating with grace,” explains the whole Kubernetes termination lifecycle.
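As a minimal illustration, here is how a containerized Python app running as PID 1 might catch SIGTERM and exit cleanly (the loop is a stand-in for real work):

import signal
import sys
import time

def handle_sigterm(signum, frame):
    # Finish in-flight work and release resources here, then exit cleanly.
    print("SIGTERM received, shutting down gracefully")
    sys.exit(0)

signal.signal(signal.SIGTERM, handle_sigterm)

while True:
    time.sleep(1)  # stand-in for the application's main loop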

3. Optimize for the Docker build cache

Docker can cache layers of your images to accelerate later builds. This is a very useful feature, but it introduces some behaviors that you need to take into account when writing your Dockerfiles. For example, you should add the source code of your application as late as possible in your Dockerfile so that the base image and your application’s dependencies get cached and aren’t rebuilt on every build.

Take this Dockerfile as an example:
FROM python:3.5
COPY my_code/ /src
RUN pip install my_requirements
You should swap the last two lines:
FROM python:3.5
RUN pip install my_requirements
COPY my_code/ /src
In the new version, the result of the pip command will be cached and will not be rerun each time the source code changes.

4. Remove unnecessary tools

Reducing the attack surface of your host system is always a good idea, and it’s much easier to do with containers than with traditional systems. Remove everything that the application doesn’t need from your container. Or better yet, include just your application in a distroless or scratch image. You should also, if possible, make the filesystem of the container read-only. This should get you some excellent feedback from your security team during your performance review.

5. Build the smallest image possible

Who likes to download hundreds of megabytes of useless data? Aim to have the smallest images possible. This decreases download times, cold start times, and disk usage. You can use several strategies to achieve that: start with a minimal base image, leverage common layers between images and make use of Docker’s multi-stage build feature.
The Docker multi-stage build process.

Google Developer Advocate Sandeep Dinesh’s article, “Kubernetes best practices: How and why to build small container images,” covers this topic in depth.

6. Properly tag your images

Tags are how users choose which version of your image they want to use. There are two main ways to tag your images: with Semantic Versioning, or with the Git commit hash of your application. Whichever you choose, document it and clearly set the expectations that the users of the image should have. Be careful: while users expect some tags, like the “latest” tag, to move from one image to another, they expect other tags to be immutable, even if they are not technically so. For example, once you have tagged a specific version of your image with something like “1.2.3”, you should never move this tag.

7. Carefully consider whether to use a public image

Using public images can be a great way to start working with a particular piece of software. However, using them in production can come with a set of challenges, especially in a high-constraint environment. You might need to control what’s inside them, or you might not want to depend on an external repository, for example. On the other hand, building your own images for every piece of software you use is not trivial, particularly because you need to keep up with the security updates of the upstream software. Carefully weigh the pros and cons of each for your particular use-case, and make a conscious decision.

Next steps

You can read more about those best practices on Best Practices for Building Containers, and learn more about our Kubernetes Best Practices. You can also try out our Quickstarts for Kubernetes Engine and Container Builder.

Introducing Jib — build Java Docker images better



Containers are bringing Java developers closer than ever to a "write once, run anywhere" workflow, but containerizing a Java application is no simple task: You have to write a Dockerfile, run a Docker daemon as root, wait for builds to complete, and finally push the image to a remote registry. Not all Java developers are container experts; what happened to just building a JAR?

To address this challenge, we're excited to announce Jib, an open-source Java containerizer from Google that lets Java developers build containers using the Java tools they know. Jib is a fast and simple container image builder that handles all the steps of packaging your application into a container image. It does not require you to write a Dockerfile or have Docker installed, and it is directly integrated into Maven and Gradle—just add the plugin to your build and you'll have your Java application containerized in no time.

Docker build flow:

Jib build flow:


How Jib makes development better:


Jib takes advantage of layering in Docker images and integrates with your build system to optimize Java container image builds in the following ways:
  1. Simple - Jib is implemented in Java and runs as part of your Maven or Gradle build. You do not need to maintain a Dockerfile, run a Docker daemon, or even worry about creating a fat JAR with all its dependencies. Since Jib tightly integrates with your Java build, it has access to all the necessary information to package your application. Any variations in your Java build are automatically picked up during subsequent container builds.
  2. Fast - Jib takes advantage of image layering and registry caching to achieve fast, incremental builds. It reads your build config, organizes your application into distinct layers (dependencies, resources, classes) and only rebuilds and pushes the layers that have changed. When iterating quickly on a project, Jib can save valuable time on each build by only pushing your changed layers to the registry instead of your whole application.
  3. Reproducible - Jib supports building container images declaratively from your Maven and Gradle build metadata, and as such can be configured to create reproducible build images as long as your inputs remain the same.

How to use Jib to containerize your application

Jib is available as plugins for Maven and Gradle and requires minimal configuration. Simply add the plugin to your build definition and configure the target image. If you are building to a private registry, make sure to configure Jib with credentials for your registry. The easiest way to do this is to use credential helpers like docker-credential-gcr. Jib also provides additional rules for building an image to a Docker daemon if you need it.

Jib on Maven
<plugin>
  <groupId>com.google.cloud.tools</groupId>
  <artifactId>jib-maven-plugin</artifactId>
  <version>0.9.0</version>
  <configuration>
    <to>
      <image>gcr.io/my-project/image-built-with-jib</image>
    </to>
  </configuration>
</plugin>
# Builds to a container image registry.
$ mvn compile jib:build
# Builds to a Docker daemon.
$ mvn compile jib:dockerBuild
Jib on Gradle
plugins {
  id 'com.google.cloud.tools.jib' version '0.9.0'
}
jib.to.image = 'gcr.io/my-project/image-built-with-jib'
# Builds to a container image registry.
$ gradle jib
# Builds to a Docker daemon.
$ gradle jibDockerBuild

We want everyone to use Jib to simplify and accelerate their Java development. Jib works with most cloud providers; try it out and let us know what you think at github.com/GoogleContainerTools/jib.

Kubernetes 1.11: a look from inside Google



Congratulations to everyone involved in the recent Kubernetes 1.11 release. Now that the core has been stabilized, we here at Google have been focusing our upstream work on increasing Kubernetes’ pluggability, i.e., moving more pieces out into other repositories. As the project has matured, adding a plugin no longer means "sending Tim Hockin a pull request," but instead means creating proper, well-defined interfaces with names like CNI, CRI and CSI. In fact, this maturity and extensibility has been one of the things that helps us make Google Kubernetes Engine an enterprise-ready platform. Back in March, we gave you a look at what was new in Kubernetes 1.10. Now, with the release of 1.11, let’s take a look at the core Kubernetes work that Google is driving, as well as some of the innovation we've built on Kubernetes’ foundations in the last three months.

New features in 1.11

Priority and preemption
Pod priority and preemption is one of the main features of our internal scheduling system that lets us achieve high resource utilization in our data centers. We wrote about that key use case when we introduced it in Alpha in Kubernetes 1.9, and since then, we’ve added improved scheduling performance and better support for critical system pods. Now, we're pleased to move it to Beta in this release, meaning it’s enabled by default in Kubernetes Engine clusters that run 1.11. This is a feature that many users who run larger clusters have been waiting for!
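Here is a sketch of what using the feature looks like with the official Kubernetes Python client: define a PriorityClass, then have pods reference it through priorityClassName (the class name and value below are illustrative):

from kubernetes import client, config

config.load_kube_config()

high_priority = client.V1PriorityClass(
    metadata=client.V1ObjectMeta(name="high-priority"),
    value=1000000,  # higher-value pods schedule first and may preempt lower ones
    global_default=False,
    description="For latency-critical workloads.",
)
client.SchedulingV1Api().create_priority_class(high_priority)
# Pods opt in by setting spec.priorityClassName to "high-priority".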

Changes to CRDs
Custom Resource Definitions (CRDs) are one of the most popular extension mechanisms for Kubernetes, and new features in 1.11 make them even more powerful. CRDs are used for a broad array of Kubernetes extensions, for example to enable the use of Spark or Functions natively through the Kubernetes API.

Kubernetes objects have a schema version (e.g. v1beta1 or v1), but we only ever store one version in the etcd database. When you query an object at a particular version, a server-side conversion is done to convert the object to match the schema of the version you request.

Previously, CRD authors had to delete and recreate resources to move them between different versions. In 1.11, you can now define multiple versions for your own resources. The next step will be to enable server-side conversion for CRD, to allow for schema changes like renaming fields, without breaking existing clients.
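For example, a CRD can now declare several versions, with exactly one marked as the storage version. Here is a sketch using the Kubernetes Python client and today’s apiextensions API (the group, kind and version names are illustrative):

from kubernetes import client, config

config.load_kube_config()

schema = client.V1CustomResourceValidation(
    open_apiv3_schema=client.V1JSONSchemaProps(
        type="object", x_kubernetes_preserve_unknown_fields=True
    )
)
crd = client.V1CustomResourceDefinition(
    metadata=client.V1ObjectMeta(name="crontabs.example.com"),
    spec=client.V1CustomResourceDefinitionSpec(
        group="example.com",
        scope="Namespaced",
        names=client.V1CustomResourceDefinitionNames(
            plural="crontabs", singular="crontab", kind="CronTab"
        ),
        versions=[
            # Both versions are served; only v1beta1 is stored in etcd.
            client.V1CustomResourceDefinitionVersion(
                name="v1alpha1", served=True, storage=False, schema=schema
            ),
            client.V1CustomResourceDefinitionVersion(
                name="v1beta1", served=True, storage=True, schema=schema
            ),
        ],
    ),
)
client.ApiextensionsV1Api().create_custom_resource_definition(crd)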

Cloud Provider plugins
Google continues to invest in the long-term sustainability and multi-cloud portability of core Kubernetes. The Cloud Provider interface allows infrastructure providers to deliver a "batteries-included" experience for user workloads on their platform, powering common services like dynamic provisioning and management of storage and external load balancing for Services.

This code is currently compiled into Kubernetes core binaries. Google is leading a long-running effort to extract this functionality into provider-specific repositories, in order to reduce the scope of the Kubernetes core. This will also allow providers to deliver enhancements and fixes to users more quickly than Kubernetes’ three-month release cadence. As part of this work, we’re excited to announce the creation of SIG-Cloud Provider to provide technical oversight and governance for this effort.

New features not in 1.11

That's not a headline you normally see, right?

One thing that is not in 1.11 — not even a bit of it — is Server-side Apply, a feature that moves the logic for kubectl apply from the client to the server, making the expected behavior clearer and allowing more clients to take advantage of server-side processing without shelling out to kubectl.

Normally, a feature like this would be committed to the project as it was built. But if a release is due and the feature isn't ready, a large amount of effort would be required to revert it. Instead, Google has been leading the effort to introduce feature branches in Kubernetes, which let us work on long-running features in parallel to the main codebase. This lets us avoid last-minute scrambles to adjust for surprises, and is an example of how we are working to ensure the stability of the Kubernetes project.

Work on server-side apply is happening in the open in its feature branch, and we look forward to welcoming it into Kubernetes when it's ready — and not a moment before.

Kubernetes ecosystem work
Our work with Kubernetes doesn't stop at releasing core binaries every three months. Some of the work we are most excited about is in the form of extensions we've released since the last Kubernetes release:

Kustomize
We've thought a lot about how to declaratively manage application configuration. A common pattern that we saw was the use of templating solutions such as Helm (based on Google Cloud's Deployment Manager), which requires a user to learn a different configuration language than what the API server returns when you query it. A templating approach also means that if you download a YAML example, you have to turn it into a template before you can use it in your environment.

With kustomize, we're introducing a new approach to application definition. Kustomize lets you apply overlays to existing YAML configurations, so you can customize a forked repository with your local changes, or define 'staging' and 'production' variants with different settings and replica counts.

Kustomize is well suited for a GitOps-style workflow, where there's a common base configuration that is tweaked in various directions with overlays to create different variants. The base and overlays can be managed by separate teams in different repositories.

Application API
Applications are made up of many services and resources, but the whole is more than the sum of its parts. Once they are created, there is no well-defined way to tell Kubernetes which parts make up an application. We want cluster users to be able to think in terms of their applications, and to allow tools and UIs to define, update and display an application-centric view of your cluster.

The new Application API provides a way to aggregate Kubernetes components (e.g. Services, Deployments, StatefulSets, Ingresses, CRDs), and manage them as a group.

We have had contributions from friends at Samsung, Bitnami, Heptio, Red Hat and more, and we are looking for more contributions and feedback to ensure that the project adds value across the community.

The Application API is currently in Alpha. We hope to promote it to Beta in the next few weeks, and you'll hear more about it from us then.

Looking forward to Kubernetes Engine

If you'd like to get access to Kubernetes 1.11 on Kubernetes Engine ahead of general availability, please complete this form.

And if you liked reading this post, you'll love the Kubernetes Podcast from Google, which I co-host with Adam Glick. Every Tuesday we take a look at the week’s news and talk with Googlers or members of the wider Kubernetes community. So far we've spoken about product launches, processes and community, and this week we talk to the Kubernetes 1.11 release leads. Subscribe now!