Tag Archives: Announcements

Wrapping up Google Code-in 2017

Today marks the conclusion of the 8th annual Google Code-in (GCI), our contest that introduces teenage students to open source software development through contributions to real projects. As with most years, the contest evolved a bit and grew. And it grew. And it doubled. And then it grew some more...
Mentors from each of the 25 participating open source organizations are now busy reviewing the last of the work submitted by student participants, so the final numbers may still grow. We’re looking forward to sharing the stats once the reviews wrap up.

Each organization will pick two Grand Prize Winners who will be flown to Northern California to visit Google’s headquarters, enjoy a day of adventure in San Francisco, and meet their mentors and Google engineers.

We’d like to congratulate all of the student participants for challenging themselves and making a contribution to open source in the process! We’d also like to congratulate the mentors for surviving the unusually busy contest.

Further, we’d like to thank the mentors and the organization administrators. They are the heart of this program, volunteering countless hours creating tasks, reviewing student work, and helping students into the world of open source. Mentors teach young students about the many facets of open source development, from community standards and communicating across time zones to version control and testing. We couldn’t run this program without you!

Stay tuned: we’ll be announcing the Grand Prize Winners and Finalists on January 31st.

By Josh Simmons, Google Open Source

Get the latest Kubernetes version 1.9 on Google’s managed offering



We're excited to announce that Kubernetes version 1.9 will be available on Google Kubernetes Engine next week in our early access program. This release includes greater support for stateful and stateless applications, hardware accelerator support for machine learning workloads, and storage enhancements. Overall, this release marks a big milestone in making it easy to run a wide variety of production-ready applications on Kubernetes without having to worry about the underlying infrastructure. Google is the leading contributor to open source Kubernetes releases, and now you can access the latest release on our fully managed Kubernetes Engine and let us take care of managing, scaling, upgrading, backing up and helping to secure your clusters. Further, we recently simplified our pricing by removing the fee for cluster management, resulting in real dollar savings for your environment.

We're committed to providing the latest technological innovation to Kubernetes users with one new release every quarter. Let’s take a closer look at the key enhancements in Kubernetes 1.9.

Workloads APIs move to GA


The core Workloads APIs (DaemonSet, Deployment, ReplicaSet and StatefulSet), which let you run stateful and stateless workloads, move to general availability (GA) in Kubernetes 1.9, delivering production-grade quality, support and long-term backwards compatibility.
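
In practice, this means these objects can now be written against the stable apps/v1 API group. Here’s a minimal sketch of a GA-style Deployment (the name and sample image are illustrative):

cat <<EOF | kubectl apply -f -
apiVersion: apps/v1    # the Workloads API group that reached GA in 1.9
kind: Deployment
metadata:
  name: hello-web      # illustrative name
spec:
  replicas: 2
  selector:            # apps/v1 requires an explicit selector
    matchLabels:
      app: hello-web
  template:
    metadata:
      labels:
        app: hello-web
    spec:
      containers:
      - name: hello-web
        image: gcr.io/google-samples/hello-app:1.0   # public sample image
        ports:
        - containerPort: 8080
EOF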

Hardware accelerator enhancements


Google Cloud Platform (GCP) provides a great environment for running machine learning and data analytics workloads in containers. With this release, we’ve improved support for hardware accelerators such as NVIDIA Tesla P100 and K80 GPUs. Compute-intensive workloads will benefit greatly from cost-effective, high-performance GPUs for many use cases, ranging from genomics and computational finance to recommendation systems and simulations.
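
For example, you might attach GPUs to a dedicated node pool and have pods request them through the standard Kubernetes resource name. A sketch, assuming the beta gcloud surface of the time (cluster, pool and zone names are illustrative):

# Create a node pool whose nodes each carry one NVIDIA Tesla K80.
gcloud beta container node-pools create gpu-pool \
  --cluster my-cluster --zone us-central1-a \
  --accelerator type=nvidia-tesla-k80,count=1

# Pods then request GPUs via the standard resource name in their spec:
#   resources:
#     limits:
#       nvidia.com/gpu: 1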

Local storage enhancements for stateful applications


Improvements to the Kubernetes scheduler in this release make it easier to use local storage in Kubernetes. The local persistent storage feature (alpha) enables access to local SSD on GCP through Kubernetes’ standard Persistent Volume Claim (PVC) interface in a simple and portable way. This allows you to take an existing Helm chart, or a StatefulSet spec using remote PVCs, and switch to local storage just by changing the StorageClass name. Local SSD offers superior performance, including high input/output operations per second (IOPS) and low latency, and is ideal for high-performance stateful workloads such as distributed databases and distributed file systems.
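
To make the StorageClass swap concrete, here’s a hedged sketch of a standalone PVC; in a StatefulSet’s volumeClaimTemplates the change is the same single line (the class name is illustrative, and the feature was alpha at the time):

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-mydb-0             # illustrative name
spec:
  accessModes: [ "ReadWriteOnce" ]
  storageClassName: local-ssd   # was e.g. "standard"; the only line that changes
  resources:
    requests:
      storage: 375Gi            # one GCP local SSD partition is 375 GB
EOF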

Storage interoperability through CSI


This Kubernetes release introduces an alpha implementation of the Container Storage Interface (CSI). We've been working with the Kubernetes community to provide a single, consistent interface for different storage providers. CSI makes it easy to add storage volume plugins to Kubernetes without requiring changes to the core codebase. CSI underscores our commitment to being open, flexible and collaborative while providing maximum value—and options—to our users.

Try it now!


In a few days, you can access the latest Kubernetes Engine release in your alpha clusters by joining our early access program.
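
Once the release reaches your project, spinning up an alpha cluster might look something like this (a sketch; the exact version string depends on what Kubernetes Engine exposes at the time):

# Alpha clusters opt in to the newest Kubernetes features (illustrative version).
gcloud container clusters create my-alpha-cluster \
  --enable-kubernetes-alpha \
  --cluster-version 1.9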

Talk shop with Google Open Source

Hello world! The Google Open Source team is ringing in the new year by launching accounts on Twitter, Facebook, and Google+ to engage more with the community and keep folks up to date.
Free and open source software (FOSS) is fundamental to computing, the internet, and Google. Since 2004, Google Open Source has helped Googlers get code in and out of Google and supported FOSS through student programs and financial support. One thing is clear after 14 years: FOSS is all about community.

We’re part of that community, seeing people at events, on mailing lists, and in the trenches of code repositories. And few things are more enjoyable and productive than talking with people in the community…

… so we thought we’d start doing more of that.

We hope you’ll come along and let us know what you think. You’ll find us at @GoogleOSS and +GoogleOpenSource, as well as on Facebook and YouTube.

By Josh Simmons, Google Open Source


Simplify Cloud VPC firewall management with service accounts



Firewalls provide the first line of network defense for any infrastructure. On Google Cloud Platform (GCP), Google Cloud VPC firewalls do just that—controlling network access to and between all the instances in your VPC. Firewall rules determine who's allowed to talk to whom and, more importantly, who isn’t. Today, configuring and maintaining IP-based firewall rules is a complex and manual process that can lead to unauthorized access if done incorrectly. That’s why we’re excited to announce a powerful new feature for Cloud VPC firewall management: support for service accounts.

If you run a complex application on GCP, you’re probably already familiar with service accounts in Cloud Identity and Access Management (IAM) that provide an identity to applications running on virtual machine instances. Service accounts simplify the application management lifecycle by providing mechanisms to manage authentication and authorization of applications. They provide a flexible yet secure mechanism to group virtual machine instances with similar applications and functions with a common identity. Security and access control can subsequently be enforced at the service account level.


Using service accounts, when a cloud-based application scales up or down, new VMs are automatically created from an instance template and assigned the correct service account identity. This way, when a VM boots up, it gets the right set of permissions within the relevant subnet, and the appropriate firewall rules are configured and applied automatically.
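
A sketch of what that looks like with gcloud (the template, project and service account names are illustrative, and the service account must already exist):

# Bake the service account identity into the instance template, so every
# VM created from it boots with the same identity and firewall posture.
gcloud compute instance-templates create app-x-template \
  --service-account app-x@my-project.iam.gserviceaccount.com \
  --scopes cloud-platform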

Further, the ability to use Cloud IAM ACLs with service accounts allows application managers to express their firewall rules in the form of intent, for example, allow my “application x” servers to access my “database y.” This removes the need to manually manage lists of server IP addresses, while simultaneously reducing the likelihood of human error.
This approach is far simpler and more manageable than maintaining IP address-based firewall rules, which are difficult to automate or template for transient VMs.
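
For instance, the “application x” to “database y” intent above might be expressed like this (network, port and service account names are illustrative):

# Allow VMs running as app-x to reach VMs running as db-y on port 3306.
gcloud compute firewall-rules create app-x-to-db-y \
  --network my-vpc \
  --allow tcp:3306 \
  --source-service-accounts app-x@my-project.iam.gserviceaccount.com \
  --target-service-accounts db-y@my-project.iam.gserviceaccount.com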

Here at Google Cloud, we want you to deploy applications with the right access controls and permissions, right out of the gate. Click here to learn how to enable service accounts. And to learn more about Cloud IAM and service accounts, visit our documentation for using service accounts with firewalls.

Seeking open source projects for Google Summer of Code 2018

Do you lead or represent a free or open source software organization? Are you seeking new contributors? (Who isn’t?) Do you enjoy the challenge and reward of mentoring new developers? Apply to be a mentor organization for Google Summer of Code 2018!

We are seeking open source projects and organizations to participate in the 14th annual Google Summer of Code (GSoC). GSoC is a global program that gets student developers contributing to open source. Each student spends three months working on a project, with the support of volunteer mentors, for participating open source organizations.

Last year 1,318 students worked with 198 open source organizations. Organizations include individual projects as well as umbrella organizations that serve as fiscal sponsors, such as the Apache Software Foundation or the Python Software Foundation.

You can apply starting today. The deadline to apply is January 23 at 16:00 UTC. Organizations chosen for GSoC 2018 will be posted on February 12.

Please visit the program site for more information on how to apply, a detailed timeline of important deadlines and general program information. We also encourage you to check out the Mentor Guide and join the discussion group.

Best of luck to all of the applicants!

By Josh Simmons, Google Open Source

With Google Kubernetes Engine regional clusters, master nodes are now highly available



We introduced highly available masters for Google Kubernetes Engine earlier this fall with our alpha launch of regional clusters. Today, regional clusters are in beta and ready to use at scale in Kubernetes Engine.

Regional clusters allow you to create a Kubernetes Engine cluster with a multi-master, highly available control plane that helps ensure higher cluster uptime. With regional clusters in Kubernetes Engine, you gain:
  • Resilience from single-zone failure - Because your masters and nodes are available across a region rather than a single zone, your Kubernetes cluster is still fully functional if a zone goes down.
  • No downtime during master upgrades - Kubernetes Engine minimizes downtime during all Kubernetes master upgrades, but with a single master, some downtime is inevitable. By using regional clusters, the control plane remains online and available, even during upgrades.

How regional clusters work


When you create a regional cluster, Kubernetes Engine spreads your masters and nodes across three zones in a region, ensuring that you can experience a zonal failure and still remain online.

By default, Kubernetes Engine creates three nodes in each zone (giving you nine total nodes), but you can change the number of nodes in your cluster with the --num-nodes flag.
Creating a Kubernetes Engine regional cluster is simple. Let’s create a regional cluster with two nodes in each zone.

$ gcloud beta container clusters create my-regional-cluster --region=us-central1 --num-nodes=2

Or you can use the Cloud Console to create a regional cluster.

For a more detailed explanation of the regional clusters feature, along with additional flags you can use, check out the documentation.

Kubernetes Engine regional clusters are offered at no additional charge during the beta period. We will announce pricing as part of general availability. Until then, please send any feedback to gke-regional-clusters-feedback@google.com.


Meet the Kubernetes Engine team at #KubeCon


This week the Kubernetes community gathers in Austin for the annual #KubeCon conference. The Google Cloud team will host various activities throughout the week. Join us for parties, workshops, and more than a dozen talks by experts. More info and ways to RSVP at g.co/kubecon.

Manage Google Kubernetes Engine from Cloud Console dashboard, now generally available



There are two main ways to manage Google Kubernetes Engine: the kubectl command line interface and Cloud Console, a web-based dashboard. Cloud Console for Kubernetes Engine is now generally available, and includes several new and exciting features to help you understand the state of your app, troubleshoot it and perform fixes.

Troubleshoot your app


To walk you through these new features, we’d like to introduce you to Alice, a DevOps admin running her environment on Kubernetes Engine. Alice logs into Cloud Console to see the status of her apps. She starts by looking at the unified Workloads view where she can inspect all her apps, no matter which cluster they run on. This is especially handy for Alice, as her team has different clusters for different environments. In this example she spots an issue with one of the frontends – its status is showing up red.

By clicking on the name of the workload, Alice sees a detailed view where she can start debugging. Here she sees graphs for CPU, memory and disk utilization and spots a sudden spike in the resource usage.

Before she starts to investigate the root cause, Alice decides to turn on Horizontal Pod Autoscaling to mitigate the application outage. The autoscale action is available from the menu at the top of the Cloud Console page. She increases the maximum number of replicas to 15 and enables autoscaling.
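
For reference, the equivalent kubectl command might look like this sketch (the deployment name, minimum replica count and CPU target are illustrative):

# Scale the frontend deployment between 3 and 15 replicas at 80% target CPU.
kubectl autoscale deployment frontend --min=3 --max=15 --cpu-percent=80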

Now that the service is scaling up and can handle user traffic again, Alice decides to investigate the root cause of the increased CPU usage. She starts by investigating one of the pods and sees that it has high CPU usage. To look into this further, she opens the Logs tab to browse the recent logs for the offending pod.
The logs indicate that the problem is with the frontend’s HTTP server. With this insight, Alice decides to connect to the running pod to debug it further. She opens Cloud Shell directly from Cloud Console and attaches to the selected pod. Alice doesn’t need to worry about remembering the exact commands, finding the right credentials or setting the kubectl context—the correct command is fully populated when Cloud Shell loads.

By running the Linux "top" command, Alice can see that the HTTP server process is the culprit behind the spiking CPU. She can now investigate the code, find the bug and fix it using her favorite tools. Once the new code is ready, Alice comes back to the UI to perform a rolling update. Again, she finds the rolling update action at the top of the UI and updates the image version. Cloud Console then performs the rolling update, displays its progress and highlights any problems that might occur during the update.
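
From the command line, the same debugging and rollout steps might look roughly like this (the pod, container and image names are illustrative):

# Attach to the suspect pod and inspect running processes with top.
kubectl exec -it frontend-5d6f7c9b8-abcde -- top

# Roll out the fixed image; Kubernetes replaces pods gradually and
# reports progress, mirroring what Cloud Console shows.
kubectl set image deployment/frontend frontend=gcr.io/my-project/frontend:v2
kubectl rollout status deployment/frontend
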
Alice now inspects resource usage charts, status and logs for the frontend deployment to verify that it is working correctly. She can also perform the same rolling update action on a similar frontend deployment on a different cluster, without having to context-switch and provide new credentials.

Kubernetes Engine Cloud Console comes with other features to assist Kubernetes administrators with their daily routines. For example, it includes a YAML editor for modifying Kubernetes objects, and service visualizations that aggregate a service’s related resources, such as pods and load balancers. You can learn more about these features in the Kubernetes Engine dashboards documentation.

Manage Kubernetes Engine clusters


Kubernetes Engine’s new Cloud Console experience also offers improvements for cluster administrators. Bob works in the same company as Alice and is responsible for administering the cluster where the frontend app lives.

While investigating the list of nodes in the cluster, Bob notices that all the nodes are running close to full utilization and that there's not enough capacity left in the cluster to schedule other workloads. He clicks on one of the nodes to investigate what’s happening with the pods scheduled there. He quickly realizes that, because Alice turned on the Horizontal Pod Autoscaler, there are now multiple replicas of the frontend pods taking up all the space in the cluster.

Bob decides to edit the cluster right from Cloud Console and turn on cluster autoscaling. After a couple of minutes, the cluster scales up and everything starts working again.
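
The same change can be made from the command line; a sketch with illustrative names and bounds:

# Enable the cluster autoscaler on the default node pool.
gcloud container clusters update my-cluster \
  --zone us-central1-a \
  --enable-autoscaling --min-nodes 3 --max-nodes 10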

These are just some of the things that you can do from the Kubernetes Engine Cloud Console dashboard. To get started, simply log in to Cloud Console and click on the Kubernetes Engine tab. Let us know how you like it by clicking on the feedback button in the upper right-hand corner of the UI.


Introducing an easy way to deploy containers on Google Compute Engine virtual machines



Containers are a popular way to deploy software thanks to their small size, low resource requirements, dependency isolation and portability. Today, we’re introducing an easy way to deploy and run containers on Google Compute Engine virtual machines and managed instance groups. This feature, which is currently in beta, allows you to take advantage of container deployment consistency while staying in your familiar IaaS environment.

Now you can easily deploy containers wherever you may need them on Google Cloud: Google Kubernetes Engine for multi-workload, microservice-friendly container orchestration; Google App Engine flexible environment, a fully managed application platform; and now Compute Engine for VM-level container deployment.

Running containers on Compute Engine instances is handy in a number of scenarios: when you need to optimize a CI/CD pipeline for applications running on VMs, fine-tune the VM shape and infrastructure configuration for a specialized workload, integrate a containerized application into your existing IaaS infrastructure, or launch a one-off instance of an application.

To run your container on a VM instance or a managed instance group, simply provide an image name and specify your container runtime options when creating a VM or an instance template. Compute Engine takes care of the rest, including supplying an up-to-date Container-Optimized OS image with Docker and starting the container with your runtime options when the VM boots.

You can now easily use containers without having to write startup scripts or learn about container orchestration tools, and you can migrate to full container orchestration with Kubernetes Engine when you’re ready. Better yet, standard Compute Engine pricing applies: VM instances running containers cost the same as regular VMs.

How to deploy a container to a VM


To see the new container deployment method in action, let’s deploy an NGINX HTTP server to a virtual machine. To do this, you only need to configure three settings when creating a new instance:
  • Check Deploy a container image to this VM instance.
  • Provide Container image name. 
  • Check Allow HTTP traffic so that the VM instance can receive HTTP requests on port 80. 
Here's how the flow looks in Google Cloud Console.

Run a container from the gcloud command line

You can run a container on a VM instance with just one gcloud command:

gcloud beta compute instances create-with-container nginx-vm \
  --container-image gcr.io/cloud-marketplace/google/nginx1:1.12 \
  --tags http-server

Then, create a firewall rule to allow HTTP traffic to the VM instance so that you can see the NGINX welcome page:

gcloud compute firewall-rules create allow-http \
  --allow=tcp:80 --target-tags=http-server

Updating the container is just as easy:

gcloud beta compute instances update-container nginx-vm \
  --container-image gcr.io/cloud-marketplace/google/nginx1:1.13

Run a container on a managed instance group

With managed instance groups, you can take advantage of VM-level features like autoscaling, automatic recreation of unhealthy virtual machines, rolling updates, multi-zone deployments and load balancing. Running containers on managed instance groups is just as easy as on individual VMs and takes only two steps: (1) create an instance template and (2) create a group.

Let’s deploy the same NGINX server to a managed instance group of three virtual machines.

Step 1: Create an instance template with a container.

gcloud beta compute instance-templates create-with-container nginx-it \
  --container-image gcr.io/cloud-marketplace/google/nginx1:1.12 \
  --tags http-server

The http-server tag allows HTTP connections to port 80 of the VMs created from the instance template. Make sure to keep the firewall rule from the previous example.

Step 2: Create a managed instance group.

gcloud compute instance-groups managed create nginx-mig \
  --template nginx-it \
  --size 3

The group will have three VM instances, each running the NGINX container.
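
To verify, you can list the instances the group created (a sketch; add a --zone or --region flag if your gcloud configuration doesn’t set a default):

# List the three VMs created by the managed instance group.
gcloud compute instance-groups managed list-instances nginx-mig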

Get started!

Interested in deploying containers on Compute Engine VM instances or managed instance groups? Take a look at the detailed step-by-step instructions and learn how to configure a range of container runtime options, including environment variables, entrypoint commands with parameters, and volume mounts. Then, help us help you make using containers on Compute Engine even easier! Send your feedback, questions or requests to containers-on-mig@google.com.

Sign up for Google Cloud today and get $300 in credits to try out running containers directly on Compute Engine instances.