Tag Archives: Announcements

The thing is . . . Cloud IoT Core is now generally available



Today, we’re excited to announce that Cloud IoT Core, our fully managed service to help securely connect and manage IoT devices at scale, is now generally available.

With Cloud IoT Core, you can easily connect and centrally manage millions of globally dispersed connected devices. When used as part of the broader Google Cloud IoT solution, you can ingest all your IoT data and connect to our state-of-the-art analytics and machine learning services to gain actionable insights.

Already, Google Cloud Platform (GCP) customers are using connected devices and Cloud IoT Core as the foundation of their IoT solutions. Whether it’s smart cities, the sharing economy or next-generation seismic research, we’re thrilled that Cloud IoT Core is helping innovative companies build the future.


Customers share feedback


Schlumberger is the world's leading provider of technology for reservoir characterization, drilling, production, and processing to the oil and gas industry.
"As part of our IoT integration strategy, Google Cloud IoT Core has helped us focus our engineering efforts on building oil and gas applications by leveraging existing IoT services to enable fast, reliable and economical deployment. We have been able to build quick prototypes by connecting a large number of devices over MQTT and perform real-time monitoring using Cloud Dataflow and BigQuery."  
 Chetan Desai, VP Digital Technology, Schlumberger Limited

Smart Parking is a New Zealand-based company that has used Cloud IoT Core from its earliest days to build out a smart city platform, helping direct traffic, parking and city services.
"Using Google Cloud IoT Core, we have been able to completely redefine how we manage the deployment, activation and administration of sensors and devices. Previously, we needed to individually set up each sensor/device. Now we allocate manufactured batches of devices into IoT Core for site deployments and then, using a simple activation smartphone app, the onsite installation technician can activate the sensor or device in moments. Job done!" 
  John Heard, Group CTO, Smart Parking Limited
Bike-sharing pioneer Blaze uses Cloud IoT Core to manage its Blaze Future Data Platform, which uses a combination of GPS, accelerometers and atmospheric sensing for its smart bikes. Its capabilities include air pollution sensing, pothole detection, recording accidents and near misses, and capturing insights around journeys.
"Blaze is able to rapidly build the technology platform our customers and cyclists require on Google Cloud by more securely connecting our products and fleets of bikes to Cloud IoT Core and then run demand forecasting using BigQuery and Machine Learning." 
 Philip Ellis, Co-Founder & COO, Blaze

Grupo ADO is the largest bus operator in Latin America. It operates inter-city routes as well as short routes and tourist charters.
"Agosto, a Google Cloud Premier partner, performed business and technical reviews of MOBILITY ADO’s existing architecture, applications and core data workflows which had been in place for about 12 years. These systems were originally very robust, but over time, we faced challenges with innovating on the existing technology stack, as well as with the optimization of operational costs. Agosto created a proof-of-concept which showcased that a Cloud IoT Core-based architecture was a viable path to modernization and functional optimization of many of our existing, core components. MOBILITY ADO now has real time access to bus diagnostic data via Google Cloud data and analytics services and a clear path to future-proof our platform."  
 Humberto Campos, IT Director, MOBILITY ADO


Enabling the Cloud IoT Core partner ecosystem

At the same time, we continue to grow our ecosystem of partners, providing companies with the insight and expertise to build the custom IoT solutions that best fit their needs. On the device side, we have a variety of partners whose hardware works seamlessly with IoT Core. Application partners, meanwhile, help customers build solutions using IoT Core and other Google Cloud services.

Improving the Cloud IoT Core experience


Since we announced the public beta of Cloud IoT Core last fall, we’ve been actively listening to your feedback. This general availability release incorporates an important new feature: You can now publish data streams from the IoT Core protocol bridge to multiple Cloud Pub/Sub topics, simplifying deployments.

For example, imagine you have a device that publishes multiple types of data, such as temperature, humidity and logging data. By directing these data streams to their own individual Pub/Sub topics, you can eliminate the need to separate the data into different categories after publishing.
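
Concretely, this routing is configured on the device registry. Here is a minimal sketch with gcloud; the registry, topic and subfolder names are placeholders, and the exact flag spelling may vary by SDK release:

# Create one Pub/Sub topic per stream (names are illustrative).
gcloud pubsub topics create temperature-topic humidity-topic default-topic

# Register the routes: telemetry published to an MQTT subfolder goes to the
# matching topic; the config without a subfolder acts as the catch-all.
gcloud iot registries create climate-registry \
    --region=us-central1 \
    --event-notification-config=topic=temperature-topic,subfolder=temperature \
    --event-notification-config=topic=humidity-topic,subfolder=humidity \
    --event-notification-config=topic=default-topic

# A device publishing to /devices/DEVICE_ID/events/temperature now lands in
# temperature-topic, with no post-processing needed to split the streams.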

And that’s just the beginning—watch this space as we build out Cloud IoT Core with additional features and enhancements. We look forward to helping you scale your production IoT deployments. To get started, check out this quick-start tutorial on Cloud IoT Core, and provide us with your feedback—we’d love to hear from you!

96 vCPU Compute Engine instances are now generally available


Today we're happy to announce the general availability of Compute Engine machine types with 96 vCPUs and up to 624 GB of memory. Now you can take advantage of the performance improvements and increased core count provided by the new Intel Xeon Scalable Processors (Skylake). For applications that can scale vertically, you can leverage all 96 vCPUs to decrease the number of VMs needed to run your applications, while reducing your total cost of ownership (TCO).

You can launch these high-performance virtual machines (VMs) as three predefined machine types, and as custom machine types. You can also adjust your extended memory settings to create a machine with the exact amount of memory and vCPUs you need for your applications.
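
For example, here is a rough sketch of both options with gcloud (the instance names and zone are placeholders):

# A predefined 96-vCPU shape (n1-standard-96, n1-highmem-96 or n1-highcpu-96):
gcloud compute instances create skylake-demo \
    --zone=us-central1-b \
    --machine-type=n1-highmem-96 \
    --min-cpu-platform="Intel Skylake"

# Or a custom machine type; --custom-extensions enables the extended-memory
# range beyond the default per-vCPU ratio:
gcloud compute instances create skylake-custom \
    --zone=us-central1-b \
    --custom-cpu=96 \
    --custom-memory=624GB \
    --custom-extensions \
    --min-cpu-platform="Intel Skylake"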

These new machine types are available in GCP regions globally. You can currently launch 96 vCPU VMs in us-central1, northamerica-northeast1, us-east1, us-west1, europe-west1, europe-west4, asia-east1, asia-south1 and asia-southeast1. Stay up-to-date on additional regions by visiting our available regions and zones page.

Customers are doing exciting things with the new 96 vCPU machine types including running in-memory databases such as SAP HANA, media rendering and production, and satellite image analysis.
"When preparing petabytes of global satellite imagery to be calibrated, cleaned up, and "science-ready" for our machine learning models, we do a tremendous amount of image compression. By leveraging the additional compute resources available with 96 vCPU machine types, as well as Advanced Vector Extensions such as AVX-512 with Skylake, we have seen a 38% performance improvement in our compression and a 23% improvement in our imagery expansions. This really adds up when working with petabytes of satellite and aerial imagery." 
- Tim Kelton, Co-Founder, Descartes Labs
The 96 vCPU machine types enable you to take full advantage of the performance improvements available through the Intel Xeon Scalable Processor (Skylake), and the supported AVX-512 instruction set. Our partner Altair demonstrated how you can achieve up to 1.8X performance improvement using the new machine types for HPC workloads. We also worked with Intel to support your performance and scaling efforts by providing the Intel Performance libraries freely on Compute Engine. You can take advantage of these components across all machine types, but they're of particular interest for applications that can exploit the scale of 96 vCPU instances on Skylake-based servers.

The following chart shows an example of the performance improvements delivered by using the Intel Distribution for Python: scikit-learn on Compute Engine with 96 vCPUs.

Visit the GCP Console to create a new instance. To learn more, you can read the documentation for instructions on creating new virtual machines with the gcloud command line tool. 


At Google Cloud, we’re committed to helping customers access state-of-the-art compute infrastructure on GCP. To get started, sign up for a free trial today and get $300 in free cloud credits!

Get the most out of Google Kubernetes Engine with Priority and Preemption



Wouldn’t it be nice if you could ensure that your most important workloads always get the resources they need to run in a Kubernetes cluster? Now you can. Kubernetes 1.9 introduces an alpha feature called “priority and preemption” that allows you to assign priorities to your workloads, so that more important pods evict less important pods when the cluster is full.

Before priority and preemption, Kubernetes pods were scheduled purely on a first-come-first-served basis, and ran to completion (or forever, in the case of pods created by something like a Deployment or StatefulSet). This meant less important workloads could block more important, later-arriving, workloads from running—not the desired effect. Priority and preemption solves this problem.

Priority and preemption is valuable in a number of scenarios. For example, imagine you want to cap autoscaling to a maximum cluster size to control costs, or you have clusters that you can’t grow in real-time (e.g., because they are on-premises and you need to buy and install additional hardware). Or you have high-priority cloud workloads that need to scale up faster than the cluster autoscaler can add nodes. In short, priority and preemption lead to better resource utilization, lower costs and better service levels for critical applications.


Predictable cluster costs without sacrificing safety


In the past year, the Kubernetes community has made tremendous strides in system scalability and support for multi-tenancy. As a result, we see an increasing number of Kubernetes clusters that run both critical user-facing services (e.g., web servers, application servers, back-ends and other microservices in the direct serving path) and non-time-critical workloads (e.g., daily or weekly data analysis pipelines, one-off analytics jobs, developer experiments, etc.). Sharing a cluster in this way is very cost-effective because it allows the latter type of workload to partially or completely run in the “resource holes” that are unused by the former, but that you're still paying for. In fact, a study of Google’s internal workloads found that not sharing clusters between critical and non-critical workloads would increase costs by almost 60 percent. In the cloud, where node sizes are flexible and there's less resource fragmentation, we don’t expect such dramatic results from Kubernetes priority and preemption, but the general premise still holds.

The traditional approach to filling unused resources is to run less important workloads as BestEffort. But because the system does not explicitly reserve resources for BestEffort pods, they can be starved of CPU or killed if the node runs out of memory—even if they're only consuming modest amounts of resources.

A better alternative is to run all workloads as Burstable or Guaranteed, so that they receive a resource guarantee. That, however, leads to a tradeoff between predictable costs and safety against load spikes. For example, consider a user-facing service that experiences a traffic spike while the cluster is busy with non-time-critical analytics workloads. Without the priority and preemption capabilities, you might prioritize safety by configuring the cluster autoscaler without an upper bound or with a very high upper bound. That way, it can handle the spike in load even while it’s busy with non-time-critical workloads. Alternatively, you might pick predictability by configuring the cluster autoscaler with a tight bound, but that may prevent the service from scaling up sufficiently to handle unexpected load.
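
For reference, a pod's QoS class is derived from its containers' resource requests and limits. Here is a minimal sketch of the two guaranteed-resource classes mentioned above; the names, image and sizes are placeholders:

kubectl apply -f - <<EOF
# Guaranteed: every container sets limits equal to requests.
apiVersion: v1
kind: Pod
metadata:
  name: qos-guaranteed
spec:
  containers:
  - name: app
    image: gcr.io/google-samples/hello-app:1.0
    resources:
      requests:
        cpu: 500m
        memory: 256Mi
      limits:
        cpu: 500m
        memory: 256Mi
---
# Burstable: requests are set, but limits are higher (or unset).
apiVersion: v1
kind: Pod
metadata:
  name: qos-burstable
spec:
  containers:
  - name: app
    image: gcr.io/google-samples/hello-app:1.0
    resources:
      requests:
        cpu: 250m
        memory: 128Mi
      limits:
        cpu: "1"
        memory: 512Mi
EOF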

With the addition of priority and preemption, on the other hand, Kubernetes evicts pods from the non-time-critical workload when the cluster runs out of resources, allowing you to set an upper bound on cluster size without having to worry that the serving pipeline might not scale sufficiently to handle the traffic spike. Note that evicted pods receive a termination grace period before being killed, which is 30 seconds by default.

Even if you don’t care about the predictability vs. safety tradeoff, priority and preemption are still useful, because preemption evicts a pod faster than a cloud provider can usually provision a Kubernetes node. For example, imagine there's a load spike to a high-priority user-facing service, so the Horizontal Pod Autoscaler creates new pods to absorb the load. If there are low-priority workloads running in the cluster, the new, higher-priority pods can start running as soon as pod(s) from low-priority workloads are evicted; they don’t have to wait for the cluster autoscaler to create new nodes. The evicted low-priority pods start running again once the cluster autoscaler has added node(s) for them. (If you want to use priority and preemption this way, a good practice is to set a low termination grace period for your low-priority workloads, so the high-priority pods can start running quickly.)

Enabling priority and preemption on Kubernetes Engine


We recently made Kubernetes 1.9 available in Google Kubernetes Engine, and made priority and preemption available in alpha clusters. Here’s how to get started with this new feature:

  1. Create an alpha cluster (please note the limitations that apply to alpha clusters). 
  2. Follow the instructions to create at least two PriorityClasses in your Kubernetes cluster. 
  3. Create workloads (using Deployment, ReplicaSet, StatefulSet, Job, or whatever you like) with the priorityClassName field filled in, matching one of the PriorityClasses you created, as in the sketch below.
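
Here is a minimal sketch of those three steps. The cluster, class and workload names (and the sample image) are placeholders, and the PriorityClass API is v1alpha1 in Kubernetes 1.9:

# 1. An alpha cluster (pass --cluster-version if 1.9 is not yet the default):
gcloud container clusters create priority-demo \
    --zone=us-central1-b \
    --enable-kubernetes-alpha

# 2. Two PriorityClasses:
kubectl apply -f - <<EOF
apiVersion: scheduling.k8s.io/v1alpha1
kind: PriorityClass
metadata:
  name: high-priority
value: 1000000
globalDefault: false
description: "Critical, user-facing services."
---
apiVersion: scheduling.k8s.io/v1alpha1
kind: PriorityClass
metadata:
  name: low-priority
value: 1000
globalDefault: false
description: "Non-time-critical batch work."
EOF

# 3. A workload that references one of the classes:
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      priorityClassName: high-priority
      containers:
      - name: web
        image: gcr.io/google-samples/hello-app:1.0
EOF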

If you wish, you can also enable the cluster autoscaler and set a maximum cluster size. In that case your cluster will not grow above the configured maximum number of nodes, and higher-priority pods will evict lower-priority pods when the cluster reaches its maximum size and there are pending pods from the higher priority classes. If you don’t enable the cluster autoscaler, the priority and preemption behavior is the same, except that the cluster size is fixed.
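
For example, here is a sketch of capping the cluster used above at ten nodes (the cluster, pool and zone names are placeholders):

gcloud container clusters update priority-demo \
    --zone=us-central1-b \
    --node-pool=default-pool \
    --enable-autoscaling --min-nodes=1 --max-nodes=10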

Advanced technique: enforcing “filling the holes”


As we mentioned earlier, one of the motivations for priority and preemption is to allow non-time-critical workloads to “fill the resource holes” between important workloads on a node. To enforce this strictly, you can associate a workload with a PriorityClass whose priority is less than zero. Then the cluster autoscaler does not add the nodes necessary for that workload to run, even if the cluster is below the maximum size configured for the autoscaler.
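
Here is a sketch of such a "hole-filling" tier, again with placeholder names; the short termination grace period follows the earlier tip so that evictions free up capacity quickly:

kubectl apply -f - <<EOF
apiVersion: scheduling.k8s.io/v1alpha1
kind: PriorityClass
metadata:
  name: fill-the-holes
value: -10
globalDefault: false
description: "Runs only in spare capacity; never triggers scale-up."
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: batch-analytics
spec:
  replicas: 10
  selector:
    matchLabels:
      app: batch-analytics
  template:
    metadata:
      labels:
        app: batch-analytics
    spec:
      priorityClassName: fill-the-holes
      terminationGracePeriodSeconds: 10
      containers:
      - name: worker
        image: gcr.io/google-samples/hello-app:1.0
EOF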

Thus you can create three tiers of workloads of decreasing importance:

  • Workloads that can access the entire cluster up to the cluster autoscaler maximum size 
  • Workloads that can trigger autoscaling but that will be evicted if the cluster has reached the configured maximum size and higher-priority work needs to run
  • Workloads that will only “fill the cracks” in the resource usage of the higher-priority workloads, i.e., that will wait to run if they can’t fit into existing free resources.

And because PriorityClass maps to an integer, you can of course create many sub-tiers within these three categories.

Let us know what you think!


Priority and preemption is a welcome addition in Kubernetes 1.9, making it easier for you to improve resource utilization, establish workload tiers and control costs. It is still an alpha feature, and we’d love to know how you are using it, as well as any suggestions you might have for making it better. Please contact us at [email protected].

To explore this new capability and other features of Kubernetes Engine, you can quickly get started using our 12-month free trial.

Introducing the mentor organizations for Google Summer of Code 2018

We are pleased to announce the open source projects and organizations that were accepted for Google Summer of Code 2018! As usual, we received more applications this year than we did last year, and nearly twice as many as we are able to accept into the program.

After careful review, we have chosen 212 applicants to be mentor organizations this year, 19% of which are new to the program. Please see the program website for a complete list of the accepted organizations.

Are you a student interested in participating? We begin accepting student applications on Monday, March 12, 2018 at 16:00 UTC and the deadline to apply is Tuesday, March 27, 2018 at 16:00 UTC.

The most successful applications come from students who start preparing now. You can start by watching the video below, checking out the Student Guide, and reviewing the list of accepted organizations.


You can find more information on our website, including a full timeline of important dates. We also highly recommend perusing the FAQ and Program Rules.

A hearty congratulations–and thank you–to all of our mentor organizations! We look forward to working with all of you during Google Summer of Code 2018.

By Josh Simmons, Google Open Source

Cloud TPU machine learning accelerators now available in beta



Starting today, Cloud TPUs are available in beta on Google Cloud Platform (GCP) to help machine learning (ML) experts train and run their ML models more quickly.
Cloud TPUs are a family of Google-designed hardware accelerators that are optimized to speed up and scale up specific ML workloads programmed with TensorFlow. Built with four custom ASICs, each Cloud TPU packs up to 180 teraflops of floating-point performance and 64 GB of high-bandwidth memory onto a single board. These boards can be used alone or connected together via an ultra-fast, dedicated network to form multi-petaflop ML supercomputers that we call “TPU pods.” We will offer these larger supercomputers on GCP later this year.

We designed Cloud TPUs to deliver differentiated performance per dollar for targeted TensorFlow workloads and to enable ML engineers and researchers to iterate more quickly. For example:

  • Instead of waiting for a job to schedule on a shared compute cluster, you can have interactive, exclusive access to a network-attached Cloud TPU via a Google Compute Engine VM that you control and can customize. 
  • Rather than waiting days or weeks to train a business-critical ML model, you can train several variants of the same model overnight on a fleet of Cloud TPUs and deploy the most accurate trained model in production the next day. 
  • Using a single Cloud TPU and following this tutorial, you can train ResNet-50 to the expected accuracy on the ImageNet benchmark challenge in less than a day, all for well under $200! 

ML model training, made easy

Traditionally, writing programs for custom ASICs and supercomputers has required deeply specialized expertise. By contrast, you can program Cloud TPUs with high-level TensorFlow APIs, and we have open-sourced a set of reference high-performance Cloud TPU model implementations to help you get started right away:


To save you time and effort, we continuously test these model implementations both for performance and for convergence to the expected accuracy on standard datasets.

Over time, we'll open-source additional model implementations. Adventurous ML experts may be able to optimize other TensorFlow models for Cloud TPUs on their own using the documentation and tools we provide.

By getting started with Cloud TPUs now, you’ll be able to benefit from dramatic time-to-accuracy improvements when we introduce TPU pods later this year. As we announced at NIPS 2017, both ResNet-50 and Transformer training times drop from the better part of a day to under 30 minutes on a full TPU pod, no code changes required.

Two Sigma, a leading investment management firm, is impressed with the performance and ease of use of Cloud TPUs.
"We made a decision to focus our deep learning research on the cloud for many reasons, but mostly to gain access to the latest machine learning infrastructure. Google Cloud TPUs are an example of innovative, rapidly evolving technology to support deep learning, and we found that moving TensorFlow workloads to TPUs has boosted our productivity by greatly reducing both the complexity of programming new models and the time required to train them. Using Cloud TPUs instead of clusters of other accelerators has allowed us to focus on building our models without being distracted by the need to manage the complexity of cluster communication patterns." 
Alfred Spector, Chief Technology Officer, Two Sigma

A scalable ML platform


Cloud TPUs also simplify planning and managing ML computing resources:

  • You can provide your teams with state-of-the-art ML acceleration and adjust your capacity dynamically as their needs change. 
  • Instead of committing the capital, time and expertise required to design, install and maintain an on-site ML computing cluster with specialized power, cooling, networking and storage requirements, you can benefit from large-scale, tightly-integrated ML infrastructure that has been heavily optimized at Google over many years.
  • There’s no more struggling to keep drivers up-to-date across a large collection of workstations and servers. Cloud TPUs are preconfigured—no driver installation required!
  • You are protected by the same sophisticated security mechanisms and practices that safeguard all Google Cloud services.

“Since working with Google Cloud TPUs, we’ve been extremely impressed with their speed—what could normally take days can now take hours. Deep learning is fast becoming the backbone of the software running self-driving cars. The results get better with more data, and there are major breakthroughs coming in algorithms every week. In this world, Cloud TPUs help us move quickly by incorporating the latest navigation-related data from our fleet of vehicles and the latest algorithmic advances from the research community.”
Anantha Kancherla, Head of Software, Self-Driving Level 5, Lyft
Here at Google Cloud, we want to provide customers with the best cloud for every ML workload and will offer a variety of high-performance CPUs (including Intel Skylake) and GPUs (including NVIDIA’s Tesla V100) alongside Cloud TPUs.

Getting started with Cloud TPUs


Cloud TPUs are available in limited quantities today and usage is billed by the second at the rate of $6.50 USD / Cloud TPU / hour.

We’re thrilled to see the enthusiasm that customers have expressed for Cloud TPUs. To help us manage demand, please sign up here to request Cloud TPU quota and describe your ML needs. We’ll do our best to give you access to Cloud TPUs as soon as we can.
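
Once you have quota, provisioning looks roughly like the sketch below. The TPU name, zone, CIDR range and TensorFlow version are illustrative, and the exact flags may differ slightly between SDK releases:

# Create a Cloud TPU, carving out a /29 for its network endpoint:
gcloud beta compute tpus create demo-tpu \
    --zone=us-central1-b \
    --range=10.240.1.0/29 \
    --version=1.6 \
    --network=default

# The describe output includes the gRPC endpoint you point TensorFlow at:
gcloud beta compute tpus describe demo-tpu --zone=us-central1-b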

To learn more about Cloud TPUs, join us for a Cloud TPU webinar on February 27th, 2018.

GPUs in Kubernetes Engine now available in beta



Last year we introduced our first GPU offering for Google Kubernetes Engine with the alpha launch of NVIDIA Tesla GPUs, and received an amazing customer response. Today, GPUs in Kubernetes Engine are in beta and ready for wide use with the latest Kubernetes Engine release.

Using GPUs in Kubernetes Engine can turbocharge compute-intensive applications like machine learning (ML), image processing and financial modeling. By packaging your CUDA workloads into containers, you can benefit from the massive processing power of Kubernetes Engine’s GPUs whenever you need it, without having to manage hardware or even VMs.

With its best-in-class CPUs, GPUs, and now TPUs, Google Cloud provides the best choice, flexibility and performance for running ML workloads in the cloud. The ride-sharing pioneer Lyft, for instance, uses GPUs in Kubernetes Engine to accelerate training of its deep learning models.
"GKE clusters are ideal for deep learning workloads, with out-of-the box GPU integration, autoscaling clusters for our spiky training workloads, and integrated container logging and monitoring." 
— Luc Vincent, VP of Engineering at Lyft

Both the NVIDIA Tesla P100 and K80 GPUs are available as part of the beta—and V100s are on the way. Recently, we also introduced Preemptible GPUs as well as new lower prices to unlock new opportunities for you. Check out the latest prices for GPUs here.

Getting started with GPUs in Kubernetes Engine


Creating a cluster with GPUs in Kubernetes Engine is easy. From the Cloud Console, you can expand the machine type on the "Creating Kubernetes Cluster" page to select the types and the number of GPUs.
And if you want to add nodes with GPUs to your existing cluster, you can use the Node Pools and Cluster Autoscaler features. By using node pools with GPUs, your cluster can use GPUs whenever you need them. Autoscaler, meanwhile, can automatically create nodes with GPUs whenever pods requesting GPUs are scheduled, and scale down to zero when GPUs are no longer consumed by any active pods.

The following command creates a node pool with GPUs that can scale up to five nodes and down to zero nodes.

gcloud beta container node-pools create my-gpu-node-pool \
    --accelerator=type=nvidia-tesla-p100,count=1 \
    --cluster=my-existing-cluster --num-nodes 2 \
    --min-nodes 0 --max-nodes 5 --enable-autoscaling

Behind the scenes, Kubernetes Engine applies taint and toleration techniques to ensure only pods requesting GPUs will be scheduled on the nodes with GPUs, and prevent pods that don't require GPUs from running on them.
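
In practice, "requesting a GPU" simply means setting a resource limit on the container. A minimal smoke-test pod might look like this sketch (the pod name and CUDA image tag are illustrative):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: gpu-smoke-test
spec:
  containers:
  - name: cuda
    image: nvidia/cuda:9.0-base
    command: ["nvidia-smi"]
    resources:
      limits:
        nvidia.com/gpu: 1
EOF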

While Kubernetes Engine does a lot of things behind the scenes for you, we also want you to understand how your GPU jobs are performing. Kubernetes Engine exposes metrics for containers using GPUs, such as how busy the GPUs are, how much memory is available, and how much memory is allocated. You can also visualize these metrics by using Stackdriver.

Figure 1: GPU duty cycle for three different jobs

For a more detailed explanation of Kubernetes Engine with GPUs, such as how NVIDIA drivers are installed and how to configure a pod to consume GPUs, check out the documentation.

Tackling new workloads with Kubernetes


In 2017, Kubernetes Engine core-hours grew 9X year over year, and the platform is gaining momentum as a premier deployment platform for ML workloads. We’re very excited about open-source projects like Kubeflow that make running ML stacks on Kubernetes easy, fast and extensible. We hope that the combination of these open-source ML projects and GPUs in Kubernetes Engine will help you innovate in business, engineering and science.

Try it today


To get started using GPUs in Kubernetes Engine with our free trial of $300 in credits, you’ll need to upgrade your account and apply for GPU quota in order for the credits to take effect.

Thanks for the support and feedback in shaping our roadmap to better serve your needs. Keep the conversation going, and connect with us on the Kubernetes Engine Slack channel.

Announcing Spring Cloud GCP—integrating your favorite Java framework with Google Cloud



For many years, the Spring Framework has been an innovative force in the Java ecosystem. Spring and its vast ecosystem are widely adopted, and are among the most popular Java frameworks. To do more for developers in the Spring community and meet our developers where they are, we’re announcing the Spring Cloud GCP project, a collaboration with Pivotal to better integrate Spring with Google Cloud Platform (GCP), so that running your Spring code on our platform is as easy as possible.

Spring Boot takes an opinionated view of the Spring platform and third-party libraries, making it easy to create stand-alone, production-grade Spring-based applications. With minimal configuration, Spring Boot provides your application with fully configured Java objects, getting you from nothing to a highly functional application in minutes.

By focusing on Spring Boot support, Spring Cloud GCP allows you to greatly cut down on boilerplate code and consume GCP services in a Spring-idiomatic way. In most cases, you won't even need to change your code to take advantage of GCP services.

As part of Spring Cloud GCP, we created integrations between popular Spring libraries and GCP services:

  • Cloud SQL + Spring JDBC: Spring Cloud GCP SQL automatically configures the JDBC URLs and driver class names and helps establish secure SSL connections using client certificates.
  • Cloud Pub/Sub + Spring Integration: Use Spring Integration concepts like channels and gateways to send and receive messages from Cloud Pub/Sub.
  • Cloud Storage + Spring Resource: Use Spring Resource objects to access and store files in Cloud Storage buckets.
  • Stackdriver Trace + Spring Cloud Sleuth: Use Spring Cloud Sleuth and its annotations to trace your microservices and send the trace data to Stackdriver Trace for storage and analysis.
  • Runtime Configuration API + Spring Cloud Config: Store and access configuration values in the managed Runtime Configuration service without running your own config server.

As of Milestone 2, all of the above integrations are compatible with the latest Spring Framework 5 and Spring Boot 2.

The Spring Cloud GCP libraries are in Beta stage and are available from Pivotal’s Milestones Maven Repository.

To get started, check out the code samples, reference documentation, the Spring Cloud GCP project page and the Spring Cloud code labs! More resources are available on the GCP Spring documentation. We would also love to hear from you at our GitHub issue tracker.

We’re working on other exciting integrations and planning for general availability soon. So stay tuned for more news!

Announcing the Winners of Google Code-in 2017

Google Code-in (GCI) 2017 was epic in every regard. It was a very, very busy 7 weeks for everyone - we had 3,555 students from 78 countries completing 16,468 tasks with a record 25 open source organizations!

Today we are excited to announce the Grand Prize Winners and Finalists from each organization. The 50 Grand Prize Winners completed an impressive 1,739 tasks between them while also helping other students.

Each of the Grand Prize Winners will be awarded a four-day trip to Google’s campus in northern California to meet Google engineers, meet one of the mentors they worked with during the contest, and enjoy some fun in the California sun with the other winners. We look forward to meeting these winners in a few months!

Grand Prize Winners

The Grand Prize Winners hail from 12 countries, listed by first name alphabetically below:
Name Organization Country
Aadi Bajpai CCExtractor India
Aarnav Bos OpenWISP India
Abishek V Ashok FOSSASIA India
Aditya Giri OpenWISP India
Akshit Dewan XWiki United States
Albert Wolszon Wikimedia Poland
Andrew Dassonville coala United States
Arav Singhal MovingBlocks India
Arun Pattni XWiki United Kingdom
Aryaman Agrawal Systers Community India
Bartłomiej Rasztabiga OpenMRS Poland
Carol Chen Sugar Labs Canada
Chandra Catrobat Indonesia
Chirag Gupta The Mifos Initiative India
Cynthia Lin Zulip United States
Erika Tan Systers Community United States
Eshan Singh MetaBrainz India
Euan Ong Sugar Labs United Kingdom
Fawwaz Yusran OpenMRS Indonesia
Grzegorz Stark Apertium Poland
Hiếu Lê Haiku Vietnam
Jake Du LibreHealth United States
Jatin Luthra JBoss Community India
Jeff Sieu BRL-CAD Singapore
Jerry Huang OSGeo United States
Jonathan Pan Apertium United States
Jude Birch Catrobat United Kingdom
Konrad Krawiec Ubuntu Poland
Mahdi Dolatabadi BRL-CAD Canada
Marcin Mikołajczak Ubuntu Poland
Marco Burstein Zulip United States
Mateusz Grzonka LibreHealth Poland
Matthew Katz The Mifos Initiative Canada
Mehant Kammakomati SCoRe India
Nalin Bhardwaj coala India
Naveen Rajan FOSSASIA Sri Lanka
Nikita Volobuiev Wikimedia Ukraine
Omshi Samal Liquid Galaxy Project India
Owen Pan Haiku United States
Padam Chopra SCoRe India
Palash Taneja CloudCV India
Pavan Agrawal CloudCV United States
Sheik Meeran Ashmith Kifah Drupal Mauritius
Shiyuan Yu CCExtractor China
Sunveer Singh OSGeo India
Tanvish Jha Drupal India
Tarun Ravi Liquid Galaxy Project United States
Thomas O'Keeffe MovingBlocks United States
Vriyas Hartama Adesaputra MetaBrainz Indonesia
Zhao Wei Liew JBoss Community Singapore

Finalists

And a big congratulations to our 75 Finalists from 20 countries who will receive a special hoodie to commemorate their achievements in the contest. They are listed alphabetically by organization below:
Name Organization Name Organization
Alexander Mamaev Apertium Shamroy Pellew MetaBrainz
Robin Richtsfeld Apertium Aleksander Wójtowicz MovingBlocks
Ryan Chi Apertium Jindřich Dítě MovingBlocks
Caleb Parks BRL-CAD Nicholas Bates MovingBlocks
Lucas Prieels BRL-CAD Jyothsna Ashok OpenMRS
Mitesh Gulecha BRL-CAD Matthew Whitaker OpenMRS
Aditya Rathore Catrobat Tomasz Domagała OpenMRS
Andreas Lukita Catrobat Alan Zhu OpenWISP
Martina Hanusova Catrobat Hizkia Winata OpenWISP
John Chew CCExtractor Vidya Haikal OpenWISP
Matej Plavevski CCExtractor Ethan Zhao OSGeo
William CCExtractor Neev Mistry OSGeo
Adam Štafa CloudCV Shailesh Kadam OSGeo
Adarsh Kumar CloudCV Emily Ong Hui Qi Sugar Labs
Naman Sood CloudCV Koh Pi Rong Sugar Labs
Anu Dookna coala Sanatan Chaudhary Sugar Labs
Marcos Gómez Bracamonte coala Adhyan Dhull SCoRe
Wonsang Chung coala Gaurav Pandey SCoRe
Kartik Goel Drupal Moses Paul SCoRe
Sagar Khatri Drupal Fidella Widjojo Systers Community
Tanish Kapur Drupal Valentin Sergeev Systers Community
Aditya Dutt FOSSASIA Yuyuan Luo Systers Community
Saarthak Chaturvedi FOSSASIA Janice Kim The Mifos Initiative
Yash Kumar Verma FOSSASIA Muhammad Rafly Andrianza The Mifos Initiative
Bach Nguyen Haiku Shivam Kumar Singh The Mifos Initiative
Đắc Tùng Dương Haiku Daniel Lim Ubuntu
Xiang Fan Haiku Qazi Omair Ahmed Ubuntu
Anhai Wang JBoss Community Simran Ubuntu
Divyansh Kulshreshtha JBoss Community David Siedtmann Wikimedia
Sachin Rammoorthy JBoss Community Rafid Aslam Wikimedia
Adrien Zier LibreHealth Yifei He Wikimedia
Miguel Dinis LibreHealth Akash Chandrasekaran XWiki
Vishwas Adiga LibreHealth Siddh Raman Singh XWiki
Shruti Singh Liquid Galaxy Project Srijan Jha XWiki
Kshitijaa Jaglan Liquid Galaxy Project Freddie Miller Zulip
Surya Tanwar Liquid Galaxy Project Priyank Patel Zulip
Enjeck Mbeh Cleopatra MetaBrainz Steven Hans Zulip
Kartik Ohri MetaBrainz

GCI is a contest that the Google Open Source team is honored to run every year. We saw immense growth this year, the eighth year of the contest, both in the number of students participating and the number of countries represented by these students. 

Our 730+ mentors, the heart and soul of GCI, are the reason the contest thrives. Mentors volunteer their time to help these bright students become open source contributors. Mentors spend hundreds of hours during their holiday breaks answering questions, reviewing submitted tasks, and welcoming the students to their communities. GCI would not be possible without their patience and tireless efforts.

We will post more statistics and fun stories that came from GCI 2017 here on the Google Open Source Blog over the next few months, so please stay tuned!

Congratulations to our Grand Prize Winners, Finalists, and all of the students who spent the last couple of months learning about and contributing to open source.

By Stephanie Taylor, Google Open Source

Wrapping up Google Code-in 2017

Today marks the conclusion of the 8th annual Google Code-in (GCI), our contest that introduces teenage students to open source development through contributions to open source projects. As with most years, the contest evolved a bit and grew. And it grew. And it doubled. And then it grew some more...
Mentors from each of the 25 open source organizations are now busy reviewing the last of the work submitted by student participants. We’re looking forward to sharing the stats.

Each organization will pick two Grand Prize Winners who will be flown to Northern California to visit Google’s headquarters, enjoy a day of adventure in San Francisco, and meet their mentors and Google engineers.

We’d like to congratulate all of the student participants for challenging themselves and making a contribution to open source in the process! We’d also like to congratulate the mentors for surviving the unusually busy contest.

Further, we’d like to thank the mentors and the organization administrators. They are the heart of this program, volunteering countless hours creating tasks, reviewing student work, and helping students into the world of open source. Mentors teach young students about the many facets of open source development, from community standards and communicating across time zones to version control and testing. We couldn’t run this program without you!

Stay tuned, we’ll be announcing the Grand Prize Winners and Finalists on January 31st.

By Josh Simmons, Google Open Source

Get latest Kubernetes version 1.9 on Google’s managed offering



We're excited to announce that Kubernetes version 1.9 will be available on Google Kubernetes Engine next week in our early access program. This release includes greater support for stateful and stateless applications, hardware accelerator support for machine learning workloads and storage enhancements. Overall, this release achieves a big milestone in making it easy to run a wide variety of production-ready applications on Kubernetes without having to worry about the underlying infrastructure. Google is the leading contributor to open-source Kubernetes releases. Now you can access the latest Kubernetes release on our fully managed Kubernetes Engine and let us take care of managing, scaling, upgrading, backing up and helping to secure your clusters. Further, we recently simplified our pricing by removing the fee for cluster management, resulting in real dollar savings for your environment.

We're committed to providing the latest technological innovation to Kubernetes users with one new release every quarter. Let's take a closer look at the key enhancements in Kubernetes 1.9.

Workloads APIs move to GA


The core Workloads APIs (DaemonSet, Deployment, ReplicaSet and StatefulSet), which let you run stateful and stateless workloads on Kubernetes, move to general availability (GA) in this release, delivering production-grade quality, support and long-term backwards compatibility.
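
In practice, this means you can target the stable apps/v1 API group. Here is a minimal sketch (the names and image are placeholders); note that in apps/v1 the selector is required and immutable:

kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-web
  template:
    metadata:
      labels:
        app: hello-web
    spec:
      containers:
      - name: hello-app
        image: gcr.io/google-samples/hello-app:1.0
        ports:
        - containerPort: 8080
EOF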

Hardware accelerator enhancements


Google Cloud Platform (GCP) provides a great environment for running machine learning and data analytics workloads in containers. With this release, we’ve improved support for hardware accelerators such as NVIDIA Tesla P100 and K80 GPUs. Compute-intensive workloads will benefit greatly from cost-effective and high performance GPUs for many use cases ranging from genomics and computational finance to recommendation systems and simulations.

Local storage enhancements for stateful applications


Improvements to the Kubernetes scheduler in this release make it easier to use local storage in Kubernetes. The local persistent storage feature (alpha) enables easy access to local SSD on GCP through Kubernetes’ standard PVC (Persistent Volume Claim) interface in a simple and portable way. This allows you to take an existing Helm chart or StatefulSet spec that uses remote PVCs and easily switch to local storage by just changing the StorageClass name. Local SSD offers superior performance, including high input/output operations per second (IOPS) and low latency, and is ideal for high-performance workloads, distributed databases, distributed file systems and other stateful workloads.

Storage interoperability through CSI


This Kubernetes release introduces an alpha implementation of Container Storage Interface (CSI). We've been working with the Kubernetes community to provide a single and consistent interface for different storage providers. CSI makes it easy to add different storage volume plugins in Kubernetes without requiring changes to the core codebase. CSI underscores our commitment to being open, flexible and collaborative while providing maximum value—and options—to our users.

Try it now!


In a few days, you can access the latest Kubernetes Engine release in your alpha clusters by joining our early access program.