
Introducing the Google Summer of Code 2017 Mentor Organizations

Today’s the day! We are excited to announce the mentor organizations accepted for this year’s Google Summer of Code (GSoC). Every year we receive more applications than we can accept and 2017 was no exception. After carefully reviewing almost 400 applications, we have chosen 201 open source projects and organizations, 18% of which are new to the program. Please see the program website for a complete list of the accepted organizations.

Interested in participating as a student? We will begin accepting student applications on Monday, March 20, 2017 at 16:00 UTC and the deadline is Monday, April 3, 2017 at 16:00 UTC.

Over the next three weeks, students who’d like to participate in Google Summer of Code should research the organizations and their Ideas Lists to explore which organizations are a good fit for their interests and skills and learn how they might contribute. Some of the most successful proposals have been completely new ideas submitted by students, so if you don’t see a project that appeals to you, don’t hesitate to suggest a new idea to the organization! There are contacts listed for each organization on their Ideas List — students should contact the organization directly to discuss their ideas. We also strongly encourage all interested students to reach out to and become familiar with the organization before applying.

You can find more information on our website, including a full timeline of important dates and program milestones. We also highly recommend all interested students read the Student Manual, FAQ and the Program Rules.

Congratulations to all of our mentor organizations! We look forward to working with all of you during Google Summer of Code 2017.

By Josh Simmons, Open Source Programs Office

Google Cloud Platform is the first cloud provider to offer Intel Skylake



I’m excited to announce that Google Cloud Platform (GCP) is the first cloud provider to offer the next generation Intel Xeon processor, codenamed Skylake.

Customers across a range of industries, including healthcare, media and entertainment, and financial services, ask for the best performance and efficiency for their high-performance compute workloads. With Skylake processors, GCP customers are the first to benefit from this next level of performance.

Skylake includes Intel Advanced Vector Extensions 512 (AVX-512), which make it ideal for scientific modeling, genomic research, 3D rendering, data analytics and engineering simulations. Compared to previous generations, Skylake’s AVX-512 doubles the floating-point performance for the heaviest calculations.
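To make that concrete, here is a quick check (ours, not part of the original announcement) you can run inside a Linux guest to confirm that a VM’s vCPUs expose AVX-512; it simply looks for the standard avx512* feature flags in /proc/cpuinfo.

# Minimal sketch, assuming a Linux guest: report which AVX-512 feature
# flags (avx512f, avx512cd, ...) the kernel sees on this VM's vCPUs.
def avx512_flags(cpuinfo_path="/proc/cpuinfo"):
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                return sorted({flag for flag in line.split() if flag.startswith("avx512")})
    return []

if __name__ == "__main__":
    flags = avx512_flags()
    print("AVX-512 support:", ", ".join(flags) if flags else "not exposed on this VM")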

We optimized Skylake for Google Compute Engine’s complete family of VMs (standard, highmem, highcpu and Custom Machine Types) to help bring the next generation of high performance compute instances to everyone.

“Google and Intel have had a long standing engineering partnership working on Data Center innovation. We're happy to see the latest Intel Xeon technology now available on Google Cloud Infrastructure. This technology delivers significant enhancements for compute-intensive workloads, efficiently accelerating data analytics that businesses depend on for operations and growth.” - Diane Bryant, Intel Executive Vice President and GM of the Data Center Group

Skylake processors are available in five GCP regions: Western US, Eastern US, Central US, Western Europe and Eastern Asia Pacific. Sign up here to take advantage of the new Skylake processors.

You can learn more about Skylake for Google Compute Engine and see it in action at Google Cloud NEXT ’17 in San Francisco on March 8-10. Register today!

GPUs are now available for Google Compute Engine and Cloud Machine Learning



Google Cloud Platform gets a performance boost today with the much-anticipated public beta of NVIDIA Tesla K80 GPUs. You can now spin up NVIDIA GPU-based VMs in three GCP regions (us-east1, asia-east1 and europe-west1) using the gcloud command-line tool. Support for creating GPU VMs from the Cloud Console will be available later this week.

If you need extra computational power for deep learning, you can attach up to eight GPUs (four K80 boards) to any custom Google Compute Engine virtual machine. GPUs can accelerate many types of computing and analysis, including video and image transcoding, seismic analysis, molecular modeling, genomics, computational finance, simulations, high-performance data analysis, computational chemistry, fluid dynamics and visualization.

NVIDIA K80 GPU Accelerator Board

Rather than constructing a GPU cluster in your own datacenter, just add GPUs to virtual machines running in our cloud. GPUs on Google Compute Engine are attached directly to the VM, providing bare-metal performance. Each NVIDIA GPU in a K80 has 2,496 stream processors with 12 GB of GDDR5 memory. You can shape your instances for optimal performance by flexibly attaching 1, 2, 4 or 8 NVIDIA GPUs to custom machine shapes.

Google Cloud supports as many as 8 GPUs attached to custom VMs, allowing you to optimize the performance of your applications.
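As a rough sketch of what attaching GPUs looks like in practice (our example, not an official recipe; the instance name, zone, machine type and image below are placeholders, and the beta-era flags may have changed since), you can script the gcloud command-line tool like this:

# Illustrative sketch: create a Compute Engine VM with NVIDIA K80 GPUs
# attached by shelling out to the gcloud CLI. All names and sizes here are
# placeholders, not values taken from this post.
import subprocess

def create_gpu_vm(name="gpu-demo", zone="us-east1-d", gpu_count=4):
    subprocess.run([
        "gcloud", "beta", "compute", "instances", "create", name,
        "--zone", zone,
        "--machine-type", "n1-highmem-8",  # any standard or custom machine shape
        "--accelerator", "type=nvidia-tesla-k80,count=%d" % gpu_count,
        "--image-family", "ubuntu-1604-lts",
        "--image-project", "ubuntu-os-cloud",
        "--maintenance-policy", "TERMINATE",  # GPU VMs cannot live-migrate
        "--restart-on-failure",
    ], check=True)

if __name__ == "__main__":
    create_gpu_vm()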

These instances support popular machine learning and deep learning frameworks such as TensorFlow, Theano, Torch, MXNet and Caffe, as well as NVIDIA’s popular CUDA software for building GPU-accelerated applications.

Pricing

Like the rest of our infrastructure, the GPUs are priced competitively and are billed per minute (with a 10-minute minimum). In the US, each K80 GPU attached to a VM is priced at $0.700 per hour; in Asia and Europe, the price is $0.770 per hour per GPU. As always, you only pay for what you use. This frees you up to spin up a large cluster of GPU machines for rapid deep learning and machine learning training with zero capital investment.
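As a quick worked example of what per-minute billing means in practice (our arithmetic, using only the US rate quoted above):

# Back-of-the-envelope cost estimate for K80 GPUs in the US, using the
# $0.700 per GPU-hour rate above with per-minute billing (10-minute minimum).
US_RATE_PER_GPU_HOUR = 0.700

def gpu_cost_usd(num_gpus, minutes, rate_per_hour=US_RATE_PER_GPU_HOUR):
    billed_minutes = max(minutes, 10)  # 10-minute billing minimum
    return num_gpus * rate_per_hour * billed_minutes / 60.0

# Example: 8 K80 GPUs for a 90-minute training run is 8 * 0.70 * 1.5 = 8.40 USD.
print(round(gpu_cost_usd(8, 90), 2))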

Supercharge machine learning

The new Google Cloud GPUs are tightly integrated with Google Cloud Machine Learning (Cloud ML), helping you slash the time it takes to train machine learning models at scale using the TensorFlow framework. Now, instead of taking several days to train an image classifier on a large image dataset on a single machine, you can run distributed training with multiple GPU workers on Cloud ML, dramatically shortening your development cycle and letting you iterate quickly on the model.

Cloud ML is a fully managed service that provides an end-to-end training and prediction workflow and integrates with cloud computing tools such as Google Cloud Dataflow, Google BigQuery, Google Cloud Storage and Google Cloud Datalab.

Start small and train a TensorFlow model locally on a small dataset. Then, kick off a larger Cloud ML training job against a full dataset in the cloud to take advantage of the scale and performance of Google Cloud GPUs. For more on Cloud ML, please see the Quickstart guide to get started, or this document to dive into using GPUs.
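For the "start small" step, a minimal local smoke test might look like the sketch below. It assumes the TensorFlow 1.x API that Cloud ML supported at the time, and only confirms that ops land on the GPU before you package the real model and scale out to a Cloud ML training job.

# Minimal TensorFlow 1.x sketch: run a tiny computation with explicit GPU
# placement and log where each op lands. This is only a local smoke test
# before submitting a full training job to Cloud ML.
import tensorflow as tf

with tf.device("/gpu:0"):  # fails at run time if no GPU is attached
    a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
    b = tf.constant([[1.0, 0.0], [0.0, 1.0]])
    c = tf.matmul(a, b)

with tf.Session(config=tf.ConfigProto(log_device_placement=True)) as sess:
    print(sess.run(c))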

Next steps

Register for Cloud NEXT, sign up for the Cloud ML Bootcamp and learn how to Supercharge performance using GPUs in the cloud. You can use the gcloud command-line tool to create a VM today and start experimenting with TensorFlow-accelerated machine learning. Detailed documentation is available on our website.


Delivering a better platform for your SQL Server Enterprise workloads



Our goal at Google Cloud Platform (GCP) is to be the best enterprise cloud environment. Throughout 2016, we worked hard to ensure that Windows developers and IT administrators would feel right at home when they came to GCP, whether that meant building an ASP.NET application with their favorite tools like Visual Studio and PowerShell, or deploying the latest version of Windows Server onto Google Compute Engine.

Continuing our work in providing great infrastructure for enterprises running Windows, we’re pleased to announce pre-configured images for Microsoft SQL Server Enterprise and Windows Server Core on Compute Engine. High-availability and disaster recovery are top of mind for our larger customers, so we’re also announcing support for SQL Server AlwaysOn Availability Groups and persistent disk snapshots integrated with Volume Shadow Copy Service (VSS) on Windows Server. Finally, all of our Windows Server images are now enabled with Windows Remote Management support, including our Windows Server Core 2016 and 2012 R2 images.

SQL Server Enterprise Edition images on GCE


You can now launch Compute Engine VMs with Microsoft SQL Server Enterprise Edition pre-installed, and pay by the minute for SQL Server Enterprise and Windows Server licenses. Customers can also choose to bring their own licenses for SQL Server Enterprise.

We now support pre-configured images for the following versions in Beta:

  • SQL Server Enterprise 2016
  • SQL Server Enterprise 2014
  • SQL Server Enterprise 2012 
Supported SQL Server images available on Compute Engine

SQL Server Enterprise targets mission-critical workloads by supporting more cores, higher memory and important enterprise features, including:

  • In-memory tables and indexes
  • Row-level security and encryption for data at rest or in motion
  • Multiple read-only replicas for integrated HA/DR and read scale-out
  • Business intelligence and rich visualizations on all platforms, including mobile
  • In-database advanced analytics with R


Combined with Google’s world-class infrastructure, SQL Server instances running on Compute Engine benefit from price-to-performance advantages, highly customizable VM sizes and state-of-the-art networking and security capabilities. With automatic sustained use discounts and the ability to retire on-premises hardware and its associated maintenance, customers can achieve total costs lower than those of other cloud providers.

To get started, learn how to create SQL Server instances easily on Google Compute Engine.
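For orientation, a scripted version of that first step might look like the following sketch (ours, not from the documentation). The instance name, zone and machine type are placeholders; the image family and project follow the public naming used for SQL Server images, so verify them against the docs before use.

# Illustrative sketch: launch a Compute Engine VM from a pre-configured
# SQL Server Enterprise image by calling the gcloud CLI. Names and sizes
# are placeholders; check the documentation for current image families.
import subprocess

subprocess.run([
    "gcloud", "compute", "instances", "create", "sql-ent-demo",
    "--zone", "us-central1-f",
    "--machine-type", "n1-highmem-8",
    "--image-family", "sql-ent-2016-win-2016",  # SQL Server 2016 Enterprise on Windows Server 2016
    "--image-project", "windows-sql-cloud",
    "--boot-disk-size", "200GB",
    "--boot-disk-type", "pd-ssd",
], check=True)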



High-availability and disaster recovery for SQL Server VMs


Mission-critical SQL Server workloads require support for high-availability and disaster recovery. To achieve this, GCP supports Windows Server Failover Clustering (WSFC) and SQL Server AlwaysOn Availability Groups. AlwaysOn Availability Groups is SQL Server’s flagship HA/DR solution, allowing you to configure replicas for automatic failover in case of failure. These replicas can be readable, allowing you to offload read workloads and backups.

Compute Engine users can now configure AlwaysOn Availability Groups. This includes configuring replicas on VMs in different isolated zones as described in these instructions.
A highly available SQL Server reference architecture using Windows Server Failover Clustering and SQL Server AlwaysOn Availability Groups
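Once an availability group is up, one way to keep an eye on replica health (a sketch of ours, not from this post) is to query SQL Server’s Always On DMVs. It assumes the pyodbc package and a SQL Server ODBC driver on the client, and the listener address and credentials are hypothetical placeholders.

# Illustrative sketch: list each availability group replica's role and
# synchronization health by querying the Always On DMVs.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 13 for SQL Server};"
    "SERVER=ag-listener.example.internal;"  # hypothetical AG listener name
    "DATABASE=master;UID=sqladmin;PWD=change-me"
)

rows = conn.execute("""
    SELECT ar.replica_server_name,
           rs.role_desc,
           rs.synchronization_health_desc
    FROM sys.dm_hadr_availability_replica_states AS rs
    JOIN sys.availability_replicas AS ar
      ON rs.replica_id = ar.replica_id
""").fetchall()

for server_name, role, health in rows:
    print(server_name, role, health)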


Better backups with VSS-integrated persistent disk snapshots for Windows VMs


Being able to take snapshots in coordination with the Volume Shadow Copy Service ensures that you get application-consistent snapshots of the persistent disks attached to a running Windows instance, without having to shut it down. This feature is useful when you want to take a consistent backup of VSS-enabled applications like SQL Server and Exchange Server without affecting the workload running on the VMs.

To get started with VSS-enabled persistent disk snapshots, open the Compute Engine section of the Cloud Console and select Snapshots. You'll see a new checkbox on the disk snapshot creation page that lets you specify whether a snapshot should be VSS-enabled.

This feature can also be invoked via the Cloud SDK (gcloud) and the API, following these instructions.
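For reference, such an invocation might look like the sketch below (our example; the disk name, zone and snapshot name are placeholders, and the command was in beta at the time, so verify against the linked instructions).

# Illustrative sketch: take a VSS-enabled, application-consistent snapshot
# of a persistent disk by calling the gcloud CLI with the guest-flush option.
import subprocess

subprocess.run([
    "gcloud", "beta", "compute", "disks", "snapshot", "my-windows-data-disk",
    "--zone", "us-central1-f",
    "--snapshot-names", "my-vss-snapshot",
    "--guest-flush",  # ask the Windows guest to quiesce writes via VSS first
], check=True)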

Looking ahead


GCP’s expanded support for SQL Server images and high availability are our latest efforts to improve Windows support on Compute Engine, and to build a cloud environment for enterprise Windows that leads the industry. Last year we expanded our list of pre-configured images to include SQL Server Standard, SQL Server Web and Windows Server 2016, and announced comprehensive .NET developer solutions, including a .NET client library for all GCP APIs through NuGet. We have lots more in store for the rest of 2017!

For more resources on Windows Server and Microsoft SQL Server on GCP, check out cloud.google.com/windows and cloud.google.com/sql-server. And for hands-on training on how to deploy and manage Windows and SQL Server workloads on GCP, come to the GCP NEXT ‘17 Windows Bootcamp. Finally, if you need help migrating your Windows workloads, don’t hesitate to contact us. We’re eager to hear your feedback!

Announcing the Google Code-in 2016 Winners!

Drum roll please! We are very proud to announce the 2016 Google Code-in (GCI) Grand Prize Winners and Finalists. Each year we see the number of student participants increase, and 2016 was no exception: 1,340 students from 62 countries completed an impressive 6,418 tasks. Winners and Finalists were chosen by the 17 open source organizations and are listed alphabetically below.
First is a list of our Grand Prize winners. These 34 teens completed an astounding 842 total tasks. Each Grand Prize winner will be flown to the Google campus for four days this summer to meet with Google engineers and enjoy the Bay Area.

GRAND PRIZE WINNERS
Name | Organization | Country
Matthew Marting | Apertium | United States
Shardul Chiplunkar | Apertium | United States
Michal Hanus | BRL-CAD | Czech Republic
Sudhanshu Agarwal | BRL-CAD | India
Alexandru Bratosin | CCExtractor Development | Romania
Evgeny Shulgin | CCExtractor Development | Russian Federation
Joshua Pan | Copyleft Games Group | United States
Shriank Kanaparti | Copyleft Games Group | India
Dhanat Satta-awalo | Drupal | Thailand
Utkarsh Dixit | Drupal | India
Kaisar Arkhan | FOSSASIA | Indonesia
Oana Roşca | FOSSASIA | Romania
Raefaldhi Amartya Junior | Haiku | Indonesia
Vanisha Kesswani | Haiku | India
Ilya Bizyaev | KDE | Russian Federation
Sergey Popov | KDE | Russian Federation
Anshuman Agarwal | MetaBrainz | India
Daniel Hsing | MetaBrainz | Hong Kong
Dhruv Shrivastava | Mifos | India
Sawan Kumar | Mifos | India
Ong Jia Wei, Isaac | Moving Blocks | Singapore
Scott Moses Sunarto | Moving Blocks | Indonesia
Mira Yang | OpenMRS | United States
Nji Collins | OpenMRS | Cameroon
Cristian García | Sugar Labs | Uruguay
Tymon Radzik | Sugar Labs | Poland
August van de Ven | SCoRe | Netherlands
Deniz Karakay | SCoRe | Turkey
Jacqueline Bronger | Systers | Germany
Soham Sen | Systers | India
Filip Grzywok | Wikimedia | Poland
Justin Du | Wikimedia | United States
Sampriti Panda | Zulip | India
Tommy Ip | Zulip | United Kingdom

And below are the Finalists. Each of these 51 students will receive a digital certificate of completion, a GCI t-shirt and hooded sweatshirt.

FINALISTS
Name | Organization
Bror Hultberg | Apertium
Kamil Bujel | Apertium
Ngadou Sylvestre | Apertium
Apratim Ranjan Chakrabarty | BRL-CAD
Tianyue Gao | BRL-CAD
Trung Nguyen Hoang | BRL-CAD
Danila Fedorin | CCExtractor Development
Manveer Basra | CCExtractor Development
Matej Plavevski | CCExtractor Development
Daniel Wee Soong Lim | Copyleft Games Group
Jonathan Pan | Copyleft Games Group
Oscar Belletti | Copyleft Games Group
Ashmith Kifah Sheik Meeran | Drupal
Heervesh Lallbahadur | Drupal
Neeraj Pandey | Drupal
Adarsh Kumar | FOSSASIA
Ridhwanul Haque | FOSSASIA
Sanchit Mishra | FOSSASIA
Dmytro Shynkevych | Haiku
Stephanie Fu | Haiku
Tudor Nazarie | Haiku
Harpreet Singh | KDE
Sangeetha S | KDE
Spencer Brown | KDE
Daniel Theis | MetaBrainz
Divya Prakash Mittal | MetaBrainz
Tigran Kostandyan | MetaBrainz
Illia Andrieiev | Mifos
Justin Du | Mifos
Tan Gemicioglu | Mifos
J Young Kim | Moving Blocks
Maxim Borsch | Moving Blocks
Quinn Roberts | Moving Blocks
Shivani Thaker | OpenMRS
Tenzin Zomkyi | OpenMRS
Yusuf Karim | OpenMRS
Emily Ong Hui Qi | Sugar Labs
Euan Ong | Sugar Labs
Pablo Salomón Ortega Quintana | Sugar Labs
Basil Najjar | SCoRe
Jupinder Parmar | SCoRe
Thuận Nguyễn | SCoRe
Muaaz Kasker | Systers
Muhammed Shamil K | Systers
Phoebe Fletcher | Systers
David Siedtmann | Wikimedia
Nikita Volobuev | Wikimedia
Yurii Shnitkovskyi | Wikimedia
Cynthia Lin | Zulip
Rafid Aslam | Zulip
Robert Hönig | Zulip


The Google Open Source Programs Office is proud to run this contest each year. The quality of work from our participating students is incredible, and each year we look forward to meeting our Grand Prize winners in person. It’s exciting to see the next generation of coders emerge! We also owe a huge debt of gratitude to all of the mentors who helped guide each of the participants through their tasks. Without their tireless work over the past 7 weeks, GCI would not be possible.

Stay tuned to the open source blog - we’ll regularly post Google Code-in 2016 stories in the upcoming months including a full breakdown of contest statistics, wrap-up posts from the organizations, student highlights and more.

By Mary Radomile, Open Source Programs Office

Join the POSSE Workshop on Student Involvement in Humanitarian Free and Open Source Software

Are you a university or college instructor interested in providing students with experience in real-world projects? Are you interested in supporting participation in humanitarian free and open source software (HFOSS)? If so, join the Professor's Open Source Software Experience (POSSE) workshop being held at Google’s San Francisco Office, April 20-22, 2017.

Over 100 faculty members have attended past workshops and there is a growing community of faculty members helping students learn within HFOSS projects. This three-stage faculty workshop will prepare you to support student participation in open source projects. In the workshop, you will:

  • Become part of the community of educators who involve students in HFOSS
  • Learn how to support student learning within real-world project environments
  • Motivate students and raise their appreciation of computing for social good
  • Meet and collaborate with instructors who have similar interests and goals

Workshop Format

Stage 1: Starts February 23, 2017 with online activities. These activities will take 2-3 hours per week and include interaction among workshop instructors and participants.

Stage 2: The face-to-face workshop will be held at the Google San Francisco office, April 20-22, 2017. Participants include the workshop organizers, POSSE alumni and members of the open source community.

Stage 3: Comprises online activities and interactions among small groups. Participants will have support while involving students in an HFOSS project in the classroom.

Please click here to learn more about the POSSE workshop in April.

How to Apply

To apply, please complete and submit the application by February 13th. Prior work with FOSS projects is not required. The POSSE workshop committee will send you a confirmation email to notify you of the status of your application by February 23rd, 2017.

Participant Support

POSSE is supported by the National Science Foundation (NSF) and Google. NSF funding will provide two nights of lodging and meals during the workshop. Travel costs will be covered up to $500. At this time, we can only support US-based faculty members. However, if you can support your own travel, please do submit an application!

Why is Google participating?

Google is participating in order to help educators overcome challenges identified in the POSSE workshop held last June, and to better support FOSS education in academia. We are very happy to host the first POSSE workshop located on the west coast of the United States.

See you in San Francisco this April!

By Helen Hu, Open Source Programs Office

Now accepting organization applications for Google Summer of Code 2017



We’re heading into the 13th year of Google Summer of Code (GSoC) and are now accepting applications for open source organizations. GSoC is a global program that gets student developers involved as open source contributors. Students spend three months working under the guidance of mentors on projects to expand and improve open source software.

Last year we had 178 open source organizations and 1,200 students participate. Open source organizations include open source projects and the umbrella organizations that often serve as their fiscal sponsors.



Do you represent a free or open source software organization? Are you seeking new contributors? (Of course!) Do you love the challenge and reward of mentoring new developers in your community? Apply to be a mentor organization for Google Summer of Code! Starting today, we are accepting applications from open source projects that would like to serve as mentor organizations for enthusiastic student developers.

The deadline to apply is February 9 at 16:00 UTC. Organizations chosen for GSoC 2017 will be posted on February 27.

Please visit the program site for more information on how to apply, a detailed timeline of important deadlines and general program information. We also encourage you to check out the Mentor Manual and join the discussion group. You can also learn more by exploring our series of guest posts written by mentor organizations who participated in GSoC 2016.

Best of luck to all of the applicants!

By Josh Simmons, Open Source Programs Office

Google Code-in 2016: another record breaking year

Today we celebrate the closing of the 7th annual Google Code-in (GCI) which, like last year, was bigger and better than ever. Mentors from each of the 17 organizations are busy reviewing the last of the work submitted by student participants.

Each organization will pick two Grand Prize Winners who will receive a trip to Google’s Northern California headquarters this summer where they will meet Google engineers, see exciting demos and presentations and enjoy a day of adventure in San Francisco. You can learn about the experiences of the 2015 Grand Prize Winners in our short series of wrap-up blog posts. We’ll announce the new Grand Prize Winners and the Finalists here on January 30.

We would like to congratulate all of the new and returning students who participated this year. We’re thrilled with the turnout: over the last seven weeks, 1,374* students from 62 countries completed 6,397* tasks in the contest.

And a HUGE thanks to the people who are the heart of our program: the mentors and organization administrators. These volunteers spend countless hours creating and reviewing hundreds of tasks. They teach the young students who participate in GCI about the many facets of open source development, from community standards and communicating across time zones to version control and testing. We couldn’t run this program without you!

By Josh Simmons, Open Source Programs Office

* These numbers will increase over the coming days as mentors review the final work submitted by students.

Automated node management, stateful apps and HIPAA compliance come to Google Container Engine



Today, we’re bringing the latest Kubernetes 1.5 release to Google Cloud Platform (GCP) customers. In addition to the full slate of features available in Kubernetes, Google Container Engine brings a simplified user experience for cross-cloud federation, support for running stateful applications and automated maintenance of your clusters.

Highlights of this Container Engine release include:

  • Auto-upgrade and auto-repair for nodes simplify on-going management of your clusters
  • Simplified cross-cloud federation with support for the new "kubefed" tool
  • Automated scaling for key cluster add-ons, ensuring improved uptime for critical cluster services
  • StatefulSets (originally called PetSets) in beta, enabling you to run stateful workloads on Container Engine
  • HIPAA compliance, allowing you to run HIPAA-regulated workloads in containers (after agreeing to Google Cloud’s standard Business Associate Agreement)

The adoption of Kubernetes and the growth of its community have propelled it to become one of the fastest growing and most active open source projects, and that growth is mirrored in the accelerating usage of Container Engine. By using the fully managed service, companies can focus on delivering value for their customers rather than on maintaining their infrastructure. Some recent customer highlights include:

  • GroupBy uses Container Engine to support continuous delivery of new commerce application capabilities for their customers, including retailers such as The Container Store, Urban Outfitters and CVS Health.

“Google Container Engine provides us with the openness, stability and scalability we need to manage and orchestrate our Docker containers. This year, our customers flourished during Black Friday and Cyber Monday with zero outages, downtime or interruptions in service thanks, in part, to Google Container Engine.” - Will Warren, Chief Technology Officer at GroupBy

  • MightyTV ported their workloads to Container Engine to power their video recommendation engine, reducing their cost by 33% compared to running on traditional virtual machines. Additionally, they were able to drop a third-party monitoring and logging service and no longer have to maintain Kubernetes on their own.


If you’d like to help shape the future of Kubernetes, the core technology Container Engine is built on, join the open Kubernetes community and participate via the kubernetes-users mailing list or chat with us on the kubernetes-users Slack channel.

Finally, if you’d like to try Kubernetes or GCP, it’s super easy to get started with one-click Kubernetes cluster creation in Container Engine. Sign up for a free trial here.
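If you want the node auto-upgrade and auto-repair features described above enabled from the start, cluster creation looks roughly like this sketch (ours, not from the original post). The cluster name, zone and size are placeholders, and both flags were beta at the time, so check the current gcloud reference.

# Illustrative sketch: create a Container Engine cluster with node
# auto-upgrade and auto-repair enabled by calling the gcloud CLI.
import subprocess

subprocess.run([
    "gcloud", "beta", "container", "clusters", "create", "demo-cluster",
    "--zone", "us-central1-f",
    "--num-nodes", "3",
    "--enable-autoupgrade",   # keep node versions in step with the cluster master
    "--enable-autorepair",    # recreate nodes that fail health checks
], check=True)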

Thank you for your support!

Google joins the Cloud Foundry Foundation



From the beginning, our goal for Google Cloud Platform has been to build the most open cloud for all developers and businesses alike, and make it easy for them to build and run great software. A big part of this is being an active member of the open source community and working directly with developers where they are, whether they’re at an emerging startup or a large enterprise.


Today we're pleased to announce that Google has joined the Cloud Foundry Foundation as a Gold member to further our commitment to these goals.

Building on success

We've done a lot of work with the Cloud Foundry community this year, including the delivery of the BOSH Google CPI release, enabling the deployment of Cloud Foundry on GCP, and the recent release of the Open Service Broker API. These efforts have led to additional development and integration with tools like Google Stackdriver for hybrid monitoring, as well as custom service brokers for eight of our GCP services.

Collaborating with customers and partners as we’ve worked on these projects made the decision to join the Cloud Foundry Foundation simple. It's an energized community with vast enterprise adoption, and the technical collaboration has been remarkable between the various teams.


What’s next

Joining the Cloud Foundry Foundation allows us to be even more engaged and collaborative with the entire Cloud Foundry ecosystem. And as we enter 2017, we look forward to even more integrations and more innovations between Google, the Cloud Foundry Foundation and our joint communities.