Tag Archives: Announcements

Artifact management for open source software

Posted by Kit Merker, JFrog

It's often said that open source is free as in speech, not free as in beer. But every so often, the developers behind an open source project can take advantage of free services to make their project better.

We believe in supporting the good work of open source projects to help the maintainers, who do an often thankless job, be more productive.


Last year, we collaborated with Google to announce the availability of Artifactory Pro hosted on Google Cloud Platform free of charge for qualifying open source projects. The idea was to make sure that open source maintainers could reliably share their build outputs between team members for development, testing and deployment. This helps ensure that the open source projects which developers around the world rely on are easy to consume.

Since the announcement, over 30 projects have qualified for and joined, including OpenMRS, Psono, and Grails.

If you run an open source project and are interested, we encourage you to apply.

Introducing Agones: Open-source, multiplayer, dedicated game-server hosting built on Kubernetes



In the world of distributed systems, hosting and scaling dedicated game servers for online, multiplayer games presents some unique challenges. And while the game development industry has created a myriad of proprietary solutions, Kubernetes has emerged as the de facto open-source, common standard for building complex workloads and distributed systems across multiple clouds and bare metal servers. So today, we’re excited to announce Agones (Greek for "contest" or "gathering"), a new open-source project that uses Kubernetes to host and scale dedicated game servers.

Currently under development in collaboration with interactive gaming giant Ubisoft, Agones is designed as a batteries-included, open-source, dedicated game server hosting and scaling project built on top of Kubernetes, with the flexibility you need to tailor it to the needs of your multiplayer game.

The nature of dedicated game servers


It’s no surprise that game server scaling is usually done by proprietary software—most orchestration and scaling systems simply aren’t built for this kind of workload.

Many of the popular fast-paced online multiplayer games, such as competitive FPSs, MMOs and MOBAs, require a dedicated game server—a full simulation of the game world—for players to connect to as they play within it. This dedicated game server is usually hosted somewhere on the internet to facilitate synchronizing the state of the game between players and to act as the arbiter of truth for each client playing the game, which has the added benefit of safeguarding against players cheating.

Dedicated game servers are stateful applications that retain the full game simulation in memory. But unlike other stateful applications, such as databases, they have a short lifetime. Rather than running for months or years, a dedicated game server runs for a few minutes or hours.

Dedicated game servers also need a direct connection to the hosting IP and port of the running game server process, rather than relying on load balancers. These fast-paced games are extremely sensitive to latency, and a load balancer only adds more of it. Also, because all the players connected to a single game server share the in-memory game simulation state at the same time, it's simpler to connect them to the same machine.

Here’s an example of a typical dedicated game server setup:


  1. Players connect to some kind of matchmaker service, which groups them (often by skill level) to play a match. 
  2. Once players are matched for a game session, the matchmaker tells a game server manager to provide a dedicated game server process on a cluster of machines.
  3. The game server manager creates a new instance of a dedicated game server process that runs on one of the machines in the cluster. 
  4. The game server manager determines the IP address and the port that the dedicated game server process is running on, and passes that back to the matchmaker service.
  5. The matchmaker service passes the IP and port back to the players’ clients.
  6. The players connect directly to the dedicated game server process and play the multiplayer game against one another. 

Building Agones on Kubernetes and open source

Agones replaces the bespoke cluster management and game server scaling solution described above with a Kubernetes cluster that includes a custom Kubernetes Controller and matching GameServer Custom Resource Definitions.
With Agones, Kubernetes gets native abilities to create, run, manage and scale dedicated game server processes within Kubernetes clusters using standard Kubernetes tooling and APIs. This model also allows any matchmaker to interact directly with Agones via the Kubernetes API to provision a dedicated game server.

Building Agones on top of Kubernetes has lots of other advantages too: it allows you to run your game workloads wherever it makes the most sense, for example, on game developers’ machines via platforms like minikube, in-studio clusters for group development, on-premises machines and on hybrid-cloud or full-cloud environments, including Google Kubernetes Engine.

Kubernetes also simplifies operations. Multiplayer games are never just dedicated game servers—there are always supporting services, account management, inventory, marketplaces etc. Having Kubernetes as a single platform that can run both your supporting services as well as your dedicated game servers drastically reduces the required operational knowledge and complexity for the supporting development team.

Finally, the people behind Agones aren't just one group building a game server platform in isolation. Agones, and the developers who use it, leverage the work of hundreds of Kubernetes contributors and the diverse ecosystem of tools that have been built around the Kubernetes platform.

As a founding contributor to the Agones project, Ubisoft brought its deep knowledge and expertise in running top-tier, AAA multiplayer games for a global audience:
“Our goal is to continually find new ways to provide the highest-quality, most seamless services to our players so that they can focus on their games. Agones helps by providing us with the flexibility to run dedicated game servers in optimal datacenters, and by giving our teams more control over the resources they need. This collaboration makes it possible to combine Google Cloud’s expertise in deploying Kubernetes at scale with our deep knowledge of game development pipelines and technologies.”  
Carl Dionne, Development Director, Online Technology Group, Ubisoft. 


Getting started with Agones 


Since Agones is built with Kubernetes’ native extensions, you can use all the standard Kubernetes tooling to interact with it, including kubectl and the Kubernetes API.

Creating a GameServer 

Authoring a dedicated game server to be deployed on Kubernetes is similar to developing a more traditional Kubernetes workload. For example, the dedicated game server is simply built into a container image like so:

Dockerfile
FROM debian:stretch
RUN useradd -m server

COPY ./bin/game-server /home/server/game-server
RUN chown -R server /home/server && \
    chmod o+x /home/server/game-server

USER server
ENTRYPOINT ["/home/server/game-server"]

By installing Agones into Kubernetes, you can add a GameServer resource to Kubernetes, with all the configuration options that also exist for a Kubernetes Pod.

gameserver.yaml
apiVersion: "stable.agon.io/v1alpha1"
kind: GameServer
metadata:
  name: my-game-server
spec:
  containerPort: 7654
  # Pod template
  template:
    spec:
      containers:
      - name: my-game-server-container
        image: gcr.io/agones-images/my-game-server:0.1

You can then apply it through the kubectl command or through the Kubernetes API:

$ kubectl apply -f gameserver.yaml
gameserver "my-game-server" created

Agones manages starting the game server process defined in the yaml, assigning it a public port, and retrieving the IP and port so that players can connect to it. It also tracks the lifecycle and health of the configured GameServer through an SDK that's integrated into the game server process code.

You can query Kubernetes to get details about the GameServer, including its State, and the IP and port that player game clients can connect to, either through kubectl or the Kubernetes API:

$ kubectl describe gameserver my-game-server
Name:         my-game-server
Namespace:    default
Labels:       <none>
Annotations:  <none>
API Version:  stable.agones.dev/v1alpha1
Kind:         GameServer
Metadata:
  Cluster Name:
  Creation Timestamp:  2018-02-09T05:02:18Z
  Finalizers:
    stable.agones.dev
  Generation:        0
  Initializers:      
  Resource Version:  13422
  Self Link:         /apis/stable.agones.dev/v1alpha1/namespaces/default/gameservers/my-game-server
  UID:               6760e87c-0d56-11e8-8f17-0800273d63f2
Spec:
  Port Policy:     dynamic
  Container:       my-game-server-container
  Container Port:  7654
  Health:
    Failure Threshold:      3
    Initial Delay Seconds:  5
    Period Seconds:         5
  Host Port:                7884
  Protocol:                 UDP
  Template:
    Metadata:
      Creation Timestamp:  
    Spec:
      Containers:
        Image:  gcr.io/agones-images/my-game-server:0.1
        Name:   my-game-server-container
        Resources:
Status:
  Address:    192.168.99.100
  Node Name:  agones
  Port:       7884
  State:      Ready
Events:
  Type    Reason    Age   From                   Message
  ----    ------    ----  ----                   -------
  Normal  PortAllocation  3s    gameserver-controller  Port allocated
  Normal  Creating        3s    gameserver-controller  Pod my-game-server-q98sz created
  Normal  Starting        3s    gameserver-controller  Synced
  Normal  Ready           1s    gameserver-controller  Address and Port populated
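
You can also get a quick listing of all GameServers with the standard get verb (a minimal sketch; the columns displayed vary by kubectl version):

$ kubectl get gameservers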

What’s next for Agones


Agones is still in its very early stages, but we're excited about its future! We're already working on new features like game server Fleets, planning a v0.2 release, and building a roadmap that includes support for Windows, game server statistics collection and display, node autoscaling and more.

If you would like to try out the v0.1 alpha release of Agones, you can install it directly on a Kubernetes cluster such as GKE or minikube and take it for a spin. We have a great installation guide that will take you through getting set up!
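
If you go the GKE route, spinning up a small test cluster before following the installation guide looks roughly like this (a minimal sketch with placeholder names, sizes and port range; the installation guide remains the authoritative reference, since exact flags and requirements may differ for your environment):

$ gcloud container clusters create agones-test \
    --machine-type=n1-standard-4 \
    --num-nodes=3

# Game clients connect directly to dedicated game servers over UDP,
# so the cluster's nodes need a firewall rule opening the port range
# that game servers will be allocated from.
$ gcloud compute firewall-rules create game-server-firewall \
    --allow=udp:7000-8000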

And we would love your help! There are multiple ways to get involved.

Thanks to everyone across Google Cloud Platform and Ubisoft who has been involved in the project so far. We're very excited for the future of Agones!

Student applications open for Google Summer of Code 2018

Originally posted by Josh Simmons from the Google Open Source Team on the Google Open Source Blog.

Ready, set, go! Today we begin accepting applications from university students who want to participate in Google Summer of Code (GSoC) 2018. Are you a university student? Want to use your software development skills for good? Read on.

Now entering its 14th year, GSoC gives students from around the globe an opportunity to learn the ins and outs of open source software development while working from home. Students receive a stipend for successful contributions, allowing them to focus on their project for the duration of the program. A passionate community of mentors helps students navigate technical challenges and monitors their progress along the way.

Past participants say the real-world experience that GSoC provides sharpened their technical skills, boosted their confidence, expanded their professional network and enhanced their resume.

Interested students can submit proposals on the program site between now and Tuesday, March 27, 2018 at 16:00 UTC.

While many students began preparing in February when we announced the 212 participating open source organizations, it's not too late to start! The first step is to browse the list of organizations and look for project ideas that appeal to you. Next, reach out to the organization to introduce yourself and determine if your skills and interests are a good fit. Since spots are limited, we recommend writing a strong proposal and submitting a draft early so you can get feedback from the organization and increase the odds of being selected.

You can learn more about how to prepare in the video below and in the Student Guide.

You can find more information on our website, including a full timeline of important dates. We also highly recommend perusing the FAQ and Program Rules, as well as joining the discussion mailing list.

Remember to submit your proposals early as you only have until Tuesday, March 27 at 16:00 UTC. Good luck to all who apply!


Announcing new Stackdriver pricing — visibility for less



Today we're introducing simplified pricing for Stackdriver Monitoring and Logging, and bringing advanced functionality that was limited to a premium pricing tier to all Stackdriver users.

Starting June 30, 2018, you get the advanced alerting and notification options you need to monitor your cloud applications, as well as the flexibility to create monitoring dashboards and alerting policies—without having to opt-in to premium pricing.

Stackdriver Monitoring


Stackdriver Monitoring provides visibility into the performance, uptime and overall health of cloud-powered applications. A hybrid service, Stackdriver Monitoring integrates with GCP, AWS and a variety of common application components.

Highlights of the new Stackdriver Monitoring pricing model include:

  • Flexible pay-as-you-go pricing model that optimizes your spend—pay only for the monitoring data you send, not by the number of resources you have in your projects.
  • Permanent free allocation replaces free trials—all GCP metrics and the first 150 MB of non-GCP metrics per month are available at no cost.
  • Automatic volume-based discounts—for non-GCP metrics, including agent metrics, AWS metrics, logs-based metrics and custom metrics, volume-based pricing of $0.258 down to $0.061 per MB ingested represents a discount of up to 80% over previously announced prices. (A quick worked example follows this list.)
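
As a quick illustration using the figures above (the usage level is assumed purely for the arithmetic, and the entry rate is taken to apply at low volumes): a project ingesting 200 MB of non-GCP metrics in a month pays nothing for the first 150 MB and, at $0.258 per MB, about $12.90 for the remaining 50 MB, with the lower per-MB rates applying as monthly volume grows.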


Stackdriver Logging


The key to a well-managed application is retaining meaningful quantities of logging data. Stackdriver Logging allows you to store, search, analyze, monitor and alert on log data and events from GCP and AWS, or to ingest custom log data from any source. Beginning today, we're increasing log retention from seven days to 30 days for all users, regardless of tier. In addition, we're delaying enforcement of log pricing from our previously announced date of March 31 until June 30.

The pricing model for logs is:

  • 50 GB per month free allocation of logs ingested
  • Logs over the free allocation are billed based on volume ingested at $0.50 per GB (see the worked example after this list)
  • Stackdriver Monitoring and Logging are priced independently
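
As a worked example using the figures above (the volume is assumed purely for the arithmetic): ingesting 80 GB of logs in a month costs nothing for the first 50 GB and $0.50 per GB for the remaining 30 GB, or $15, regardless of what you spend on Stackdriver Monitoring.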


In order to help you control costs, we also provide exclusion filters that enable you to pay only for the logs you want to keep—or even to turn off log ingestion to Stackdriver completely while still allowing logs to be exported to GCS, Pub/Sub or BigQuery.
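
Exclusion filters use the same advanced filter syntax as log searches. As a small sketch (the resource type and severity threshold here are arbitrary examples, not recommendations), a filter like the following would drop all Compute Engine instance logs below ERROR severity from ingestion:

resource.type="gce_instance" AND severity < ERROR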

Here at Google Cloud, we believe that monitoring, logging and performance management are the foundation of any well-managed application—in our cloud, on another cloud, or on-premises. We hope that this new pricing model will enable you to use the Stackdriver family of tools widely and freely. Thank you for your continued feedback—it helps us make our products better. To learn more about Stackdriver, check out our documentation or join in the conversation in our discussion group.

Congratulating the latest Open Source Peer Bonus winners

Originally posted by Maria Webb from the Google Open Source Team on the Google Open Source Blog.

To kick off the new year, we're pleased to announce the first round of Open Source Peer Bonus winners. First started by the Google Open Source team seven years ago, this program encourages Google employees to express their gratitude to open source contributors.

As part of the program, Googlers nominate open source contributors outside of the company for their contributions to open source projects, including those used by Google. Nominees are reviewed by a team of volunteers and the winners receive our heartfelt thanks with a token of our appreciation.

So far more than 600 contributors from dozens of countries have received Open Source Peer Bonuses for volunteering their time and talent to over 400 open source projects. You can find some of the previous winners in these blog posts.

We'd like to recognize the latest round of winners and the projects they worked on. Listed below are the individuals who gave us permission to thank them publicly:

Adrien Devresse (Abseil C++)
Weston Ruter (AMP Plugin for WordPress)
Thierry Muller (AMP Plugin for WordPress)
Adam Silverstein (AMP Project)
Levi Durfee (AMP Project)
Fabian Wiles (Angular)
Paul King (Apache Groovy)
Eric Eide (C-Reduce)
John Regehr (C-Reduce)
Yang Chen (C-Reduce)
Ajith Kumar Velutheri (Chromium)
Orta Therox (CocoaPods)
Idwer Vollering (coreboot)
Paul Ganssle (dateutil)
Zach Leatherman (Eleventy)
Daniel Stone (freedesktop.org)
Sergiu Deitsch (glog)
Jonathan Bluett-Duncan (Guava)
Karol Szczepański (Heracles.ts)
Paulus Schoutsen (Home Assistant)
Nathaniel Welch (Fog for Google)
Shannon Coen (Istio)
Max Beatty (jsPerf)
Friedel Ziegelmayer (Karma)
Davanum Srinivas (Kubernetes)
Jennifer Rondeau (Kubernetes)
Jessica Yao (Kubernetes)
Qiming Teng (Kubernetes)
Zachary Corleissen (Kubernetes)
Reinhard Nägele (Kubernetes Charts)
Erez Shinan (Lark)
Alex Gaynor (Mercurial)
Anna Henningsen (Node.js)
Michaël Zasso (Node.js)
Michael Dalessio (Nokogiri)
Gina Häußge (OctoPrint)
Michael Stramel (Polymer)
La Vesha Parker (Progressive HackNight)
Ian Stapleton Cordasco (Python Code Quality Authority)
Fabian Henneke (Secure Shell)
Rob Landley (Toybox)
Peter Wong (V8)
Timothy Gu (Web platform & Node.js)
Ola Hugosson (WebM)
Dominic Symes (WebM & AOMedia)

To each and every one of you: thank you for your contributions to the open source community and congratulations!

Expanding the reach of Google Cloud Platform’s HIPAA-compliant offerings for healthcare



At Google Cloud, we strive to create innovative and elegant solutions to help you address the unique challenges of your industries. In particular, we have a strong and growing focus on making Google Cloud the best platform for the healthcare industry, as has been evidenced at numerous events over the past year, including HIMSS, RSNA and Google Cloud Next. We've showcased a number of solutions, including a clinical data warehouse, integration with multiple radiology workflows, API-enabling an entire country's healthcare system, as well as petabyte-scale genomics processing capabilities.

Of course, no solution, whether it be for handling patient data or billing records, can be considered complete without proper consideration of the relevant data security and compliance requirements. Google Cloud Platform (GCP) offers market-leading security technologies, such as encryption by default, both at rest and in transit, trusted server boot and data loss prevention tools, which can help our customers jumpstart their compliance journeys. We've been steadily increasing the number of services covered by the Google Cloud Platform HIPAA Business Associate Agreement (BAA) in line with the overall growth of the product suite. Currently, we have around 75% of applicable GCP services covered under our BAA.

Today we're excited to announce a new addition to our HIPAA BAA, Google App Engine. App Engine offers customers the ability to build highly scalable web and mobile applications without having to worry about managing the underlying infrastructure and other overhead that comes from managing large-scale web applications. With this release, customers will now be able to leverage App Engine to build applications serving the healthcare sector. Many of our customers, for example CSG Actuarial, LLC, are already taking advantage of these additions:
"CSG Actuarial, LLC utilizes Google Cloud Platform to quickly design and implement innovative solutions for producers in the insurance industry. We are excited to be introducing a multi-carrier Medicare Supplement online enrollment tool in February, where we will be able to securely store personal health information with Google App Engine under the Google Cloud Platform HIPAA BAA" 
Bryan Neary, Principal, CSG Actuarial, LLC 
In addition, GCP’s AI and machine learning capabilities, including the Speech, Translation, Vision, and Natural Language APIs, as well as Cloud Machine Learning Engine, are covered by the HIPAA BAA. These products allow customers to leverage pre-trained models in the form of APIs and custom trained models with Cloud Machine Learning Engine. Cloud Machine Learning Engine provides a managed solution for the popular TensorFlow open source machine learning framework, allowing customers to develop, train, and run custom models on HIPAA-covered data on Google Cloud.
"The ability to train robust machine learning models in a secure, privacy-respecting and HIPAA compliant manner is central to our business. We found Google Cloud Platform to go beyond our expectations in terms of supporting infrastructure enabling us to focus on developing our core application and building on top of the stack that GCP provides."   
John Axerio-Cilies, CTO, Arterys

While there's no formal certification process recognized by the US Department of Health and Human Services for HIPAA compliance, and complying with HIPAA is a shared responsibility between the customer and Google, GCP has undergone several independent audits to assess the controls present in our systems, facilities and operations. This includes the ISO 27018 international standard of practice for protection of personally identifiable information in public cloud services. More information on GCP’s security and compliance efforts, as well as the complete list of services covered by our HIPAA BAA, can be found on our comprehensive compliance site here.

In addition to supporting healthcare, we have also developed industry guidance for life sciences customers working to deploy and validate "good practices" (commonly referred to as GxP) on GCP. Please contact a Google Cloud representative for details. Our recently announced partner, Flex, just launched its BrightInsight platform, built on GCP, which enables pharmaceutical and medical technology companies to optimize therapies through better data management and analysis from Class I, II and III medical devices and combination products. BrightInsight will accelerate the R&D and go-to-market timelines for these companies by delivering a secure, managed infrastructure service for regulated medical devices and therapies, and by providing a platform for the development of advanced machine learning and analytics capabilities that deliver real-time, actionable insights to its customers.
"Flex saw the need for a secure cloud platform designed to support highly-regulated connected drug delivery and medical devices, going beyond simple connectivity to deliver real-time intelligence and actionable insights. Through our strategic partnership with Google, we will be able to deliver a new level of intelligence to healthcare all built within a regulated, managed services framework that is designed to comply with the varying privacy and security laws around the world."  
Kal Patel, MD, Senior Vice President of Digital Health, Flex
To learn more about what we’re doing in the healthcare and life sciences space, visit us at HIMSS this week. In particular, come learn the basics of how to set up a project in the Developer Innovation Lab that can support HIPAA compliance so that you can take advantage of our comprehensive infrastructure, analytics and machine learning capabilities.


Richer Google Analytics User Management

Today we are introducing more powerful ways to manage access to your Analytics accounts: user groups inside Google Analytics, and enforceable user policies. These new features increase your ability to tightly manage who has access to your data, and amplify the impact of the user management features we launched last year.

User Groups

User groups can now be created from and used within Google Analytics, simplifying user management across teams of people. This is a big time saver if you find yourself repeatedly giving out similar permissions to many people, and simplifies granting permissions as individuals rotate into or out of a team.

To get started with user groups, visit either Suite Home or Google Analytics, navigate to the user management section, and click the "+" button. You will then see an option to add new groups, which will walk you through creating a user group, adding people to it, and assigning permissions to the group. Here is a full list of steps for creating a user group.

Google Analytics User Management page highlighting the new option to create a user group

Enforced User Policies


Google Analytics 360 Suite user policies let you define which users can have access to your Analytics accounts and which cannot. When a user violates a policy, you are warned through the user management section in Google Analytics or Suite Home and have the option to remove that user from your organization.

We have enhanced these policies so that you can choose to block policy-violating users from being added to your Analytics accounts. Policies aren't enforced by default, but you now have the option to block these additions. When you create or edit your organization's user policy, you will see a toggle switch like the one below:

User policy setup showcasing the new enforced policy option

User groups and enforced user policies are supported in Google Analytics today, and support for more products is coming as we continue to plan features that help customers better manage access to their critical business data.

Posted by Matt Matyas, Product Manager Google Analytics 360 Suite

Managing your Compute Engine instances just got easier



If you use Compute Engine, you probably spend a lot of time creating, cloning and managing VM instances. We recently added new management features that will make performing those tasks much easier.

More ways to create instances and use instance templates


With the recent updates to Compute Engine instance templates, you can now create instances from existing instance templates, and create instance templates based on existing VM instances. These features are available independently of managed instance groups, giving you more power and flexibility in creating and managing your VM instances.

Imagine you're running a VM instance as part of your web-based application, and are moving from development to production. You can now configure your instance exactly the way you want it and then save your golden config as an instance template. You can then use the template to launch as many instances as you need, configured exactly the way you want. In addition, you can tweak VMs launched from an instance template using the override capability.

You can create instance templates using the Cloud Console, the gcloud CLI or the API. Let's look at how to create an instance template and an instance from the console. Select a VM instance, click on the "Create instance" drop-down button, and choose "From template." Then select the template you would like to use to create the instance.
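
The same workflow is available from the gcloud CLI. Here's a minimal sketch with placeholder instance, template and zone names (depending on your Cloud SDK version, some of these flags may still live in the beta command group): first capture an existing VM as a template, then launch new instances from it.

$ gcloud compute instance-templates create my-template \
    --source-instance=my-instance \
    --source-instance-zone=us-central1-a

$ gcloud compute instances create web-server-1 \
    --source-instance-template=my-template \
    --zone=us-central1-a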

Create multiple disks when you launch a VM instance


Creating a multiple disk configuration for a VM instance also just got easier. Now you can create multiple persistent disks as part of the virtual machine instance creation workflow. Of course, you can still attach disks later to existing VM instances—that hasn’t changed.

This feature is designed to help you when you want to create data disks and/or application disks that are separate from your operating system disk. You can also use the ability to create multiple disks on launch for instances within a managed instance group by defining multiple disks in the instance template, which makes the MIG a scalable way to create a group of VMs that all have multiple disks.

To create additional disks in the Google Cloud SDK (gcloud CLI), use the --create-disk flag.
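
For example, here's a rough sketch of launching a VM with two extra persistent disks from the CLI (the instance and disk names, sizes and zone are placeholders):

$ gcloud compute instances create my-instance \
    --zone=us-central1-a \
    --create-disk=name=data-disk,size=200GB \
    --create-disk=name=app-disk,size=50GB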

Create an image from a running VM instance


When creating an image of a VM instance for cloning, sharing or backup purposes, you may not want to disrupt the services running on that instance. Now you can create images from a disk that's attached to a running VM instance. From the Cloud Console, check the “Keep instance running” checkbox, or from the API, set the force-create flag to true.
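
From the command line, the equivalent looks roughly like this (the disk, zone and image names are placeholders, and depending on your Cloud SDK version the flag may only be available in the beta command group):

$ gcloud compute images create my-backup-image \
    --source-disk=my-boot-disk \
    --source-disk-zone=us-central1-a \
    --force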


Protect your virtual machines from accidental deletion


Accidents happen from time to time, and sometimes that means you delete a VM instance and interrupt key services. You can now protect your VMs from accidental deletion by setting a simple flag. This is especially important for VM instances running critical workloads and applications such as SQL Server instances, shared file system nodes, license managers, etc.

You can enable (and disable) the flag using the Cloud Console, SDK or the API. The screenshot below shows how to enable it through the UI, and how to view the deletion protection status of your VM instances from the list view.
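
From the gcloud CLI, the flag can be set at creation time or toggled on an existing instance. A minimal sketch with placeholder names (some of these flags may require the beta command group depending on your Cloud SDK version):

$ gcloud compute instances create my-sql-server \
    --zone=us-central1-a \
    --deletion-protection

$ gcloud compute instances update my-sql-server \
    --zone=us-central1-a \
    --no-deletion-protection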

Conclusion


If you already use Compute Engine, you can start using these new features right away from the console, Google Cloud SDK or through APIs. If you aren’t yet using Compute Engine, be sure to sign up for a free trial to get $300 in free cloud credits. To learn more, please visit the instance template, instance creation, custom images and deletion protection product documentation pages.