Tag Archives: Announcements

Google Analytics is Enhancing Support for AMP

Over the past year, developers have adopted the Accelerated Mobile Pages (AMP) technology to build faster-loading pages for all types of sites, ranging from news to recipes to e-commerce. Billions of AMP pages have been published to date and Google Analytics continues its commitment to supporting our customers who have adopted AMP.

However, we have heard feedback from Google Analytics customers around challenges in understanding the full customer journey due to site visitors being identified inconsistently across AMP and non-AMP pages. So we're announcing today that we are rolling out an enhancement that will give you an even more accurate understanding of how people are engaging with your business across AMP and non-AMP pages of your website.

How will this work?
This change brings consistency to users across AMP and non-AMP pages served from your domain. It will have the effect of improving user analysis going forward by unifying your users across the two page formats. It does not affect AMP pages served from the Google AMP Cache or any other AMP cache.

When will this happen?
We expect these improvements to be complete, across all Google Analytics accounts, over the next few weeks.

Are there any other implications of this change?
As we unify your AMP and non-AMP users when they visit your site in the future, you may see changes in your user and session counts, including changes to related metrics. User and session counts will go down over time as we recognize that two formerly distinct IDs are in fact the same user; however, at the time this change commences, the metric New Users may rise temporarily as IDs are reset.

In addition, metrics like time on site, page views per session, and bounce rate will rise, since sessions that include both AMP and non-AMP pageviews are no longer treated as multiple sessions. This is a one-time transition that will continue until all of your users who have viewed AMP pages in the past are unified (this can take a short or long time depending on how quickly your users return to your site or app).

Is there anything I need to do to get this update?
There is no action required on your part; these changes will be rolled out automatically.

Will there be changes to unify users who view my pages both on my domain and in other contexts?
Some AMP pages are not visited directly on the domain where the content is originally hosted, but instead via AMP caches or in platform experiences. However, we decided to focus on fixing the publisher-domain case first, as this was the fastest way we could add value for our clients.

We are committed to ensuring the best quality data for user journey analysis across AMP and non-AMP pages alike and this change makes that easy for AMP pages served on your domain. We hope you enjoy these improvements - and as always, happy analyzing!

Introducing Google Cloud IoT Core: for securely connecting and managing IoT devices at scale



Today we're announcing a new fully-managed Google Cloud Platform (GCP) service called Google Cloud IoT Core. Cloud IoT Core makes it easy for you to securely connect your globally distributed devices to GCP, centrally manage them and build rich applications by integrating with our data analytics services. Furthermore, all data ingestion, scalability, availability and performance needs are automatically managed for you in GCP style.

When used as part of a broader Google Cloud IoT solution, Cloud IoT Core gives you access to new operational insights that can help your business react to, and optimize for, change in real time. This advantage has value across multiple industries; for example:
  • Utilities can monitor, analyze and predict consumer energy usage in real time
  • Transportation and logistics firms can proactively stage the right vehicles/vessels/aircraft in the right places at the right times
  • Oil and gas and manufacturing companies can enable intelligent scheduling of equipment maintenance to maximize production and minimize downtime

So, why is this the right time for Cloud IoT Core?


About all the things


Many enterprises that rely on industrial devices such as sensors, conveyor belts, farming equipment, medical equipment and pumps (particularly globally distributed ones) are struggling to monitor and manage those devices for several reasons:
  • Operational cost and complexity: The overhead of managing the deployment, maintenance and upgrades for exponentially more devices is stifling. And even with a custom solution in place, the resource investments required for necessary IT infrastructure are significant.
  • Patchwork security: Ensuring world-class, end-to-end security for globally distributed devices is out of reach or at least not a core competency for most organizations.
  • Data fragmentation: Despite the fact that machine-generated data is now an important data source for making good business decisions, the massive amount of data generated by these devices is often stored in silos with a short expiration date, and hence never reaches downstream analytic systems (nor decision makers).
Cloud IoT Core is designed to help resolve these problems by removing risk, complexity and data silos from the device monitoring and management process. Instead, it offers you the ability to more securely connect and manage all your devices as a single global system. Through a single pane of glass you can ingest data generated by all those devices into a responsive data pipeline and, when combined with other Cloud IoT services, analyze and react to that data in real time.

Key features and benefits


Several key Cloud IoT Core features help you meet these goals, including:

  • Fast and easy setup and management: Cloud IoT Core lets you connect up to millions of globally dispersed devices into a single system, with smooth, even data ingestion under any condition. Devices are registered to your service quickly and easily via the industry-standard MQTT protocol (see the example commands after this list). For Android Things-based devices, firmware updates can be automatic.
  • Security out-of-the-box: Secure all device data via industry-standard security protocols. (Combine Cloud IoT Core with Android Things for device operating-system security, as well.) Apply Google Cloud IAM roles to devices to control user access in a fine-grained way.
  • Native integration with analytic services: Ingest all your IoT data so you can manage it as a single system and then easily connect it to our native analytic services (including Google Cloud Dataflow, Google BigQuery and Google Cloud Machine Learning Engine) and partner BI solutions (such as Looker, Qlik, Tableau and Zoomdata). Pinpoint potential problems and uncover solutions using interactive data visualizations, or build rich machine-learning models that reflect how your business works.
  • Auto-managed infrastructure: All this in the form of a fully-managed, pay-as-you-go GCP service, with no infrastructure for you to deploy, scale or manage.
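As a rough sketch of the registration flow described in the first bullet above, the commands below create a Cloud Pub/Sub topic for telemetry, a device registry that publishes to it, and a single device identified by an RSA certificate. The resource names (my-telemetry, my-registry, my-device) and the key file rsa_cert.pem are placeholders, and the exact gcloud command group and flags available during the private beta may differ from this sketch:

gcloud pubsub topics create my-telemetry

gcloud iot registries create my-registry \
    --region us-central1 \
    --event-notification-config topic=my-telemetry

gcloud iot devices create my-device \
    --registry my-registry \
    --region us-central1 \
    --public-key path=rsa_cert.pem,type=rsa-x509-pem

Once a device authenticates over MQTT and starts publishing, its telemetry lands on the my-telemetry topic, where a Pub/Sub subscription can feed it into Cloud Dataflow, BigQuery or any other downstream pipeline.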
"With Google Cloud IoT Core, we have been able to connect large fleets of bicycles to the cloud and quickly build a smart transportation fleet management tool that provides operators with a real-time view of bicycle utilization, distribution and performance metrics, and it forecasts demand for our customers."
– Jose L. Ugia, VP Engineering, Noa Technologies

Next steps

Cloud IoT Core is currently available as a private beta, and we’re launching with these hardware and software partners:

Cloud IoT Device Partners
Cloud IoT Application Partners

When generally available, Cloud IoT Core will serve as an important, foundational tool for hardware partners and customers alike, offering scalability, flexibility and efficiency for a growing set of IoT use cases. In the meantime, we look forward to your feedback!

Cloud Spanner is now production-ready; let the migrations begin!



Cloud Spanner, the world’s first horizontally scalable and strongly consistent relational database service, is now generally available for your mission-critical OLTP applications.

We’ve carefully designed Cloud Spanner to meet customer requirements for enterprise databases — including ANSI 2011 SQL support, ACID transactions, 99.999% availability and strong consistency — without compromising latency. As a combined software/hardware solution that includes atomic clocks and GPS receivers across Google’s global network, Cloud Spanner also offers additional accuracy, reliability and performance in the form of a fully-managed cloud database service. Thanks to this unique combination of qualities, Cloud Spanner is already delivering long-term value for our customers with mission-critical applications in the cloud, including customer authentication systems, business-transaction and inventory-management systems, and high-volume media systems that require low latency and high throughput. For example, Snap uses Cloud Spanner to power part of its search infrastructure.

Looking toward migration


In preparation for general availability, we’ve been working closely with our partners to make adoption as smooth and easy as possible. Thus today, we're also announcing our initial data integration partners: Alooma, Informatica and Xplenty.

Now that these partners are in the early stages of Cloud Spanner “lift-and-shift” migration projects for customers, we asked a couple of them to pass along some of their insights about the customer value of Cloud Spanner, as well as any advice about planning for a successful migration:

From Alooma:

“Cloud Spanner is a game-changer because it offers horizontally scalable, strongly consistent, highly available OLTP infrastructure in the cloud for the first time. To accelerate migrations, we recommend that customers replicate their data continuously between the source OLTP database and Cloud Spanner, thereby maintaining both infrastructures in the same state — this allows them to migrate their workloads gradually in a predictable manner.”

From Informatica:
“Informatica customers are stretching the limits of latency and data volumes, and need innovative enterprise-scale capabilities to help them outperform their competition. We are excited about Cloud Spanner because it provides a completely new way for our mutual customers to disrupt their markets. For integration, migration and other use cases, we are partnering with Google to help them ingest data into Cloud Spanner and integrate a variety of heterogeneous batch, real-time, and streaming data in a highly scalable, performant and secure way.”

From Xplenty:
"Cloud Spanner is one of those cloud-based technologies for which businesses have been waiting: With its horizontal scalability and ACID compliance, it’s ideal for those who seek the lower TCO of a fully managed cloud-based service without sacrificing the features of a legacy, on-premises database. In our experience with customers migrating to Cloud Spanner, important considerations include accounting for data types, embedded code and schema definitions, as well as understanding Cloud Spanner’s security model to efficiently migrate your current security and access-control implementation."

Next steps


We encourage you to dive into a no-cost trial to experience first-hand the value of a relational database service that offers strong consistency, mission-critical availability and global scale (contact us about multi-regional instances) with no workarounds — and with no infrastructure for you to deploy, scale or manage. (Read more about Spanner’s evolution inside Google in this new paper presented at the SIGMOD ‘17 conference today.) If you like what you see, a growing partner ecosystem is standing by for migration help, and to add further value to Cloud Spanner use cases via data analytics and visualization tooling.
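If you want a concrete starting point for that trial, the sketch below shows roughly how to create a small regional instance and a database from the command line, and run a trivial query against it. The instance and database names are placeholders, and the exact flags should be checked against the current gcloud reference:

gcloud spanner instances create test-instance \
    --config regional-us-central1 \
    --description "Cloud Spanner trial" \
    --nodes 1

gcloud spanner databases create example-db --instance test-instance

gcloud spanner databases execute-sql example-db \
    --instance test-instance \
    --sql "SELECT 1"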

Students, Start Your Engineerings!


It’s that time again! Our 201 mentoring organizations have selected the 1,318 students they look forward to working with during the 13th Google Summer of Code (GSoC). Congratulations to our 2017 students and a big thank you to everyone who applied!

The next step for participating students is the Community Bonding period which runs from May 4th through May 30th. During this time, students will get up to speed on the culture and toolset of their new community. They’ll also get acquainted with their mentor and learn more about the languages or tools they will need to complete their projects. Coding commences May 30th.

To the more than 4,200 students who were not chosen this year - don’t be discouraged! Many students apply more than once to GSoC before being accepted. You can improve your odds for next time by contributing to the open source project of your choice directly; organizations are always eager for new contributors! Look around GitHub and elsewhere on the internet for a project that interests you and get started.

Happy coding, everyone!

By Cat Allman, Google Open Source

Google Cloud Natural Language API launches new features and Cloud Spanner graduating to GA



Today at Google Cloud Next London we're excited to announce product news that will help customers innovate and transform their businesses faster via the cloud: first, that Google Cloud Natural Language API is adding support for new languages and entity sentiment analysis, and second, that Google Cloud Spanner is graduating to general availability (GA).

Cloud Natural Language API beta


Since we launched Cloud Natural Language API, a fully managed service for extracting meaning from text via machine learning, we’ve seen customers such as Evernote and Ocado enhance their businesses in fascinating ways. For example, they use Cloud Natural Language API to analyze customer feedback and sentiment, extract key entities and metadata from unstructured text such as emails or web articles, and enable novel features (such as deriving action items from meeting notes).

These use cases, among many others, highlighted the need to expand language support and improve the quality of our base NLU technology. We've incorporated this feedback into the product and are pleased to announce the following new capabilities in beta:

  • Expanded language support for entity, document sentiment and syntax analysis for the following languages: Chinese (Simplified and Traditional), French, German, Italian, Korean and Portuguese. This is in addition to existing support for English, Spanish and Japanese.
  • Understand sentiment for specific entities, not just the whole document or sentence: We're introducing a new method that identifies entities in a block of text and also determines sentiment for those entities. Entity sentiment analysis is currently available for English only. For more information, see Analyzing Entity Sentiment.
  • Improved quality for sentiment and entity analysis: As part of the continuous effort to improve quality of our base models, we're also launching improved models for sentiment and entity analysis as part of this release.

Early access users of this new functionality such as Wootric are already using the expanded language support and new entity sentiment analysis feature to better understand customer sentiment around brands and products. For example, for customer feedback such as “the phone is expensive but has great battery life,” users can now parse that the sentiment for phone is negative while the sentiment for battery life is positive.
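To make that example concrete, here is a hedged sketch of an entity sentiment request against the beta REST surface; the v1beta2 analyzeEntitySentiment method name and request fields should be confirmed against the Analyzing Entity Sentiment documentation, and API_KEY is a placeholder for your own key:

curl -s -X POST \
    -H "Content-Type: application/json" \
    --data '{
      "document": {
        "type": "PLAIN_TEXT",
        "language": "en",
        "content": "The phone is expensive but has great battery life."
      },
      "encodingType": "UTF8"
    }' \
    "https://language.googleapis.com/v1beta2/documents:analyzeEntitySentiment?key=API_KEY"

The response contains one entry per detected entity (for example, phone and battery life), each with its own sentiment score and magnitude rather than a single document-level score.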

As the API becomes more widely adopted, we're looking forward to seeing more interesting and useful applications of it.

Cloud Spanner enters GA

Announced in March at Google Cloud Next ‘17, Cloud Spanner is the world’s first fully managed, horizontally scalable relational database service for mission-critical online transaction processing (OLTP) applications. Cloud Spanner is specifically designed to meet customer requirements in this area for strong consistency, high availability and global scale, qualities that make it unique as a service.

During the beta period, we were thrilled to see customers unlock new use cases in the cloud with Cloud Spanner, including:

  • Powering mission-critical applications like customer authentication and provisioning for multi-national businesses
  • Building consistent systems for business transactions and inventory management in the financial services and retail industries
  • Supporting incredibly high-volume systems that need low latency and high throughput in the advertising and media industries

As with all our other services, GCP handles all the performance, scalability and availability needs automatically in a pay-as-you-go way.

On May 16, Cloud Spanner will reach a further milestone, becoming generally available. Currently we're offering regional instances, with multi-regional instances coming later this year. We've been Spanner users ourselves for more than five years, supporting a variety of mission-critical global apps, and we can’t wait to see what workloads you bring to the cloud, and which new ones you build next!

Join the first POSSE Workshop in Europe

We are excited to announce that the Professors’ Open Source Software Experience (POSSE) is expanding to Europe! POSSE is an event that brings together educators interested in providing students with experience in real-world projects through participation in humanitarian free and open source software (HFOSS) projects.

Over 100 faculty members have attended past workshops and there is a growing community of instructors teaching students through contributions to HFOSS. This three-stage faculty workshop will prepare you to support student participation in open source projects. During the workshop, you will:

  • Learn how to support student learning within real-world project environments
  • Motivate students and cultivate their appreciation of computing for social good
  • Collaborate with instructors who have similar interests and goals
  • Join a community of educators passionate about HFOSS

Workshop Format

Stage 1: Starts May 8, 2017 with online activities. Activities will take 2-3 hours per week and include interaction with workshop instructors and participants.
Stage 2: The face-to-face workshop will be held in Bologna, Italy, July 1-2, 2017 and is a pre-event for the ACM ITiCSE conference. Workshop participants include the workshop organizers, POSSE alumni, and members of the open source community.
Stage 3: Online activities and interactions in small groups immediately following the face-to-face workshop. Participants will have support while involving students in an HFOSS project in the classroom.

How to Apply

If you’re a full-time instructor at an academic institution outside of the United States, you can join the workshop being held in Bologna, Italy, July 1-2, 2017. Please complete and submit the application by May 1, 2017. Prior work with FOSS projects is not required. English is the official language of the workshop. The POSSE workshop committee will send an email notifying you of the status of your application by May 5, 2017.

Participant Support

The POSSE workshop in Europe is supported by Google. Attendees will be provided with funding for two nights lodging ($225 USD per night) and meals during the workshop. Travel costs will also be covered up to $450 USD. Participants are responsible for any charges above these limits. At this time, we can only support instructors at institutions of higher education outside of the U.S. For faculty at U.S. institutions, the next POSSE will be in fall 2017 on the east coast of the U.S.

We look forward to seeing you at the POSSE workshop in Italy!

By Helen Hu, Open Source Programs Office

Introducing Marketing Mix Model Partners: Helping brands better understand the impact of their marketing

The following was originally posted on the Google Agency Blog.

CMOs and marketing executives use marketing mix models to understand how their marketing investments are driving sales and how to optimize their spend across multiple brands, channels, and regions. With rising investment in digital and mobile advertising, marketers want to be sure the models they use correctly value the impact of these channels.

Today we’re excited to announce a program to help marketing mix model providers better incorporate Google media data into their services. The Marketing Mix Model Partners program is designed to ensure advertisers can accurately measure the ROI of their digital investments and confidently understand the digital drivers of ROI to improve returns year-over-year.

The Marketing Mix Model Partners program offers:
  • Data Access: Partners get access to accurate, granular campaign data across all relevant Google video, display, and search media in a standardized format. We’re also making the data easier to access by providing data from multiple properties, like Search and YouTube, in one centralized location. 
  • Expertise: Partners also get dedicated training, resources, and specialists to better understand Google advertising products and practices and incorporate digital data into their model methodologies. 
  • Actionability: We provide Google account and technical teams to help advise on results and strategies designed to understand the drivers of ROI and improve returns over time. 
Our partners

We’re excited to be working with the initial participants in the program, Marketing Management Analytics, Neustar MarketShare, and Nielsen. Google customers can talk with their Google representatives about working with one of these partners on using Google data in their marketing mix model engagements.

Here’s what our partners have to say about the program:

“The ability to collect and analyze digital data at extremely granular levels enables both marketers and their advertising partners to more successfully measure, predict and action the most effective and profitable means of optimizing each digital channel to achieve their business objectives. We are excited that Google has taken such a proactive approach in working with MMA and analytic companies within the marketplace in providing such a high level of objectivity and transparency."
— Patrick Cummings, CEO of Marketing Management Analytics 
“Today’s measurement solutions need to be connected, always on and incorporate the myriad of channels, as well as critical econometric externalities in order for marketers to truly get an accurate view of marketing’s impact. We are thrilled to be a Google launch partner as this signals our commitment to helping brands understand how their marketing investments are driving business results. Through this partnership our advanced analytics models will incorporate more accurate, granular data, giving marketers a more complete understanding of the effectiveness of their marketing and how best to optimize their spend to improve future outcomes.”
— Julie Fleischer, Vice President, Product Marketing, Marketing Solutions, Neustar 
"As the marketing landscape rapidly evolves, it is critical to use the most robust data-streams in our Marketing Mix models to ensure the highest standard of insight quality. Working with Google, we will have better input and better consultative output so that our advertiser clients can best understand what is driving their performance today and make informed decisions for tomorrow.”
 ‒ Jason Tate, VP of Global Analytics at Nielsen 

As part of our commitment to providing the industry with trusted, transparent, and independent third-party metrics, we’ll be expanding the program over the coming months. If your company provides marketing mix model services and you’re interested in learning more about the partner program, please sign up here.

Google Container Engine fires up Kubernetes 1.6



Today we started to make Kubernetes 1.6 available to Google Container Engine customers. This release emphasizes significant scale improvements and additional scheduling and security options, making it easier than ever to run a Kubernetes cluster on Container Engine.

There were over 5,000 commits in Kubernetes 1.6 with dozens of major updates that are now available to Container Engine customers. Here are just a few highlights from this release:
  • Increase in number of supported nodes by 2.5 times: We’ve worked hard to support your workloads no matter how large they are. Container Engine now supports cluster sizes of up to 5,000 nodes, up from 2,000, while still maintaining our strict SLO for cluster performance. Some of the world's most popular apps (such as Pokémon GO) already run on Container Engine, and the increase in scale lets us handle even more of the largest workloads.
  • Fully Managed Nodes: Container Engine has always helped keep your Kubernetes master in a healthy state; we're now adding the option to fully manage your Kubernetes nodes as well. With Node Auto-Upgrade and Node Auto-Repair, you can optionally have Google automatically update your cluster to the latest version, and ensure your cluster’s nodes are always operating correctly. You can read more about both features here.
  • General Availability of Container-Optimized OS: Container Engine was designed to be a secure and reliable way to run Kubernetes. By using Container-Optimized OS, a locked-down operating system specifically designed for running containers on Google Cloud, we provide a default experience that's more secure, highly performant and reliable, helping ensure your containerized workloads run well. Read more details about Container-Optimized OS in this in-depth post.
Over the past year, Kubernetes adoption has accelerated and we could not be more proud to host so many mission-critical applications on the platform for our customers. Some recent highlights include:

Customers

  • eBay uses Google Cloud technologies including Container Engine, Cloud Machine Learning and AI for its ShopBot, a personal shopping bot on Facebook Messenger.
  • Smyte participated in the Google Cloud startup program and protects millions of actions a day on websites and mobile applications. Smyte recently moved from self-hosted Kubernetes to Container Engine.
  • Poki, a game publisher startup, moved to Google Cloud Platform (GCP) for greater flexibility, empowered by the openness of Kubernetes. This echoes a theme we covered at our Google Cloud Next conference: open source technology gives customers the freedom to come and go as they choose. Read more about their decision to switch here.
“While Kubernetes did nudge us in the direction of GCP, we’re more cloud agnostic than ever because Kubernetes can live anywhere.”  — Bas Moeys, Co-founder and Head of Technology at Poki

To help shape the future of Kubernetes — the core technology Container Engine is built on — join the open Kubernetes community and participate via the kubernetes-users mailing list or chat with us on the kubernetes-users Slack channel.

We’re the first cloud to offer users the newest Kubernetes release, and with our generous 12-month free trial that includes $300 in credits, it’s never been simpler to get started. Try the latest release today.
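As a rough starting point, the sketch below first lists the Kubernetes versions currently offered in a zone and then creates a small cluster pinned to a 1.6 release; the zone, cluster name, node count and exact patch version are placeholders, so take the version string from the get-server-config output rather than from this example:

gcloud container get-server-config --zone us-central1-a

gcloud container clusters create k8s-16-demo \
    --zone us-central1-a \
    --cluster-version 1.6.2 \
    --num-nodes 3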



Toward better node management with Kubernetes and Google Container Engine



Using our Google Container Engine managed service is a great way to run a Kubernetes cluster with a minimum of management overhead. Now, we’re making it even easier to manage Kubernetes clusters running in Container Engine, with significant improvements to upgrading and maintaining your nodes.

Automated Node Management

In the past, while we made it easy to spin up a cluster, keeping nodes up to date and healthy was still the user’s responsibility. To ensure your cluster was in a healthy, current state, you needed to track Kubernetes releases, set up your own tooling and alerting to watch for nodes that drifted into an unhealthy state, and then develop a process for repairing those nodes. While we take care of keeping the master healthy, for the nodes that make up a cluster (particularly a large one), this could be a significant amount of work. Our goal is to provide an end-to-end automated management experience that minimizes how much you need to worry about common management tasks. To that end, we're proud to introduce two new features that help ease these management burdens.

Node Auto-Upgrades


Rather than having to manually execute node upgrades, you can choose to have the nodes automatically upgrade when the latest release has been tested and confirmed to be stable by Google engineers.

You can enable it in the UI during new Cluster and Node Pool creation by enabling the “Auto upgrades” option.
To enable it in the CLI, add the --enable-autoupgrade flag:

gcloud beta container clusters create CLUSTER --zone ZONE --enable-autoupgrade

gcloud beta container node-pools create NODEPOOL --cluster CLUSTER --zone ZONE --enable-autoupgrade

Once enabled, each node in the selected node pool will have its workloads gradually drained and will then be shut down; a new node will be created and joined to the cluster in its place. Each node is confirmed to be healthy before the upgrade moves on to the next one.

To learn more, see Node Auto-Upgrades on Container Engine.

Node Auto-Repairs

Like any production system, cluster resources must be monitored to detect issues (crashing Kubernetes binaries, workloads triggering kernel bugs, out-of-disk conditions, etc.) and repaired if they drift out of specification. A node that goes unhealthy reduces the scheduling capacity of your cluster, and as capacity shrinks, your workloads can stop getting scheduled.

Google already monitors and repairs your Kubernetes master in case of these issues. With our new Node Auto-Repair feature, we'll also monitor each node in the node pool.

You can enable Auto Repairs during new Cluster and Node Pool creation, either in the UI or via the CLI.

To enable it in the CLI:

gcloud beta container clusters create CLUSTER --zone ZONE --enable-autorepair

gcloud beta container node-pools create NODEPOOL --cluster CLUSTER --zone ZONE --enable-autorepair

Once enabled, Container Engine will monitor several signals, including the node health status as seen by the cluster master and the VM state from the managed instance group backing the node. Too many consecutive health-check failures (over a period of about 10 minutes) will trigger re-creation of the node VM.

To learn more, see Node Auto-Repair on Container Engine.
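Auto-repair acts on the node health status that you can also inspect yourself with kubectl; for instance, the commands below (NODE_NAME is a placeholder) show each node's Ready status and the detailed conditions, such as OutOfDisk or MemoryPressure, that the cluster master reports:

kubectl get nodes

kubectl describe node NODE_NAME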

Improving Node Upgrades


In order to achieve both these features, we had to do some significant work under the hood. Previously, Container Engine node upgrades did not consider a node’s health status and did not ensure that it was ready to be upgraded. Ideally a node should be drained prior to taking it offline, and health-checked once the VM has successfully booted up. Without observing these signals, Container Engine could begin upgrading the next node in the cluster before the previous node was ready, potentially impacting workloads in smaller clusters.

In the process of building Node Auto-Upgrades and Node Auto-Repair, we’ve made several architectural improvements. We redesigned our entire upgrade logic with an emphasis on making upgrades as non-disruptive as possible. We also added proper support for cordoning and draining nodes prior to taking them offline, respecting each pod's termination grace period (terminationGracePeriodSeconds). If these pods are backed by a controller (e.g., a ReplicaSet or Deployment), they're automatically rescheduled onto other nodes (capacity permitting). Finally, we added additional steps after each node upgrade to verify that the node is healthy and can be scheduled, and we retry upgrades if a node is unhealthy. These improvements have significantly reduced the disruptive nature of upgrades.
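For reference, the cordon-and-drain sequence that Container Engine now performs automatically corresponds roughly to the following manual kubectl steps (NODE_NAME is a placeholder; depending on what is running on the node, drain may need additional flags such as --force):

kubectl cordon NODE_NAME

kubectl drain NODE_NAME --ignore-daemonsets

kubectl uncordon NODE_NAME

Pods backed by a controller are rescheduled onto other nodes during the drain, and uncordon marks the node schedulable again once it is healthy.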


Cancelling, Continuing and Rolling Back Upgrades

Additionally, we wanted to make upgrades more than a binary operation. Frequently, particularly with large clusters, upgrades need to be halted, paused or cancelled altogether (and rolled back). We're pleased to announce that Container Engine now supports cancelling, rolling back and continuing upgrades.

If you cancel an upgrade, it impacts the process in the following way:

  • Nodes that have not been upgraded remain at their current version
  • Nodes that are in-flight proceed to completion
  • Nodes that have already been upgraded remain at the new version


An identical upgrade (roll-forward) issued after a cancellation or a failure will pick up the upgrade from where it left off. For example, if the initial upgrade completes three out of five nodes, the roll-forward will only upgrade the remaining two nodes; nodes that have been upgraded are not upgraded again.

Cancelled and failed node upgrades can also be rolled back to the previous state. Just as in a roll-forward, nodes that hadn’t been upgraded are not rolled back. For example, if the initial upgrade completed three out of five nodes, a rollback is performed on those three nodes, and the remaining two nodes are not affected. This makes the upgrade process significantly cleaner.

Note: A node upgrade still requires the VM to be recreated which destroys any locally stored data. Rolling back and rolling forward does not restore that local data.
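As a hedged sketch, rolling back a cancelled or failed node-pool upgrade from the CLI looks roughly like the command below, with NODEPOOL, CLUSTER and ZONE as placeholders; check gcloud container node-pools rollback --help for the exact surface available to your project:

gcloud container node-pools rollback NODEPOOL --cluster CLUSTER --zone ZONE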



Node Condition \ Action    Cancellation              Rolling forward    Rolling back
In Progress                Proceed to completion     N/A                N/A
Upgraded                   Untouched                 Untouched          Rolled back
Not Upgraded               Untouched                 Upgraded           Untouched


Try it

These improvements extend our commitment to making Container Engine the easiest way to use Kubernetes. With Container Engine you get a pure open source Kubernetes experience along with the powerful benefits of Google Cloud Platform (GCP): friendly per-minute billing, a global load balancer, IAM integration, and a service fully managed by Google reliability engineers who ensure your cluster is available and up to date.

With our new generous 12-month free trial that offers a $300 credit, it’s never been simpler to get started. Try Container Engine today.

Container-Optimized OS from Google is generally available


It's not news to anyone in IT that container technology has become one of the fastest growing areas of innovation. We're excited about this trend and are continuously enhancing Google Cloud Platform (GCP) to make it a great place to run containers.

There are many great OSes available today for hosting containers, and we’re happy that customers have so many choices. Many people have told us that they're also interested in using the same image that Google uses, even when they’re launching their own VMs, so they can benefit from all the optimizations that Google services receive.

Last spring, we released the beta version of Container-Optimized OS (formerly Container-VM Image), optimized for running containers on GCP. We use Container-Optimized OS on GCP to run some of our own production services, such as Google Cloud SQL and Google Container Engine.

Today, we’re announcing the general availability of Container-Optimized OS. This means that if you're a Compute Engine user, you can now run your Docker containers “out of the box” when you create a VM instance with Container-Optimized OS (see the end of this post for examples).

Container-Optimized OS represents the best practices we've learned over the past decade running containers at scale:
  • Controlled build/test/release cycles: The key benefit of Container-Optimized OS is that we control the build, test and release cycles, providing GCP customers (including Google’s own services) enhanced kernel features and managed updates. Releases are available over three different release channels (dev, beta, stable), each with different levels of early access and stability, enabling rapid iterations and fast release cycles (see the image-listing example after this list).
  • Container-ready: Container-Optimized OS comes pre-installed with the Docker container runtime and supports Kubernetes for large-scale deployment and management (also known as orchestration) of containers.
  • Secure by design: Container-Optimized OS was designed with security in mind. Its minimal read-only root file system reduces the attack surface, and includes file system integrity checks. We also include a locked-down firewall and audit logging.
  • Transactional updates: Container-Optimized OS uses an active/passive root partition scheme. This makes it possible to update the operating system image in its entirety as an atomic transaction, including the kernel, thereby significantly reducing update failure rate. Users can opt-in for automatic updates.
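As a quick way to see the release channels mentioned above, you can list the public Container-Optimized OS images in the cos-cloud project; this sketch assumes the cos-dev, cos-beta and cos-stable image families that the project exposes today:

gcloud compute images list \
    --project cos-cloud \
    --no-standard-images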
It’s easy to create a VM instance running Container-Optimized OS on Compute Engine. Either use the Google Cloud Console GUI or the gcloud command line tool as shown below:

gcloud compute instances create my-cos-instance \
    --image-family cos-stable \
    --image-project cos-cloud

Once the instance is created, you can run your container right away. For example, the following command runs an Nginx container in the instance just created:

gcloud compute ssh my-cos-instance -- "sudo docker run -p 80:80 nginx"

You can also log into your instance with the command:

gcloud compute ssh my-cos-instance --project my-project --zone us-east1-d

Here's another simple example that uses Container Engine (which uses Container-Optimized OS as its OS) to run your containers. This example comes from the Google Container Engine Quickstart page.

gcloud container clusters create example-cluster
kubectl run hello-node --image=gcr.io/google-samples/node-hello:1.0 \
   --port=8080
kubectl expose deployment hello-node --type="LoadBalancer"
kubectl get service hello-node
curl 104.196.176.115:8080

We invite you to set up your own Container-Optimized OS instance and run your containers on it. Documentation for Container-Optimized OS is available here, and you can find the source code in the Chromium OS repository. We'd love to hear about your experience with Container-Optimized OS; you can reach us on Stack Overflow with questions tagged google-container-os.