Google+ will no longer support Internet Explorer 10 in October 2018

Beginning on October 23, 2018, Internet Explorer 10 will no longer be a supported browser for use with Google+. Before this time, we recommend referring to the Help Center to ensure you’re using a supported browser for uninterrupted access to Google+.

For more information on supported browsers for all G Suite apps, refer to the Help Center.

Deprecation Details
Impact:
All end users using Internet Explorer 10

Action:
Change management suggested/FYI

More Information
Help Center: Supported browsers for G Suite


12 must-see G Suite sessions at Google Cloud Next ‘18

Next week, IT and business leaders from across the globe will gather alongside Google experts in San Francisco to see what’s new in cloud at our annual conference, Google Cloud Next.

Last year, we introduced new solutions to best suit the needs of large organizations, including new ways for companies to work in real time in Hangouts Meet and Chat, and more features in Google Drive. This year, we’ll hear from Google leadership about how organizations are reimagining work with G Suite. More and more, we see companies viewing G Suite as an investment in their people because employees are able to collaborate more quickly and effectively, spend more time working creatively and drive even greater impact.

In addition to keynotes, we’re also hosting a Spotlight Session on Wednesday, July 25th where we’ll showcase new capabilities in more depth, and nearly 50 deep-dive sessions specific to G Suite where AI experts, developers, customers, technology partners and others will cover a range of topics. If you’re planning to attend and need help narrowing down your list of sessions, here are some that I recommend.

If you want to hear best practices for deploying G Suite (straight from customers):

If you need to understand how G Suite can help secure business data:

If you want to build on G Suite to optimize work processes, or see how G Suite tools work easily with other enterprise apps:

If you want tips on how to be more productive in G Suite or how AI can help:

You can check out the full list of session content or register for the event on the Next ‘18 website. See you there.

Kubernetes wins OSCON Most Impact Award



Today at the Open Source Awards at OSCON 2018, Kubernetes won the inaugural Most Impact Award, which recognizes a project that has had a ‘significant impact on how software is being written and applications are built’ in the past year. Thank you O’Reilly OSCON for the recognition, and more importantly, thank you to the vast Kubernetes community that has driven the project to where it is today.

When we released Kubernetes just four years ago, we never quite imagined how successful the project would be. We designed Kubernetes from a decade of experience running production workloads at Google, but we didn’t know whether the outside world would adopt it. However, we believed that if we remained open to new ideas and new voices, the community would provide feedback and contributions to move the project forward to meet the needs of users everywhere.

This openness led to Kubernetes’ rapid adoption—and it’s also one of the core pillars of Google Cloud: our belief in an open cloud, so that you can pick up and move your app wherever you want. Whether it’s TensorFlow, an open source library for machine learning, Asylo, a framework for confidential computing, or Istio, an open platform to connect microservices, openness remains a core value here at Google Cloud.

To everyone who has helped make Kubernetes the success it is today, many thanks again.

If you haven’t tried Kubernetes, it’s easy to get started with Google Kubernetes Engine. If you’re interested in learning more about Kubernetes and the ecosystem it spawned, subscribe to the Kubernetes Podcast from Google to hear weekly insights from leaders in the community.

Managing and securing cloud workers with new updates to Chrome Enterprise

The new era of the cloud worker is here, bringing with it the inevitable shift to cloud-based technologies that facilitate the flexible and collaborative ways we now work.

For IT teams, cloud workers mean a fundamental rethink of security and management of devices, applications, and access. At Next ‘18, we’ll be discussing this cultural shift and showcasing Chrome Enterprise products and capabilities that can help.

Here are a few of the new features we’ll be highlighting at Next.

Adding additional password protection for corporate accounts

When employees reuse their corporate passwords, they increase their organization’s risk. Almost 80 percent of organizations face third-party exploits through stolen account credentials on a monthly basis, which increases the risk of data loss. Whether a third-party site or password database is compromised, or a user is tricked by phishing into entering their business password on a malicious site, IT teams face the risk of corporate passwords getting into the wrong hands.

Chrome Browser is adding a new policy that enterprises can enable to better protect users’ corporate accounts. Based on a popular extension, the Password Alert Policy allows enterprises to set rules to prevent corporate password use on sites outside of the company’s control. Users will be notified when they use their corporate password on an unallowed site. IT can also configure the policy to warn only when users type their passwords into predicted phishing sites. The policy can be set for both Google and non-Google accounts.
Chrome Enterprise Password Policy

The new Password Alert Policy will be demoed at Next ‘18 and will be available to enterprises in September 2018.

Simplifying browser management in the cloud

Traditionally, IT has relied on on-premises tools to manage their browser deployments. Chrome Browser has made that easier with its support for Active Directory and the growing number of Group Policies available for admins. But as users work from different devices, and spend more time using web and SaaS apps, IT can greatly benefit from managing their browser instances right in the cloud.

At Next ‘18, IT teams will get a preview of a cloud-based Chrome Browser management feature to support their cloud workers through the Google Admin console. With this new feature, it’ll be simple to enroll separate instances of Chrome Browser on company devices, and manage them from a single interface across different delivery platforms. From a single view, IT will be able to manage Chrome Browser running on Windows, Mac, Chrome OS and Linux.

Not only will IT be able to set and apply policies from the cloud, but they will also get better visibility into their Chrome Browser deployments. For example, IT admins will be able to see inventory information and drill down into reports, helping them to both better understand how workers are using their browsers and to troubleshoot issues.


Chrome Enterprise Browser Management

Through Chrome Browser management in the Google Admin console, IT teams will be able to assign different admins to manage the browser—even if they aren’t experts in Active Directory or other management tools. This delegation will give IT more flexibility.

Stop by the Cloud Worker installation at Next ‘18 for a preview. You can also see live demos during the main Chrome Browser session. If you want to be notified when you can start managing your browsers from the cloud, visit this page to sign up for updates.

Expanding Google Play for Chrome OS

We introduced Google Play support to Chromebooks back in 2016, bringing the familiarity, breadth, and security of Play to Chrome OS.

Today, we’re announcing that managed Google Play is out of beta for Chrome Enterprise customers. More than 50 Chromebook models now support Android apps, and popular enterprise developers like Cisco, Adobe, Atlassian, VMware, and Citrix have all optimized applications for Chromebooks.

With managed Google Play, admins can curate applications by user groups as well as customize a broad range of policies and functions like application blacklisting and remote uninstall. You can learn more about managed Google Play and deploying Android applications to your Chromebook fleet here.

Helping businesses save time and money with Grab and Go

Earlier this week we announced early access for our Grab and Go program. With Grab and Go, businesses can deploy self-service stations with Chrome devices where employees can quickly borrow and return devices, increasing productivity and decreasing downtime. We’ve seen great success deploying Grab and Go inside Google, and wanted to extend its benefits to others. Learn more by reading our blog, or registering your interest.

Learn more at Next ‘18

If you’re joining us at Next ‘18, please stop by the Mobility & Devices showcase to learn more about cloud workers, get the latest on new features for Chrome OS and Chrome Browser, and preview demos. Don’t forget to pick up a Chromebook at our Grab and Go Lounge, too. See you there.

Making healthcare better for everyone—including providers

In healthcare circles, there’s been a lot of talk over the years about the Institute for Healthcare Improvement’s Triple Aim, a framework with three broad goals: improving the patient experience of care; improving the health of populations; and reducing the per capita cost of health care.


These are extremely worthy goals, and moving to the cloud is one of the best ways to achieve them. For example, Google Cloud’s work with the Colorado Center for Personalized Medicine (CCPM) and Health Data Compass helps clinicians and researchers to quickly identify patterns in patient data, helping to lower costs and improve outcomes.


Technology and policy advances have enabled organizations to make progress toward the Triple Aim, but the new era of digitized medicine has also come with costs: increasing amounts of data to sift through and make sense of; depersonalized office visits as providers turn their attention away from patients and toward their screens; and for providers, countless hours spent meeting the administrative burden that digital medicine requires. All this has led to a spike in burnout among the providers themselves. According to the Annals of Internal Medicine, for every hour that a physician spends with a patient, they must spend two hours on related administration. That’s led some observers to suggest a Quadruple Aim: improving the work experience of clinicians and staff.


Here at Google Cloud, we firmly believe in the power of data to advance healthcare, but we also know how easy it is to be overwhelmed by it. The Google Cloud Healthcare and Life Sciences team relies on the expertise of both internal and external clinicians and other care providers to help balance the advances in digital health with the impact on those who provide care. As such, we’re pleased to announce the appointment of Dr. Toby Cosgrove as Executive Advisor to the Google Cloud Healthcare and Life Sciences team.


Prior to this appointment, Dr. Cosgrove was CEO at Cleveland Clinic, and is a widely respected thought leader in the healthcare space. Over the course of his career, he has seen firsthand how digitization has improved—and hampered—healthcare.


“Among practitioners, everyone talks about ‘pajama time’”—spending a couple of hours every night to complete their administrative duties, Dr. Cosgrove says. And while patients benefit from streamlined sharing of medical records and improved diagnoses that have resulted from the digitization of healthcare data, they miss the warmth and connection they used to have with their providers.


Technology may have been the cause of some of these challenges, but we believe that it can also be the cure. Machine learning and AI are particularly promising for improving the work experience of providers by surfacing new and timely insights. Meanwhile, streamlining and automating workflows can reduce the time it takes to accomplish simple tasks like refilling a prescription, and can even help improve provider efficiency by scanning large, clinically complex data sets or images and flagging areas of concern—freeing up time to interact with patients.


We’re thrilled to have Dr. Cosgrove on board to help us tackle the Quadruple Aim, drawing on his several decades of experience at the forefront of American medicine. If you’re coming to Google Cloud Next ‘18 next week in San Francisco, be sure to attend “Healthcare and Life Sciences in the Cloud | AI in Healthcare and Biomedical Research,” where Dr. Cosgrove will join me and Andrea Norris, NIH CIO, on stage to discuss how technology can help accelerate positive change in the practice of medicine and biomedical research.

10 must-see G Suite developer sessions at Google Cloud Next ‘18



Google Cloud Next '18 is less than a week away, and this year there are over 500 sessions covering all aspects of cloud computing—IaaS, PaaS, and SaaS. This is your chance to hear from experts in artificial intelligence, as well as learn first-hand how to build custom solutions in G Suite alongside developers from Independent Software Vendors (ISVs), systems integrators (SIs) and industry enterprises.

G Suite’s intelligent productivity apps are secure, smart and simple to use, so why not integrate your apps with them? If you’re planning to attend the event and are wondering which sessions you should check out to enhance your skill set, here are some sessions to consider:

  • “Power Your Apps with Gmail, Google Drive, Calendar, Sheets, Slides, and More!” on Tuesday, July 24th. Join me as I lead this session, which provides a high-level technical overview of the various ways you can build with G Suite. This is a great place to start before attending deeper technical sessions. 
  • “Power your apps with Gmail, Google Drive, Calendar, Sheets, Slides and more” on Monday, July 23rd and Friday, July 27th. If you're already up to speed and want to leave NEXT with actual, working code you can use at school or on the job, join us for one of our bootcamps! Both are identical and bookend the conference—one on Monday and another on Friday. While named the same as the technical overview talk above, these dive a bit deeper, show more API usage examples and feature hands-on codelabs. Register today to ensure you get a seat.
  • “Automating G Suite: Apps Script & Sheets Macro Recorder” or “Enhancing the Google Apps Script Developer Experience” on Tuesday, July 24th. Interested in Google Apps Script, our customized serverless JavaScript runtime used to automate, integrate, and extend G Suite apps and data? The first session introduces developers and ITDMs to new features as well as real business use cases, while the other dives into recent features that make Apps Script more friendly for the professional developer. 
  • “G Suite + GCP: Building Serverless Applications with All of Google Cloud” on Wednesday, July 25th. This session is your chance to attend one of the few hybrid talks that look at how you can build applications on both the GCP and G Suite platforms. Learn about GCP and G Suite serverless products—a topic that’s become more and more popular over the past year—and see how it works firsthand with demos. I’m also leading this session and eager to show how you can leverage both platforms in the same application. 
  • “Build apps your business needs, with App Maker” or “How to Build Enterprise Workflows with App Maker” on Tuesday, July 24th and Thursday, July 26th, respectively. Google App Maker is a new low-code development environment that makes it easy to build custom apps for work. It’s great for business analysts, technical managers or data scientists who may not have software engineering resources. With a drag & drop UI, built-in templates, and point-and-click data modeling, App Maker lets you go from idea to app in minutes! Learn all about it in our pair of App Maker talks featuring our Developer Advocate, Chris Schalk. 
  • “The Google Docs, Sheets & Slides Ecosystem: Stronger than ever, and growing” or “Building on the Docs Editors: APIs and Apps Script” on Wednesday, July 25th and Thursday, July 26th, respectively. Check out this pair of talks to learn more about how to write apps that integrate with Google Docs, Sheets, Slides and Forms. The first describes the G Suite productivity tools' growing interoperability in the enterprise, while the second focuses on the different options available to developers for integrating with the G Suite “editor” applications. 
  • “Get Productive with Gmail Add-ons” on Tuesday, July 24th. We launched Gmail Add-ons less than a year ago (check out this video to learn more) to help developers integrate their apps alongside Gmail. Come to this session to learn the latest from the Gmail Add-ons and API team.
I look forward to meeting you in person at Next '18. In the meantime, you can check out the entire session schedule to find out everything NEXT has to offer or this video where I talk about how I think technology will change the world. See you soon!

VMware and Google Cloud: building the hybrid cloud together with vRealize Orchestrator



Many of our customers with hybrid cloud environments rely on VMware software on-premises. They want to simplify provisioning and enable end-user self service. At the same time, they also want to make sure they’re complying with IT policies and following IT best practices. As a result, many use VMware vRealize Automation, a platform for automated self-service provisioning and lifecycle management of IT infrastructure, and are looking for ways to leverage it in the cloud.

Today, we’re announcing the preview of our plug-in for VMware vRealize Orchestrator and support for Google Cloud Platform (GCP) resources in vRealize Automation. With these resources, you can now deploy and manage GCP resources from within your vRealize Automation environment.

The GCP plug-in for VMware vRealize Orchestrator provides a consistent management and governance experience across on-premises and GCP-based IT environments. For example, you can use Google-provided blueprints or build your own blueprints for Google Compute Engine resources and publish to the vRealize service catalog. This means you can select and launch resources in a predictable manner that is similar to how you launch VMs in your on-premises VMware environment, using a tool you’re already familiar with.

This preview release allows you to:
  • Create vRealize Automation “blueprints” for Compute Engine VM Instances
  • Request and self-provision resources in GCP using vRA’s catalog feature
  • Gain visibility and reclaim resources in GCP to reduce operational costs
  • Enforce access and resource quota policies for resources in GCP
  • Initiate Day 2 operations (start, stop, delete, etc.) on Compute Engine VM Instances, Instance Groups and Disks
The GCP plug-in for vRealize makes it easy for you to unlock new hybrid scenarios. For example:

  1. Reach new regions to address global business needs. (Hello Finland, Mumbai and Singapore.)
  2. Define large-scale applications using vRA and deploy to Compute Engine to leverage GCP’s worldwide load balancing and automatic scaling.
  3. Save money by deploying VMs as Compute Engine Preemptible VM Instances and using Custom Machine Types to tailor the VM configuration to application needs.
  4. Accelerate the time it takes to train a machine learning model by using Compute Engine with NVIDIA® Tesla® P100 GPUs.
  5. Replicate your on-premises applications to the cloud and scale up or down as your business dictates.
While this preview offers support for Compute Engine Virtual Machines in vRealize Automation, we’re working together with VMware to add support for additional GCP products such as Cloud TPUs—we’ll share more on that in the coming months. You can also find more information about this announcement by reading VMware’s blog.

In the meantime, to join the preview program, please submit a request using the preview intake form.

SRE fundamentals: SLIs, SLAs and SLOs



Next week at Google Cloud Next ‘18, you’ll be hearing about new ways to think about and ensure the availability of your applications. A big part of that is establishing and monitoring service-level metrics—something that our Site Reliability Engineering (SRE) team does day in and day out here at Google. Our SRE principles have as their end goal to improve services and in turn the user experience, and next week we’ll be discussing some new ways you can incorporate SRE principles into your operations.

In fact, a recent Forrester report on infrastructure transformation offers details on how you can apply these SRE principles at your company—more easily than you might think. They found that enterprises can apply most SRE principles either directly or with minor modification.

To learn more about applying SRE in your business, we invite you to join Ben Treynor, head of Google SRE, who will be sharing some exciting announcements and walking through real-life SRE scenarios at his Next ‘18 Spotlight session. Register now as seats are limited.

The concept of SRE starts with the idea that metrics should be closely tied to business objectives. We use several essential measurements—SLO, SLA and SLI—in SRE planning and practice.

Defining the terms of site reliability engineering

These measurements aren’t just useful abstractions. Without them, you cannot know if your system is reliable, available or even useful. If they don’t tie explicitly back to your business objectives, then you don’t have data on whether the choices you make are helping or hurting your business.

As a refresher, here’s a look at the key measurements of SRE, as discussed by AJ Ross, Adrian Hilton and Dave Rensin of our Customer Reliability Engineering team, in the January 2017 blog post, SLOs, SLIs, SLAs, oh my - CRE life lessons.


1. Service-Level Objective (SLO)

SRE begins with the idea that a prerequisite to success is availability. A system that is unavailable cannot perform its function and will fail by default. Availability, in SRE terms, defines whether a system is able to fulfill its intended function at a point in time. In addition to being used as a reporting tool, the historical availability measurement can also describe the probability that your system will perform as expected in the future.

When we set out to define the terms of SRE, we wanted to set a precise numerical target for system availability. We term this target the Service-Level Objective (SLO) of our system. Any discussion we have in the future about whether the system is running sufficiently reliably and what design or architectural changes we should make to it must be framed in terms of our system continuing to meet this SLO.

Keep in mind that the more reliable the service, the more it costs to operate. Define the lowest level of reliability that you can get away with for each service, and state that as your SLO. Every service should have an SLO—without it, your team and your stakeholders cannot make principled judgments about whether your service needs to be made more reliable (increasing cost and slowing development) or less reliable (allowing greater velocity of development). Excessive availability can become a problem because it then becomes the expectation. Don’t make your system overly reliable if you don’t intend to commit to it being that reliable.
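To make that cost trade-off concrete, here is a minimal Python sketch (with illustrative numbers, not an official SRE tool) of the error budget implied by an SLO, expressed as allowed downtime per 30-day month:

```python
# Error budget: the fraction of time (or requests) a service may fail
# while still meeting its SLO. Numbers below are illustrative.

def downtime_budget_minutes(slo: float, period_days: int = 30) -> float:
    """Minutes of total unavailability allowed per period for a given SLO."""
    error_budget = 1.0 - slo          # e.g. a 99.9% SLO leaves a 0.1% budget
    return error_budget * period_days * 24 * 60

for slo in (0.999, 0.9995, 0.9999):
    print(f"SLO {slo:.2%}: {downtime_budget_minutes(slo):.1f} minutes/month")
```

Tightening the SLO from 99.9% to 99.99% shrinks the monthly budget from roughly 43 minutes to roughly 4, which is the cost side of the reliability trade-off described above.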

Within Google, we implement periodic downtime in some services to prevent a service from being overly available. You might also try experimenting with planned-downtime exercises with front-end servers occasionally, as we did with one of our internal systems. We found that these exercises can uncover services that are using those servers inappropriately. With that information, you can then move workloads to somewhere more suitable and keep servers at the right availability level.

2. Service-Level Agreement (SLA)

At Google, we distinguish between an SLO and a Service-Level Agreement (SLA). An SLA normally involves a promise to someone using your service that its availability should meet a certain level over a certain period, and if it fails to do so then some kind of penalty will be paid. This might be a partial refund of the service subscription fee paid by customers for that period, or additional subscription time added for free. The concept is that going out of SLA is going to hurt the service team, so they will push hard to stay within SLA. If you’re charging your customers money, you will probably need an SLA.

Because of this, and because of the principle that availability shouldn’t be much better than the SLO, the SLA is normally a looser objective than the SLO. This might be expressed in availability numbers: for instance, an availability SLA of 99.9% over one month, with an internal availability SLO of 99.95%. Alternatively, the SLA might only specify a subset of the metrics that make up the SLO.

If you have an SLA that is different from your SLO, as it almost always is, it’s important for your monitoring to measure SLA compliance explicitly. You want to be able to view your system’s availability over the SLA calendar period, and easily see if it appears to be in danger of going out of SLA. You will also need a precise measurement of compliance, usually from logs analysis. Since we have an extra set of obligations (in the form of our SLA) to paying customers, we need to measure queries received from them separately from other queries. That’s another benefit of establishing an SLA—it’s an unambiguous way to prioritize traffic.

When you define your SLA, you need to be extra-careful about which queries you count as legitimate. For example, if a customer goes over quota because they released a buggy version of their mobile client, you may consider excluding all “out of quota” response codes from your SLA accounting.
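One way to sketch that accounting rule in Python (the response codes, the choice of 429 for out-of-quota responses, and the traffic counts are all hypothetical, not a real SLA definition):

```python
# Hypothetical SLA accounting: count only legitimate queries, excluding
# "out of quota" responses (HTTP 429 here) caused by a buggy client.

def sla_availability(responses: list[int]) -> float:
    """Fraction of legitimate (non-quota-exceeded) queries that succeeded."""
    legitimate = [code for code in responses if code != 429]  # drop out-of-quota
    if not legitimate:
        return 1.0  # no legitimate traffic: vacuously within SLA
    successes = sum(1 for code in legitimate if code < 500)   # 5xx = server fault
    return successes / len(legitimate)

# A buggy mobile client hammers the API past its quota; those 429s do not
# count against the SLA, but the server-side 500s do.
log = [200] * 995 + [429] * 50 + [500] * 5
print(f"SLA availability: {sla_availability(log):.4%}")
```

The filtering step is the point: which queries you count as legitimate changes the availability number you report against the SLA.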

3. Service-Level Indicator (SLI)

We also have a direct measurement of SLO conformance: the frequency of successful probes of our system. This is a Service-Level Indicator (SLI). When we evaluate whether our system has been running within SLO for the past week, we look at the SLI to get the service availability percentage. If it goes below the specified SLO, we have a problem and may need to make the system more available in some way, such as running a second instance of the service in a different city and load-balancing between the two. If you want to know how reliable your service is, you must be able to measure the rates of successful and unsuccessful queries; these will form the basis of your SLIs.
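As a minimal sketch, the SLI calculation is just the ratio of successful probes to total probes over the measurement window, compared against the SLO (the probe counts here are made up):

```python
# SLI: measured availability over a window, compared against the SLO.
# Probe counts are invented for illustration.

def sli_availability(successes: int, total: int) -> float:
    """Fraction of probes that succeeded over the measurement window."""
    return successes / total

SLO = 0.9995                           # target availability
successes, total = 100_342, 100_400    # last week's probe results (made up)

sli = sli_availability(successes, total)
print(f"SLI = {sli:.4%} against an SLO of {SLO:.2%}")
if sli < SLO:
    print("Out of SLO: investigate, and consider adding redundancy.")
```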

Since the original post was published, we’ve made some updates to Stackdriver that let you incorporate SLIs even more easily into your Google Cloud Platform (GCP) workflows. You can now combine your in-house SLIs with the SLIs of the GCP services that you use, all in the same Stackdriver monitoring dashboard. At Next ‘18, the Spotlight session with Ben Treynor and Snapchat will illustrate how Snap uses its dashboard to get insight into what matters to its customers and map it directly to what information it gets from GCP, for an in-depth view of customer experience.
Automatic dashboards in Stackdriver for GCP services let you slice any of the 50th, 95th and 99th percentile charts several ways: per service, per method and per response code. You can also view latency charts on a log scale to quickly find outliers.

If you’re building a system from scratch, make sure that SLIs and SLOs are part of your system requirements. If you already have a production system but don’t have them clearly defined, then that’s your highest priority work. If you’re coming to Next ‘18, we look forward to seeing you there.

Move Mirror: You move and 80,000 images move with you

There are a lot of impressive uses for machine learning these days, like detecting objects in images, helping to detect diseases, and even enabling cars to drive themselves. But AI can also be used in more playful ways.

That’s why we made Move Mirror—an AI Experiment that lets you explore pictures in a fun new way, just by moving around. Move in front of your webcam and Move Mirror will match your real-time movements to hundreds of images of people doing similar poses around the world. It feels like a magical mirror that reflects your moves with images of all kinds of human activity—from sports and dance to martial arts, acting and beyond. You can even capture the experience as a GIF and share it with your friends.

Move Mirror

With Move Mirror, we’re showing how computer vision techniques like pose estimation can be available to anyone with a computer and a webcam. We also wanted to make machine learning more accessible to coders and makers by bringing pose estimation into the browser—hopefully inspiring them to experiment with this technology.

To build this experiment, we used PoseNet, a model that can detect human figures in images and videos by identifying where key body joints are. Move Mirror takes the input from your camera feed and maps it to a database of more than 80,000 images to find the best match. It’s powered by TensorFlow.js—a library that runs machine learning models on-device, in your browser—which means the pose estimation happens directly in the browser, and your images are not being stored or sent to a server. For a deep dive into how we built this experiment, check out this Medium post.

We hope you’ll play around with Move Mirror and share your experience by making a GIF. Try it out now at g.co/movemirror.

Bringing GPU-accelerated analytics to GCP Marketplace with MapD




Editor’s note: Today, we hear from our partner MapD, whose data analytics platform uses GPUs to accelerate queries and visualizations. Read on to learn how MapD and Google Cloud are working together.

MapD and public cloud are a great fit. Combining cloud-based GPU infrastructure with MapD’s performance, interactivity and operational ease of use is a big win for our customers, allowing data scientists and analysts to visually explore billion-row datasets with fluidity and minimal hassle.

Our Community and Enterprise Edition images are available on AWS, MapD docker containers are available on NVIDIA GPU Cloud (NGC), and we also offer our own MapD Cloud. Today, we’re thrilled to announce the availability of MapD on the Google Cloud Platform (GCP) Marketplace, helping us bring interactivity at scale to the widest possible audience. With services like Cloud Dataflow, Cloud Bigtable and Cloud AI, GCP has emerged as a great platform for data-intensive workloads. Combining MapD with these services lets us define scalable, high-performance visual analytics workflows for a variety of use cases.

On GCP, you’ll find both our Community and Enterprise editions for K80, Pascal and Volta GPU instances in the GCP Marketplace. Google’s flexible approach to attaching GPUs to standard CPU-based instance types means you can dial the GPU capacity for your instances up or down depending on the size of your datasets and your compute needs.

We’re confident that MapD’s availability on the GCP Marketplace will further accelerate the adoption of GPUs as a key part of enterprise analytics workloads, in addition to their obvious applicability to AI, graphics and general-purpose computing. Click here to try out MapD on GCP.