Introducing the security center for G Suite—security analytics and best practices from Google

We want to make it easy for you to manage your organization’s data security. A big part of this is making sure you and your admins can access a bird’s eye view of your security—and, more importantly, that you can take action based on timely insights.

Today, we’re introducing the security center for G Suite, a tool that brings together security analytics, actionable insights and best practice recommendations from Google to empower you to protect your organization, data and users.

With the security center, key executives and admins can do things like:

1. See a snapshot of important security metrics in one place. 

Get insights into suspicious device activity, visibility into how spam and malware are targeting users within your organization and metrics to demonstrate security effectiveness—all in a unified dashboard.

2. Stay ahead of potential threats. 

Admins can now examine security analytics to flag threats. For example, your team can see which users are being targeted by phishing so you can head off potential attacks, and get a heads-up when Google Drive files trigger DLP rules so you can act before data is exfiltrated.

3. Reduce risk by adopting security health recommendations.

Security health analyzes your existing security posture and gives you customized advice to secure your users and data. These recommendations cover issues ranging from how your data is stored to how your files are shared, as well as mobility and communication settings.

Get started

More than 3.5 million organizations rely on G Suite to collaborate securely. If you’re a G Suite Enterprise customer, you’ll be able to access the security center within the Admin console automatically in the next few days. These instructions can help admins get started, and here are some security best practices to keep in mind.

If you’re new to G Suite, learn more about how you can collaborate, store and communicate securely.

Source: Google Cloud


Cloud AutoML: Making AI accessible to every business

When we both joined Google Cloud just over a year ago, we embarked on a mission to democratize AI. Our goal was to lower the barrier of entry and make AI available to the largest possible community of developers, researchers and businesses.

Our Google Cloud AI team has been making good progress towards this goal. In 2017, we introduced Google Cloud Machine Learning Engine to help developers with machine learning expertise easily build ML models that work on any type of data, of any size. We showed how modern machine learning services—APIs including Vision, Speech, NLP, Translation and Dialogflow—could be built upon pre-trained models to bring unmatched scale and speed to business applications. Kaggle, our community of data scientists and ML researchers, has grown to more than one million members. And today, more than 10,000 businesses are using Google Cloud AI services, including companies like Box, Rolls Royce Marine, Kewpie and Ocado.

But there’s much more we can do. Currently, only a handful of businesses in the world have access to the talent and budgets needed to take full advantage of advancements in ML and AI. Very few people can create advanced machine learning models, and if you’re one of the companies that has access to ML/AI engineers, you still have to manage the time-intensive and complicated process of building your own custom ML model. While Google has offered pre-trained machine learning models via APIs that perform specific tasks, there's still a long road ahead if we want to bring AI to everyone.

To close this gap, and to make AI accessible to every business, we’re introducing Cloud AutoML. Cloud AutoML helps businesses with limited ML expertise start building their own high-quality custom models by using advanced techniques like learning2learn and transfer learning from Google. We believe Cloud AutoML will make AI experts even more productive, advance new fields in AI and help less-skilled engineers build powerful AI systems they previously only dreamed of.

Our first Cloud AutoML release will be Cloud AutoML Vision, a service that makes it faster and easier to create custom ML models for image recognition. Its drag-and-drop interface lets you easily upload images, train and manage models, and then deploy those trained models directly on Google Cloud. Early experiments using Cloud AutoML Vision to classify popular public datasets like ImageNet and CIFAR have shown more accurate results, with fewer misclassifications, than generic ML APIs.

Here’s a little more on what Cloud AutoML Vision has to offer:

  • Increased accuracy: Cloud AutoML Vision is built on Google’s leading image recognition approaches, including transfer learning and neural architecture search technologies. This means you’ll get a more accurate model even if your business has limited machine learning expertise.

  • Faster turnaround time to production-ready models: With Cloud AutoML, you can create a simple model in minutes to pilot your AI-enabled application, or build out a full, production-ready model in as little as a day.

  • Easy to use: AutoML Vision provides a simple graphical user interface that lets you specify data, then turns that data into a high quality model customized for your specific needs.

"Urban Outfitters is constantly looking for new ways to enhance our customers’ shopping experience," says Alan Rosenwinkel, Data Scientist at URBN. "Creating and maintaining a comprehensive set of product attributes is critical to providing our customers relevant product recommendations, accurate search results and helpful product filters; however, manually creating product attributes is arduous and time-consuming. To address this, our team has been evaluating Cloud AutoML to automate the product attribution process by recognizing nuanced product characteristics like patterns and neckline styles. Cloud AutoML has great promise to help our customers with better discovery, recommendation and search experiences."

Mike White, CTO and SVP, for Disney Consumer Products and Interactive Media, says: “Cloud AutoML’s technology is helping us build vision models to annotate our products with Disney characters, product categories and colors. These annotations are being integrated into our search engine to enhance the impact on Guest experience through more relevant search results, expedited discovery and product recommendations on shopDisney.”

And Sophie Maxwell, Conservation Technology Lead at the Zoological Society of London, tells us: "ZSL is an international conservation charity devoted to the worldwide conservation of animals and their habitats. A key requirement to deliver on this mission is to track wildlife populations to learn more about their distribution and better understand the impact humans are having on these species. In order to achieve this, ZSL has deployed a series of camera traps in the wild that take pictures of passing animals when triggered by heat or motion. The millions of images captured by these devices are then manually analysed and annotated with the relevant species, such as elephants, lions and giraffes, etc., which is a labour-intensive and expensive process. ZSL’s dedicated Conservation Technology Unit has been collaborating closely with Google’s Cloud ML team to help shape the development of this exciting technology, which ZSL aims to use to automate the tagging of these images—cutting costs, enabling wider-scale deployments and gaining a deeper understanding of how to conserve the world’s wildlife effectively."

If you’re interested in trying out AutoML Vision, you can request access via this form.

AutoML Vision is the result of our close collaboration with Google Brain and other Google AI teams, and is the first of several Cloud AutoML products in development. While we’re still at the beginning of our journey to make AI more accessible, we’ve been deeply inspired by what our 10,000+ customers using Cloud AI products have been able to achieve. We hope the release of Cloud AutoML will help even more businesses discover what’s possible through AI.

Source: Google Cloud


Expanding our global infrastructure with new regions and subsea cables

At Google, we've spent $30 billion improving our infrastructure over the past three years, and we’re not done yet. From data centers to subsea cables, Google is committed to connecting the world and serving our Cloud customers, and today we’re excited to announce that we’re adding three new submarine cables and five new regions.

We’ll open our Netherlands and Montreal regions in the first quarter of 2018, followed by Los Angeles, Finland, and Hong Kong – with more to come. Then, in 2019 we’ll commission three subsea cables: Curie, a private cable connecting Chile to Los Angeles; Havfrue, a consortium cable connecting the U.S. to Denmark and Ireland; and the Hong Kong-Guam Cable system (HK-G), a consortium cable interconnecting major subsea communication hubs in Asia.  

Together, these investments further improve our network—the world’s largest—which by some accounts delivers 25% of worldwide internet traffic. Companies like PayPal leverage our network and infrastructure to run their businesses effectively.

“At PayPal, we process billions of transactions across the globe, and need to do so securely, instantaneously and economically. As a result, security, networking and infrastructure were key considerations for us when choosing a cloud provider,” said Sri Shivananda, PayPal’s Senior Vice President and Chief Technology Officer. “With Google Cloud, we have access to the world’s largest network, which helps us reach our infrastructure goals and best serve our millions of users.”

Figure 1. Diagram shows existing GCP regions and upcoming GCP regions
Figure 2. Diagram shows three new subsea cable investments, expanding capacity to Chile, Asia Pacific and across the Atlantic

Curie cable

Our investment in the Curie cable (named after renowned scientist Marie Curie) is part of our ongoing commitment to improve global infrastructure. In 2008, we were the first tech company to invest in a subsea cable as a part of a consortium. With Curie, we become the first major non-telecom company to build a private intercontinental cable.

By deploying our own private subsea cable, we help improve global connectivity while providing value to our customers. Owning the cable ourselves has some distinct benefits. Since we control the design and construction process, we can fully define the cable’s technical specifications, streamline deployment and deliver service to users and customers faster. Also, once the cable is deployed, we can make routing decisions that optimize for latency and availability.

Curie will be the first subsea cable to land in Chile in almost 20 years. Once deployed, Curie will be Chile’s largest single data pipe. It will serve Google users and customers across Latin America.

Havfrue cable

To increase capacity and resiliency in our North Atlantic systems, we’re working with Facebook, Aqua Comms and Bulk Infrastructure to build a direct submarine cable system connecting the U.S. to Denmark and Ireland. This cable, called Havfrue (Danish for “mermaid”), will be built by TE SubCom and is expected to come online by the end of 2019. The marine route survey, during which the supplier determines the specific route the cable will take, is already underway.

HK-G cable

In the Pacific, we’re working with RTI-C and NEC on the Hong Kong-Guam cable system. Together with Indigo and other existing subsea systems, this cable creates multiple scalable, diverse paths to Australia, increasing our resilience in the Pacific. As a result, customers will experience improved capacity and latency from Australia to major hubs in Asia. It will also increase our network capacity at our new Hong Kong region.

Figure 3. A complete list of Google’s subsea cable investments. New cables in this announcement are highlighted in yellow. Google subsea cables provide reliability, speed and security not available from any other cloud.

Google has direct investment in 11 cables, including those planned or under construction. The three cables highlighted in yellow are being announced in this blog post. (In addition to these 11 cables where Google has direct ownership, we also lease capacity on numerous additional submarine cables.)

What does this mean for our customers?

These new investments expand our existing cloud network. The Google network has over 100 points of presence (map) and over 7,500 edge caching nodes (map). This investment means faster and more reliable connectivity for all our users.

Simply put, it wouldn’t be possible to deliver products like Machine Learning Engine, Spanner, BigQuery and other Google Cloud Platform and G Suite services at the quality of service users expect without the Google network. Our cable systems provide the speed, capacity and reliability Google is known for worldwide, and at Google Cloud, our customers are able to make use of the same network infrastructure that powers Google’s own services.

While we haven’t hastened the speed of light, we have built a superior cloud network as a result of the well-provisioned direct paths between our cloud and end-users, as shown in the figure below.

Figure 4. The Google network offers better reliability, speed and security performance as compared with the nondeterministic performance of the public internet, or other cloud networks. The Google network consists of fiber optic links and subsea cables between 100+ points of presence, 7500+ edge node locations, 90+ Cloud CDN locations, 47 dedicated interconnect locations and 15 GCP regions.

We’re excited about these improvements. We're increasing our commitment to ensure users have the best connections in this increasingly connected world.

Source: Google Cloud


Protecting our Google Cloud customers from new vulnerabilities without impacting performance

If you’ve been keeping up on the latest tech news, you’ve undoubtedly heard about the CPU security flaw that Google’s Project Zero disclosed last Wednesday. On Friday, we answered some of your questions and detailed how we are protecting Cloud customers. Today, we’d like to go into even more detail on how we’ve protected Google Cloud products against these speculative execution vulnerabilities, and what we did to make sure our Google Cloud customers saw minimal performance impact from these mitigations.

Modern CPUs and operating systems protect programs and users by putting a “wall” around them so that one application, or user, can’t read what’s stored in another application’s memory. These boundaries are enforced by the CPU.

But as we disclosed last week, Project Zero discovered techniques that can circumvent these protections in some cases, allowing one application to read the private memory of another, potentially exposing sensitive information.
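
As a minimal illustration of that “wall,” the C sketch below (our own example on a POSIX system, not Google code) shows that a child process's write to its copy of a variable never reaches the parent's memory:

```c
#include <assert.h>
#include <sys/wait.h>
#include <unistd.h>

/* Fork a child that overwrites its copy of a variable, then return the
 * value the parent still sees. Process isolation means the child's
 * write lands only in the child's own memory, never the parent's. */
int value_seen_by_parent(void) {
    int secret = 42;               /* private to this process */
    pid_t pid = fork();
    if (pid == 0) {
        secret = 1337;             /* touches only the child's copy */
        _exit(0);
    }
    waitpid(pid, NULL, 0);
    return secret;                 /* still 42 in the parent */
}
```

Speculative execution attacks are notable precisely because they leak information across this kind of boundary without ever performing an architecturally visible illegal access.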

The vulnerabilities come in three variants, each of which must be protected against individually. Variant 1 and Variant 2 have also been referred to as “Spectre.” Variant 3 has been referred to as “Meltdown.” Project Zero described these in technical detail, the Google Security blog described how we’re protecting users across all Google products, and we explained how we’re protecting Google Cloud customers and provided guidance on security best practices for customers who use their own operating systems with Google Cloud services.

Surprisingly, these vulnerabilities have been present in most computers for nearly 20 years. Because the vulnerabilities exploit features that are foundational to most modern CPUs—and were previously believed to be secure—they weren’t just hard to find, they were even harder to fix. For months, hundreds of engineers across Google and other companies worked continuously to understand these new vulnerabilities and find mitigations for them.

In September, we began deploying solutions for both Variants 1 and 3 to the production infrastructure that underpins all Google products—from Cloud services to Gmail, Search and Drive—and deployed more refined solutions in October. Thanks to extensive performance tuning work, these protections caused no perceptible impact in our cloud and required no customer downtime, in part due to Google Cloud Platform’s Live Migration technology. No GCP customer or internal team has reported any performance degradation.

While those solutions addressed Variants 1 and 3, it was clear from the outset that Variant 2 was going to be much harder to mitigate. For several months, it appeared that disabling the vulnerable CPU features would be the only option for protecting all our workloads against Variant 2. While that was certain to work, it would also disable key performance-boosting CPU features, thus slowing down applications considerably.

Not only did we see considerable slowdowns for many applications, we also noticed inconsistent performance, since the speed of one application could be impacted by the behavior of other applications running on the same core. Rolling out these mitigations would have negatively impacted many customers.

With the performance characteristics uncertain, we started looking for a “moonshot”—a way to mitigate Variant 2 without hardware support. Finally, inspiration struck in the form of “Retpoline”—a novel software binary modification technique that prevents branch-target-injection, created by Paul Turner, a software engineer who is part of our Technical Infrastructure group. With Retpoline, we didn't need to disable speculative execution or other hardware features. Instead, this solution modifies programs to ensure that execution cannot be influenced by an attacker.

With Retpoline, we could protect our infrastructure at compile-time, with no source-code modifications. Furthermore, testing this feature, particularly when combined with optimizations such as software branch prediction hints, demonstrated that this protection came with almost no performance loss.
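
Retpoline operates on indirect branches like the function-pointer call in the hedged C sketch below (our own illustrative example, not Google's code). When a file like this is compiled with a retpoline-enabled compiler (for instance GCC's -mindirect-branch=thunk or Clang's -mretpoline), each indirect call site is rewritten into a call/ret thunk whose speculative path is trapped in a harmless loop, so an attacker cannot steer the speculated branch target:

```c
#include <assert.h>

/* An indirect call: the target is loaded from memory at run time, which
 * is exactly the kind of branch whose *speculative* target Variant 2
 * lets an attacker influence. */
typedef int (*op_fn)(int, int);

static int add(int a, int b) { return a + b; }
static int mul(int a, int b) { return a * b; }

int dispatch(int which, int a, int b) {
    op_fn table[] = { add, mul };
    /* Under retpoline, the compiler replaces this indirect call with a
     * thunk that pins speculation to a benign loop until the real
     * target is resolved. The program's behavior is unchanged. */
    return table[which](a, b);
}
```

Because only the emitted branch sequence changes, the same source code runs identically with or without the mitigation, which is why it could be applied at compile time with no source modifications.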

We immediately began deploying this solution across our infrastructure. In addition to sharing the technique with industry partners upon its creation, we open-sourced our compiler implementation in the interest of protecting all users.

By December, all Google Cloud Platform (GCP) services had protections in place for all known variants of the vulnerability. During the entire update process, nobody noticed: we received no customer support tickets related to the updates. This confirmed our internal assessment that in real-world use, the performance-optimized updates Google deployed do not have a material effect on workloads.

We believe that Retpoline-based protection is the best-performing solution for Variant 2 on current hardware. Retpoline fully protects against Variant 2 without impacting customer performance on all of our platforms. In sharing our research publicly, we hope that this can be universally deployed to improve the cloud experience industry-wide.

This set of vulnerabilities was perhaps the most challenging and hardest to fix in a decade, requiring changes to many layers of the software stack. It also required broad industry collaboration since the scope of the vulnerabilities was so widespread. Because of the extreme circumstances of extensive impact and the complexity involved in developing fixes, the response to this issue has been one of the few times that Project Zero made an exception to its 90-day disclosure policy.

While these vulnerabilities represent a new class of attack, they're just a few among the many different types of threats our infrastructure is designed to defend against every day. Our infrastructure includes mitigations by design and defense-in-depth, and we’re committed to ongoing research and contributions to the security community and to protecting our customers as new vulnerabilities are discovered.

Source: Google Cloud


Reflecting on 2017: a year in review for G Suite

Before we get into the swing of the new year—which is sure to bring new projects, new teammates and new challenges—let’s take a moment to reflect on highlights from 2017.

Here’s a look at what happened in G Suite last year.

1. Bringing you the power of Google’s artificial intelligence.

Technology continues to change the way we work. This year, we further integrated Google’s artificial intelligence into G Suite so that you can accomplish more in less time. Using machine learning, Gmail suggests email responses. Sheets builds charts, creates pivot tables and suggests formulas. And you can also ask questions in full sentences and get instant answers in Sheets and Cloud Search (in addition to Docs and Slides) thanks to natural language processing.

2. Helping businesses secure their data.

Protecting sensitive data and assets is a constant challenge that businesses face. Now, using contextual intelligence, Gmail can warn you if you’re responding to someone outside of your company domain. We also extended DLP to Google Drive to make it easier to secure sensitive data and control sharing. Google Vault for Drive helps surface information to support legal and compliance requirements. And we made it easier for you to manage which third-party apps can access your G Suite data.

Check out the G Suite website for more information on how you can transform your business to be security-first (or, try passing along these tips to help prevent phishing attempts).

3. Going all in on meetings.

We spend a lot of time on conference calls—for some, 30 percent of their day is spent in meetings—but meetings don’t often reflect how we actually like to work together. To help teams transform how they collaborate, we created a new Hangouts experience for the enterprise, designed cost-effective hardware built for the meeting room, reimagined the traditional whiteboard and introduced an intelligent communication app. Plus, Google Calendar got a makeover and you can use it on your iPad now.

4. Providing enterprise-grade solutions for collaboration and storage.

Large enterprises are often drowning in files—files that represent a company’s collective knowledge. Every strategic plan, brainstorm or financial plan is an opportunity to learn more about your business, which is why you need tools to find, organize, understand and act on that knowledge.

For years, we’ve been working to ensure that Google Drive meets enterprise needs, and last year Google was recognized by Gartner as a Leader in the July 2017 Gartner Magic Quadrant for Content Collaboration Platforms. We were also recognized by Forrester as a Leader in The Forrester Wave™: Enterprise File Sync and Share (EFSS) - Cloud Solutions, Q4 2017 report, which was published in December.

5. Building tools for marketing and sales organizations, plus even more integrations.

We built tools to help marketing and sales organizations create their best work and collaborate effectively, even with other tools that teams rely on. We launched Jamboard, announced a strategic partnership with Salesforce, opened up Gmail to your favorite business apps and integrated Hire with G Suite.

These are just some of the ways we’re helping businesses transform the way they work every day. We’re excited to see what 2018 has to offer.


Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner's research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

Source: Google Cloud


Answering your questions about “Meltdown” and “Spectre”

This week, security vulnerabilities dubbed “Spectre” and “Meltdown” made news headlines. On Wednesday, we explained what these vulnerabilities are and how we're protecting you against them.

Since then, there's been considerable discussion about what this means for Google Cloud and the industry at large. Today, we’d like to clear up some confusion and highlight several key considerations for our customers.

What are “Spectre” and “Meltdown”?


Last year, Google’s Project Zero team discovered serious security flaws caused by “speculative execution,” a technique used by most modern processors (CPUs) to optimize performance.

Independent researchers separately discovered and named these vulnerabilities “Spectre” and “Meltdown.” 

Project Zero described three variants of this new class of speculative execution attack. Variant 1 and Variant 2 have been referred to as “Spectre.” Variant 3 has been referred to as “Meltdown.” Most vendors are referring to them by their Common Vulnerabilities and Exposures (“CVE”) labels, an industry-standard way of identifying vulnerabilities.

There's no single fix for all three attack variants; each requires protection individually.

Here's an overview of each variant:

  • Variant 1 (CVE-2017-5753), “bounds check bypass.” This vulnerability affects specific sequences within compiled applications, which must be addressed on a per-binary basis. This variant is currently the basis for concern around browser attacks, JavaScript exploitation and vulnerabilities within individual binaries.

  • Variant 2 (CVE-2017-5715), “branch target injection.” This variant may either be fixed by a CPU microcode update from the CPU vendor, or by applying a software protection called “Retpoline” to binaries where concern about information leakage is present. This variant is currently the basis for concern around Cloud Virtualization and “Hypervisor Bypass” concerns that affect entire systems.

  • Variant 3 (CVE-2017-5754), “rogue data cache load.” This variant is the basis behind the discussion around “KPTI,” or “Kernel Page Table Isolation.” An attacker who already has the ability to run code on a system can use it to access memory that they do not have permission to access.
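
To make Variant 1 concrete, the classic “bounds check bypass” gadget has the shape of the hedged C sketch below (the array names and sizes are ours, following the published write-ups, not code from any real product). Architecturally the bounds check is always respected, but a mistrained branch predictor can speculatively run the body with an out-of-bounds untrusted_index, leaving a data-dependent cache footprint that a separate timing step can read back:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

uint8_t array1[16];          /* zero-initialized globals for the demo */
uint8_t array2[256 * 512];
size_t array1_size = 16;

/* The Variant 1 gadget shape: a bounds check followed by a dependent
 * memory access. The CPU may predict the branch as taken and execute
 * the access before the comparison resolves, so an out-of-bounds
 * array1[untrusted_index] can be read transiently, and the secret byte
 * is revealed by which cache line of array2 gets loaded. */
uint8_t victim_function(size_t untrusted_index) {
    if (untrusted_index < array1_size) {
        return array2[array1[untrusted_index] * 512];
    }
    return 0;
}
```

Note that this code is perfectly correct as written; the flaw is in how the hardware speculates around the `if`, which is why each such sequence must be found and mitigated per binary.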

For more information on these variants, please read this week’s Google Security post.

Am I protected from Spectre and Meltdown?  


Google’s engineering teams began working to protect our customers from these vulnerabilities upon our learning of them in June 2017. We applied solutions across the entire suite of Google products, and we collaborated with the industry at large to help protect users across the web.

G Suite and Google Cloud Platform (GCP) are updated to protect against all known attack vectors. Some customers may worry that they have not been protected since they were not asked to reboot their instance. Google Cloud is architected in a manner that enables us to update the environment while providing operational continuity for our customers. Via live migration we can patch our infrastructure without requiring customers to reboot their instances.

Customers who use their own operating systems with Google Cloud services should continue to follow security best practices and apply security updates to their images just as they would for any other operating system vulnerability. We're providing an up-to-date reference on the availability of vendor patches for common operating systems on our GCE Security Bulletin page.


I’ve heard that Spectre is nearly impossible to protect against. Is this true?


There has been significant concern in particular about “Spectre.” The use of the name “Spectre” to refer to both Variants 1 and 2 has caused some confusion over whether it's “fixed” or not.

Google Cloud instances are protected against all known inter-VM attacks, regardless of the patch status of the guest environments, and attackers do not have access to any other customers’ data as a result of these vulnerabilities. Google Cloud and other public clouds use virtualization technology to isolate neighboring customer workloads. A virtualization component known as a hypervisor connects the physical machine to virtual machines. This hypervisor can be updated to address Variant 2 threats. Google Cloud has updated its hypervisor using “Retpoline,” which addresses all currently known Variant 2 attack methods.

Variant 1 is the basis behind claims that Spectre is nearly impossible to protect against. The difficulty is that Variant 1 affects individual software binaries, so it must be handled by discovering and addressing exploits within each binary.

Risks that Variant 1 would pose to the infrastructure underpinning Google Cloud are addressed by the multiple security controls that make up our layered “defense in depth” security posture. Because Google is in full control of our infrastructure from the hardware up to our secure software development practices, our infrastructure is protected against Variant 1. You can read more about the security foundations of our infrastructure in our whitepaper.

We work continuously to stay ahead of the constantly evolving threat landscape and will continue to roll out additional protections to address potential risks.

As a user of the public cloud, am I more vulnerable to Spectre and Meltdown than others?

In many respects, public cloud users are better protected from security vulnerabilities than users of traditional datacenter-hosted applications. Security best practices rely on discovering vulnerabilities early and patching them promptly and completely. Each of these activities is aided by the scale and automation that top public cloud providers can offer—for example, few companies maintain a several-hundred-person security research team to find vulnerabilities and patch them before they're discovered by others or disclosed. The ability to update millions of servers in days, without causing user disruption or requiring maintenance windows, is difficult technology to develop, but it allows patches and updates to be deployed quickly after they become available, and without user disruption that can damage productivity.

Spectre and Meltdown are new and troubling vulnerabilities, but it’s important to remember that there are many different types of threats that Google (and other cloud providers) protect against every single day. Google’s cloud infrastructure doesn’t rely on any single technology to make it secure. Our stack builds security through progressive layers that deliver defense in depth. From the physical premises to the purpose-built servers, networking equipment, and custom security chips to the low-level software stack running on every machine, our entire hardware infrastructure is Google-controlled, -secured, -built and -hardened.

Is performance impacted?

On most of Google’s workloads, including our cloud infrastructure, we've seen negligible impact on performance after applying remediations. This was explained further in our follow-up Security blog post on January 4.

Many conflicting reports about patch impacts are being publicly discussed. In some cases, people have published results of tests that focus solely on making API calls to the operating system, which does not represent the real-world scenario that customer software will encounter. There's no substitute for testing to determine for yourself what performance you can expect in your actual situation. We believe solutions exist that introduce minimal performance impact, and expect such techniques will be adopted by software vendors over time. We designed and tested our mitigations for this issue to have minimal performance impact, and the rollout has been uneventful.

Where can I get additional information?

  • Our Support page offers a list of affected Google products and will be updated with their current status of mitigation against these risks

  • Our GCP Security Bulletins page will provide notifications as other operating system maintainers publish patches for this vulnerability and as Compute Engine releases updated OS images

Source: Google Cloud


What Google Cloud, G Suite and Chrome customers need to know about the industry-wide CPU vulnerability

Last year, Google’s Project Zero security team discovered a vulnerability affecting modern microprocessors. Since then, Google engineering teams have been working to protect our customers from the vulnerability across the entire suite of Google products, including Google Cloud Platform (GCP), G Suite applications, and the Google Chrome and Chrome OS products. We also collaborated with hardware and software manufacturers across the industry to help protect their users and the broader web.

All G Suite applications have already been updated to prevent all known attack vectors. G Suite customers and users do not need to take any action to be protected from the vulnerability.

GCP has already been updated to prevent all known vulnerabilities. Google Cloud is architected in a manner that enables us to update the environment while providing operational continuity for our customers. We used our VM Live Migration technology to perform the updates with no user impact, no forced maintenance windows and no required restarts.

Customers who use their own operating systems with GCP services may need to apply additional updates to their images; please refer to the GCP section of the Google Security blog post concerning this vulnerability for additional details. As more updates become available, they will be tracked on the Compute Engine Security Bulletins page.

Finally, customers using Chrome browser—including for G Suite or GCP—can take advantage of Site Isolation as an additional hardening feature across desktop platforms, including Chrome OS. Customers can turn on Site Isolation for a specific set of websites, or all websites.
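For admins who manage Chrome through enterprise policy, Site Isolation can be turned on with the `SitePerProcess` policy (isolate all sites) or the `IsolateOrigins` policy (isolate only a listed set of origins). The origins below are invented placeholders; the file path in the note is the standard managed-policy location for Google Chrome on Linux (other platforms use the Windows registry or macOS configuration profiles). A sketch of such a policy file:

```json
{
  "SitePerProcess": true,
  "IsolateOrigins": "https://mail.example.com,https://intranet.example.com"
}
```

On Linux this could live at, for example, `/etc/opt/chrome/policies/managed/site_isolation.json`. In practice you would set one key or the other: `SitePerProcess` isolates every site, making `IsolateOrigins` redundant. Individual users can get the same effect via `chrome://flags/#enable-site-per-process`.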

The Google Security blog includes more detailed information about this vulnerability and mitigations across all Google products.  

Source: Google Cloud


NAB 2017: How AI is remaking Hollywood

Greetings from Las Vegas, where the National Association of Broadcasters is having its annual conference. At NAB, 1,700 exhibitors and more than 100,000 attendees take over the Las Vegas Convention Center, representing a dozen industries including TV, movies, radio — and now, virtual reality.

And everybody here agrees. This is a big year for media.

Media/entertainment and cloud technologies are coming together. This changes the economics of the business, the ways people make and distribute content and how they relate to their audience. As the NAB put it introducing this year’s show, “It’s redesigning the very nature of how we live, work and play.”

Large-scale computing systems, next-gen software and ubiquitous networks simplify and enable the recording, editing and transmission of content to billions of personal devices. Companies now broadcast more content than ever, in a direct relationship with each audience member. The quality of this relationship relies heavily on the seamlessness and personalization of the experience. The cost benefits and ease of use of the cloud-based model are driving change in all aspects of the business.

NAB-gif2

As president of the customer team at Google Cloud, I find this story familiar and exciting. In media, our customers are seeing cost and time-to-market reductions of 90 percent or better, along with substantial performance improvements, by taking advantage of Google Cloud. Spotify has seen up to a 35x improvement in analytics performance, allowing it to greatly improve its personalization experience. For example, on-premises, its algorithms to identify top tracks took five hours; on BigQuery in Google Cloud, it takes eight minutes.

As lead VFX studio on Disney's The Jungle Book, MPC's artists built a complex, photo-real world

Scripps Networks Interactive saw its livestream TV Everywhere video plays grow by 844 percent in 2016. 

NAB-scripps

They use the cloud to not only run their multiscreen video experiences on mobile and connected devices, but also deliver personalized ads targeted to each and every user.

What excites me most is not simply that our customers have new ways to create, personalize or monetize their content, or that they have a new level of agility in their business, with storage and network charges below what they're paying just for the real estate where they keep their own servers.

These are both important, but most exciting is the way their digital assets are, like all data-rich businesses, coming into the age of artificial intelligence, particularly through machine learning.

NAB-gif1

In the case of media, machine learning allows customers to greatly scale activities that have historically been time-consuming and hard — for example, high quality translation and captioning to make content accessible to more audiences everywhere. It also enables completely new experiences — for example, companies can automatically create and deliver highlight reels of multi-hour sports matches for consumption on mobile devices, and build recommendation systems to ensure that their vast unmonetized long tail of content gets discovered by eager fans.    
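At its core, the highlight-reel idea is a selection problem: given per-segment "excitement" scores (which in practice would come from a trained model analyzing audio and video), pick the highest-scoring segments that fit a target duration. The sketch below is purely illustrative; the segment data and greedy strategy are invented for this example, not a description of any customer's system.

```python
def build_highlight_reel(segments, max_seconds):
    """Greedily pick the highest-scoring segments until the reel
    reaches the target duration.

    segments: list of (start_sec, end_sec, score) tuples.
    Returns the selected segments in chronological order.
    """
    # Consider the most exciting segments first.
    ranked = sorted(segments, key=lambda s: s[2], reverse=True)
    reel, used = [], 0.0
    for start, end, score in ranked:
        length = end - start
        if used + length <= max_seconds:
            reel.append((start, end, score))
            used += length
    # Present the picks in match order.
    return sorted(reel, key=lambda s: s[0])

# Hypothetical scored segments from a multi-hour match.
match = [
    (0, 30, 0.2),       # quiet opening
    (310, 340, 0.9),    # goal
    (1200, 1230, 0.7),  # near miss
    (5400, 5430, 0.95), # winning goal
]
print(build_highlight_reel(match, max_seconds=60))
```

A production system would add constraints (no overlapping clips, transitions, narrative ordering), but the shape of the problem — score, rank, select under a budget — stays the same.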

This isn't science fiction or a long-term research project. It's here now. Those examples are just a few of the ways our customers already use machine learning.

We look forward to doing much, much more, and hope you'll join us on the journey.

Source: Google Cloud


NCAA teams up with Google Cloud

Sports have the power to bring friends and family together, unite communities and inspire future generations. That’s why we’re so excited to be partnering with the NCAA® to make Google Cloud its official public cloud provider.

As part of its journey to the cloud, the NCAA is migrating 80+ years of historical and play-by-play data, from 90 championships and 24 sports, to Google Cloud Platform (GCP). To start, the NCAA will tap into decades of historical basketball data using BigQuery, Cloud Spanner, Datalab, Cloud Machine Learning and Cloud Dataflow, to power the analysis of team and player performance. In partnership with Turner Sports, our team will build a data-driven bracketology competition using historic NCAA data that will be integrated with public datasets, and data captured from live broadcasts. Fans and NCAA members will be able to search, compare and analyze team and player performance, as well as receive near real-time simulations for tournament analysis and forecasting. This will all kick off ahead of March Madness in 2018.
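The "near real-time simulations for tournament analysis and forecasting" mentioned above can be pictured as a Monte Carlo bracket simulation: estimate team strengths from historical data, then play the single-elimination bracket many times to get championship probabilities. Everything in the sketch below — the teams, ratings, and logistic win-probability model — is invented for illustration, not the NCAA's or Google's actual model.

```python
import random

def win_probability(rating_a, rating_b):
    """Logistic model mapping a rating gap to P(team A wins),
    similar in spirit to Elo."""
    return 1 / (1 + 10 ** ((rating_b - rating_a) / 400))

def simulate_bracket(teams, ratings, trials=10_000, seed=42):
    """Estimate each team's championship probability by playing
    the single-elimination bracket many times."""
    rng = random.Random(seed)
    titles = {t: 0 for t in teams}
    for _ in range(trials):
        field = list(teams)
        while len(field) > 1:
            winners = []
            for a, b in zip(field[::2], field[1::2]):
                p = win_probability(ratings[a], ratings[b])
                winners.append(a if rng.random() < p else b)
            field = winners
        titles[field[0]] += 1
    return {t: n / trials for t, n in titles.items()}

# Hypothetical four-team regional with made-up ratings.
teams = ["A", "B", "C", "D"]
ratings = {"A": 2000, "B": 1900, "C": 1850, "D": 1800}
print(simulate_bracket(teams, ratings))
```

Swapping the toy rating model for one fit on 80+ years of play-by-play data is where BigQuery and Cloud Machine Learning would come in.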

The NCAA also plans to use this data to create analysis workflows to build descriptive, predictive and diagnostic outputs that will help objectively determine and analyze the selection and seeding process across men’s and women’s sports. As part of this collaboration, we’ve also become the official NCAA Cloud Partner, in partnership with Turner Sports and CBS Sports, starting with the 2017-18 NCAA Division I men’s and women’s basketball seasons.

The mission of the NCAA has long been about serving the needs of schools, their teams and students. We’re proud to support that mission by helping the NCAA use data and machine learning to better engage with its millions of fans, nearly half-million college athletes and more than 19,000 teams. Game on!

Source: Google Cloud


5 ways to improve your hiring process in 2018

Editor’s note: Senior Product Manager Berit Hoffmann leads Hire, a recruiting application Google launched earlier this year. In this post, she shares five ways businesses can improve their hiring process and secure great talent.

With 2018 quickly approaching, businesses are evaluating their hiring needs for the new year.

According to a recent survey of 2,200 hiring managers, 46 percent of U.S. companies need to hire more people but have issues filling open positions with the right candidates. If your company lacks great hiring processes and tools, it can be easy to make sub-optimal hiring decisions, which can have negative repercussions.

We built Hire to help businesses hire the right talent more efficiently, and integrated it with G Suite to help teams collaborate more effectively throughout the process. As your business looks to invest in talent next year, here are five ways to positively impact your hiring outcomes.

1. Define the hiring process for each role.

Take time to define each stage of the hiring process, and think about if and how the process may need to differ from role to role. This will help you better tailor your evaluation of each candidate to company expectations, as well as the qualifications of a particular role.


Earlier this year, Google reviewed a subset of its own interview data to discover the optimal number of interviews needed in the hiring process to evaluate whether a candidate is right for Google. Statistical analysis showed that four interviews were enough to predict with 86 percent confidence whether someone should be hired. Of course, every company's hiring process varies according to size, role or industry—some businesses require double that number of interviews, whereas others may only need one.

Using Hire to manage your recruiting activities allows you to configure as many hiring process “templates” as you’d like, as well as use different ones for different roles. For example, you might vary the number of interview rounds based on department. Whatever process you define, you can bring all candidate activity and interactions together within Hire. Plus, Hire integrates with G Suite apps, like Gmail and Calendar, to help you coordinate the process.

2. Make jobs discoverable on Google Search.

For many businesses, sourcing candidates is one of the most time-consuming parts of the hiring process, so Google launched Job Search to help employers better showcase job opportunities in search results. Since launch, 60 percent more employers in the United States are showing jobs in search.

Making your open positions discoverable where people are searching is an important part of attracting the best talent. If you use Hire to post a job, the app automatically formats your public job posting so it is discoverable by job seekers in Google Search.

3. Make sure you get timely feedback from interviewers.

The sooner an interviewer provides feedback, the faster your hiring team can reach a decision, which improves the candidate’s experience. To help speed up feedback submissions, some companies like Genius.com use a “silent process” approach. This means interviewers are not allowed to discuss a candidate until they submit written feedback first.

Hire supports this “silent process” approach by hiding other people’s feedback from interviewers until they submit their own. We’ve found that this can incentivize employees to submit feedback faster because they want to see what their colleagues said. 63 percent of Hire interviewers leave feedback within 24 hours of an interview and 75 percent do so within 48 hours.

4. Make sure their feedback is thoughtful, too.

Beyond speedy feedback delivery, it’s perhaps more important to receive quality evaluations. Make sure your interviewers know how to write clear feedback and try to avoid common mistakes such as:

  1. Writing vague statements or summarizing a candidate’s resume.
  2. Restating information from rubrics or questionnaires rather than giving specific examples.
  3. Getting distracted by personality or evaluating attributes unrelated to the job.

One way you can encourage employees to stay focused when they interview a candidate is to assign them a specific topic to cover in the interview. In Hire, topics are included in each interviewer's Google Calendar invitation for easy reference without having to log into the app.

Maintaining a high standard for written feedback helps your team not only make hiring decisions today, but also helps you track candidates for future consideration. Even if you don’t hire someone for a particular role, the person might be a better fit for another position down the road. In Hire, you can find candidates easily with Google’s powerful search technology. Plus, Hire takes past interview feedback into account and ranks previous candidates higher if they’ve had positive feedback.

5. Stop letting internal processes slow you down.

If you don’t manage your hiring process effectively, it can be a huge time sink, especially as employers take longer and longer to hire talent. If your business lags on making a decision, it can mean losing a great candidate.

Implementing a solution like Hire can make it a lot easier for companies to move quickly through the hiring process. Native integrations with the G Suite apps you’re already using can help you cut down on copy-pasting or having to jump between multiple tabs. If you email a candidate in Gmail, it’s automatically synced in Hire so the rest of the hiring team can follow the conversation. And if you need to schedule a multi-slot interview, you can do so easily in Hire which lets you access interviewer availability or even book conference rooms. Since launching in July, we’ve seen the average time between posting a position and hiring a candidate decrease from 128 days to just 21 days (3 weeks!).

Hiring doesn’t have to be hard. Request a demo of Hire to see how you can speed up talent acquisition. Or learn more about how G Suite can help your teams transform the way they work.

Source: Google Cloud