Manage audio and video in Chrome with one click

We’ve all been there: You have lots of tabs open and one of them starts playing a video, but you can’t figure out which one. Or you’re listening to music in your browser in the background and want to change the song without stopping your work to find the right tab. 

With Chrome’s latest update, it’s now easier to control audio and video in your browser. Just click the icon in the top right corner of Chrome on desktop, open the new media hub and manage what’s playing from there. 

Chrome Media Controls

The new media hub in Chrome

Designed to minimize disruptions to whatever you need to get done in your browser, the new media hub helps you be more productive by bringing all your media notifications to one place and letting you manage audio and video playback without having to navigate between tabs. We first brought these media controls to Chromebooks in August, and today we rolled out the media hub in Chrome for Windows, Mac and Linux.


These new controls are the latest in a series of updates to enhance your media experience in Chrome, including support for media hardware keys for easy access to your media, and the Picture-in-Picture extension and API to help you with multitasking in your browser. We'll continue to add more functionality for you to control media in Chrome over time.

A new certificate to help people grow careers in IT

When Grow with Google launched the IT Support Professional Certificate, we aimed to equip learners around the world with the fundamentals to kickstart careers in information technology. Now, on the program’s two-year anniversary, we’re expanding our IT training offering with the new Google IT Automation with Python Professional Certificate. Python is now the most in-demand programming language, and more than 530,000 U.S. jobs, including 75,000 entry-level jobs, require Python proficiency. With this new certificate, you can learn Python, Git and IT automation within six months. The program includes a final project where learners will use their new skills to solve a problem they might encounter on the job, like building a web service using automation.

With over 100,000 people now enrolled in our original certificate program, we’ve seen how it can aid aspiring IT professionals. While working as a van driver in Washington, D.C., Yves Cooper took the course through Merit America, a Google.org-funded organization that helps working adults find new skills. Within five days of completing the program, he was offered a role as an IT helpdesk technician—a change that’s set him on a career path he’s excited about. All over the world, people like Yves are using this program to change their lives. In fact, 84 percent of people who take the program report a career impact—like getting a raise, finding a new job, or starting a business—within six months. 

Among the many people who’ve enrolled in the IT certificate, 60 percent identify as female, Black, Latino, or veteran—backgrounds that have historically been underrepresented in the tech industry. To ensure learners from underserved backgrounds have access to both IT Professional Certificates, Google.org will fund 2,500 need-based scholarships through nonprofits like Goodwill, Merit America, Per Scholas and Upwardly Global. Along with top employers like Walmart, Hulu and Sprint, Google considers program completers when hiring for IT roles. 

Self-paced and continuous education is one way we’re helping expand opportunity for all Americans. Our Grow with Google trainings and workshops have helped more than 3 million Americans grow their businesses and careers. With this new professional certificate, even more people can continue to grow their careers through technology. 

Get Ready for New SameSite=None; Secure Cookie Settings

This is a cross-post from the Chromium developer blog and is specific to how changes to Chrome may affect how your website works for your users in the future.

In May, Chrome announced a secure-by-default model for cookies, enabled by a new cookie classification system (spec). This initiative is part of our ongoing effort to improve privacy and security across the web.
Chrome plans to implement the new model with Chrome 80 in February 2020. Mozilla and Microsoft have also indicated intent to implement the new model in Firefox and Edge, on their own timelines. While the Chrome changes are still a few months away, it’s important that developers who manage cookies assess their readiness today. This blog post outlines high-level concepts; please see SameSite Cookies Explained on web.dev for developer guidance.

Understanding Cross-Site and Same-Site Cookie Context


Websites typically integrate external services for advertising, content recommendations, third party widgets, social embeds and other features. As you browse the web, these external services may store cookies in your browser and subsequently access those cookies to deliver personalized experiences or measure audience engagement. Every cookie has a domain associated with it. If the domain associated with a cookie matches an external service and not the website in the user’s address bar, this is considered a cross-site (or “third party”) context.

Less obvious cross-site use cases include situations where an entity that owns multiple websites uses a cookie across those properties. Although the same entity owns the cookie and the websites, this still counts as cross-site or “third party” context when the cookie’s domain does not match the site(s) from which the cookie is accessed.
When an external resource on a web page accesses a cookie that does not match the site domain, this is cross-site or “third-party” context.

In contrast, cookie access in a same-site (or “first party”) context occurs when a cookie’s domain matches the website domain in the user’s address bar. Same-site cookies are commonly used to keep people logged into individual websites, remember their preferences and support site analytics.


When a resource on a web page accesses a cookie that matches the site the user is visiting, this is same-site or “first party” context.
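To make the rule concrete, here is a toy sketch of the comparison in Python. Real browsers compute the registrable domain (the "site") using the Public Suffix List; this simplification just takes the last two hostname labels, so it would mishandle suffixes like .co.uk and is for illustration only:

```python
from urllib.parse import urlsplit

def site_of(url):
    # Simplified: treat the last two labels of the hostname as the "site".
    # Real implementations consult the Public Suffix List instead.
    host = urlsplit(url).hostname
    return ".".join(host.split(".")[-2:])

def cookie_context(page_url, resource_url):
    # Same-site when the resource's site matches the site in the address bar;
    # otherwise the access happens in a cross-site ("third party") context.
    if site_of(page_url) == site_of(resource_url):
        return "same-site"
    return "cross-site"
```

For example, a script loaded from static.example.com on a page at example.com is same-site, while a tracking pixel from ads.tracker.net on that page is cross-site.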

A New Model for Cookie Security and Transparency


Today, if a cookie is only intended to be accessed in a first party context, the developer has the option to apply one of two settings (SameSite=Lax or SameSite=Strict) to prevent external access. However, very few developers follow this recommended practice, leaving a large number of same-site cookies needlessly exposed to threats such as Cross-Site Request Forgery attacks.

To safeguard more websites and their users, the new secure-by-default model assumes all cookies should be protected from external access unless otherwise specified. Developers must use a new cookie setting, SameSite=None, to designate cookies for cross-site access. When the SameSite=None attribute is present, an additional Secure attribute must be used so cross-site cookies can only be accessed over HTTPS connections. This won’t mitigate all risks associated with cross-site access but it will provide protection against network attacks.
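For illustration, here is what the resulting Set-Cookie attributes look like using Python's standard-library http.cookies module (any server stack emits the same attribute names; note that SameSite support in this module requires Python 3.8+, and the cookie names here are made up for the example):

```python
from http.cookies import SimpleCookie

cookie = SimpleCookie()

# A cross-site cookie must opt in explicitly under the new model:
# SameSite=None together with Secure (HTTPS-only).
cookie["widget_session"] = "abc123"
cookie["widget_session"]["samesite"] = "None"
cookie["widget_session"]["secure"] = True

# A first-party cookie should declare Lax (or Strict) rather than
# relying on the browser's default behavior.
cookie["site_session"] = "xyz789"
cookie["site_session"]["samesite"] = "Lax"

print(cookie["widget_session"].OutputString())
print(cookie["site_session"].OutputString())
```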

Beyond the immediate security benefits, the explicit declaration of cross-site cookies enables greater transparency and user choice. For example, browsers could offer users fine-grained controls to manage cookies that are only accessed by a single site separately from cookies accessed across multiple sites.

Chrome Enforcement Starting in February 2020


Starting with Chrome 80 in February, cookies that have no declared SameSite value will be treated as SameSite=Lax cookies. Only cookies with the SameSite=None; Secure setting will be available for external access, provided they are being accessed from secure connections. The Chrome Platform Status trackers for SameSite=None and Secure will continue to be updated with the latest launch information.

Mozilla has affirmed their support of the new cookie classification model with their intent to implement the SameSite=None; Secure requirements for cross-site cookies in Firefox. Microsoft recently announced plans to begin implementing the model starting as an experiment in Microsoft Edge 80.

How to Prepare; Known Complexities


If you manage cross-site cookies, you will need to apply the SameSite=None; Secure setting to those cookies. Implementation should be straightforward for most developers, but we strongly encourage you to begin testing now to identify complexities and special cases, such as the following:

  • Not all languages and libraries support the None value yet, requiring developers to set the cookie header directly. This Github repository provides instructions for implementing SameSite=None; Secure in a variety of languages, libraries and frameworks.
  • Some browsers, including some versions of Chrome, Safari and UC Browser, might handle the None value in unintended ways, requiring developers to code exceptions for those clients. This includes Android WebViews powered by older versions of Chrome. Here’s a list of known incompatible clients.
  • App developers are advised to declare the appropriate SameSite cookie settings for Android WebViews based on versions of Chrome that are compatible with the None value, both for cookies accessed via HTTP(S) headers and via Android WebView's CookieManager API, although the new model will not be enforced on Android WebView until later.
  • Enterprise IT administrators may need to implement special policies to temporarily revert Chrome Browser to legacy behavior if some services such as single sign-on or internal applications are not ready for the February launch.
  • If you have cookies that you access in both a first and third-party context, you might consider using separate cookies to get the security benefits of SameSite=Lax in the first-party context.
SameSite Cookies Explained offers specific guidance for the situations above, and channels for raising issues and questions.

To test the effect of the new Chrome behavior on your site or cookies you manage, you can go to chrome://flags in Chrome 76+ and enable the “SameSite by default cookies” and “Cookies without SameSite must be secure” experiments. In addition, these experiments will be automatically enabled for a subset of Chrome 79 Beta users. Some Beta users with the experiments enabled could experience incompatibility issues with services that do not yet support the new model; users can opt out of the Beta experiments by going to chrome://flags and disabling them.

If you manage cookies that are only accessed in a same-site context (same-site cookies), there is no required action on your part; Chrome will automatically prevent those cookies from being accessed by external entities, even if the SameSite attribute is missing or no value is set. However, we strongly recommend you apply an appropriate SameSite value (Lax or Strict) and not rely on default browser behavior, since not all browsers protect same-site cookies by default.

Finally, if you’re concerned about the readiness of vendors and others who provide services to your website, you can check for Developer Tools console warnings in Chrome 77+ when a page contains cross-site cookies that are missing the required settings:

A cookie associated with a cross-site resource at (cookie domain) was set without the `SameSite` attribute. A future release of Chrome will only deliver cookies with cross-site requests if they are set with `SameSite=None` and `Secure`. You can review cookies in developer tools under Application > Storage > Cookies and see more details at https://www.chromestatus.com/feature/5088147346030592 and https://www.chromestatus.com/feature/5633521622188032.
Some providers (including some Google services) will implement the necessary changes in the months leading up to Chrome 80 in February; you may wish to reach out to your partners to confirm their readiness.

Chrome Beta for Android Update

Hi everyone! We've just released Chrome Beta 80 (80.0.3987.58) for Android: it's now available on Google Play.

You can see a partial list of the changes in the Git log. For details on new features, check out the Chromium blog, and for details on web platform updates, check here.

If you find a new issue, please let us know by filing a bug.

Krishna Govind
Google Chrome

Support for bushfire relief efforts

Australians and people around the world have watched in horror as the recent bushfire crisis unfolded. Communities right across Australia have been impacted, with more than 10 million hectares burned, thousands of homes damaged, wildlife injured or killed and 27 lives tragically lost to date. Our thoughts are with all those suffering.

Throughout the bushfire crisis, Australians have searched for updates on fire conditions near them, as well as safety information. In 2019, “fires near me” was the top Search query in Australia, highlighting the demand for accurate and timely information. We’ve provided support to help ensure people can access information from fire and emergency services authorities when they need it most.

In December, we announced Google staff in Australia had led a fundraising effort for the Australian Red Cross for bushfire relief. This campaign, with staff donations, matched contributions and a grant from Google’s charitable arm Google.org, has raised more than $3 million (AUD) to date for the Australian Red Cross, WWF and bushfire response and long-term recovery efforts. We hope this can play some part in helping affected communities.

More immediately, we’ve worked with Infoxchange to add a bushfire services section to the Ask Izzy website, which lists over 370,000 support services across Australia to connect people with help in times of need.

Australians are renowned for helping each other in times of need and our thoughts are with everyone impacted and those still at risk.

In coming months we will offer Grow with Google digital skills training for small businesses in impacted communities to help them get back on their feet and connect with customers. Our first free training for small businesses and nonprofits will be in Shellharbour on 10 March 2020.

More details to come about the training available near you. And stay tuned for updates as we continue to work with fire and emergency services to connect Australians with information when they need it most.

My Path to Google – Nada Elawad, Software Engineer

Welcome to the 42nd installment of our blog series “My Path to Google.” These are real stories from Googlers, interns, and alumni highlighting how they got to Google, what their roles are like, and even some tips on how to prepare for interviews.

Today’s post is all about Nada Elawad. Read on!

What’s your role at Google?
I am a Software Engineer at YouTube Knowledge, which is the part of YouTube that focuses on building a platform for classifiers and features that increase satisfaction and support our responsibility to viewers, creators and society.


What I like most about it is how I can see the impact we are making on the world in actual, measurable numbers. Also, at YouTube, we get to be in touch with creators (who have thousands, even millions, of subscribers). These creators have some of the loudest voices in our society today.
Nada at Google Zürich shortly after joining Google.
Can you tell us a bit about yourself?
I was born and raised in Cairo, Egypt. I received a Bachelor's degree in Computer Science from Ain Shams University. During college and before joining Google, I developed a passion for competitive programming that really made my years in college much more interesting. That passion I owe to the ACM (Association for Computing Machinery) community at my university, which was very challenging, yet fun, and pushed me forward.

On the leisure side, I love 3D Puzzles, video games, boats, and electric micromobility vehicles. I am also a huge fan of F.R.I.E.N.D.S and Tarantino movies.

What inspires you to come in every day?
What I like most about Google is how much they care about diversity and inclusion, and how much they care about their employees in general, from providing resources for them to learn and grow to making sure they are having fun and are happy at work.

From a user perspective, what I like most is how they keep all kinds of users from all places and backgrounds in mind when designing or launching a new product, and the way they always act on a global scale, so that everyone can use their products.
Nada conducts a fireside chat with Google Senior Fellow Jeff Dean at the opening of our new Engineering office in Paris.
Can you tell us about your decision to enter the process?
During college, Google was always that magical place that everyone talked about. It was very famous for being the coolest place to work and also the hardest to get into, which made it seem like the recruiting process would be very difficult. 

I had applied for every intern position during my first two years at college, and I was not at all confident I'd get a chance—I didn't at first. My first successful step towards Google was when I applied to attend Inside Look in Zürich, an event that gives university students an inside look at what it's like to work as a Software Engineer at Google. My application was accepted, but unfortunately my visa was rejected a week before the event.


Nada at the Googleplex in Mountain View, CA.



How did the recruitment process go for you?
As I was about to start my senior year of college, I was contacted by a Google recruiter following my previous visa rejection, to ask if I would be interested in applying for a full-time position this time — I definitely would! 

Due to travel issues, my recruiter worked with me to conduct the interviews online, for which I was very grateful, and yet worried it might not go as well as if it was onsite. However, my recruiter was amazingly reassuring. I decided to go ahead with my interviews online during final exams of my last semester. A week later I received the most incredible news—and two things got marked off my to-do list: (1) Travel and (2) Get a job at Google.

Nada relocated from Cairo to Google Paris!
What do you wish you’d known when you started the process?
I wish I had known that Google is not just looking for code-geniuses. Interviewers don’t expect you to go in and solve everything optimally in the first few minutes because that’s not how real problems are solved, but they do care about your thought process, how you approach a problem with a simple solution and move to a better, more optimal solution. This would have made me worry much less about getting everything right during the interviews and increased my confidence during the process.


Can you tell us about the resources you used to prepare for your interview or role?
I mainly used online judges, like CodeForces and TopCoder, on a daily basis to keep a problem-solving mindset. I refreshed my knowledge of data structures and algorithms using various blogs and online resources about getting hired at Google. These helped me get an overview of what I should focus on and not get overwhelmed by all the things I didn’t know. 

Since I had to do my interviews online, I mainly used Pramp to practice more effective communication. Also, I remember reading almost every question about working at Google and their recruitment process on Quora, which gave me a sufficiently comprehensive idea of every step along the way.


Nada at the FIFA World Cup semi finals, which she attended after working on a project related to the World Cup.
Do you have any tips you’d like to share with aspiring Googlers?
Take your time honing your problem-solving skills. Keep an open mind, as Google is a fast-growing, changing, and flexible place, where you can definitely find something to work on that interests you. Don't get discouraged if you don’t make it at first; many great Googlers didn’t get the job on their first few tries.


Grant SAML app access to specific groups

Quick launch summary 

You can now enable SAML apps for specific groups of users in your organization. You could previously only enable them by organizational unit (OU). This provides extra flexibility, as you can now turn apps on or off for sets of users without changing your organizational structure.

SAML apps enable users to access enterprise cloud applications after signing in just once through single sign-on (SSO). You can easily enable SAML with many pre-integrated applications in our third-party apps catalog, or you can set up custom SAML applications.

Use our Help Center to find out how to configure SAML applications.

Getting started 


  • Admins: This feature will be available by default and can be controlled at the group level. Visit the Help Center to learn more about how to configure SAML apps for G Suite.
  • End users: There is no end-user setting for this feature. 

Control SAML apps by groups 

Rollout pace 


Availability 



  • Available to all G Suite customers

Can You Trust Your Model’s Uncertainty?



In an ideal world, machine learning (ML) methods like deep learning are deployed to make predictions on data from the same distribution as that on which they were trained. But the practical reality can be quite different: camera lenses becoming blurry, sensors degrading, and changes to popular online topics can result in differences between the distribution of data on which the model was trained and to which a model is applied, leading to what is known as covariate shift. For example, it was recently observed that deep learning models trained to detect pneumonia in chest x-rays would achieve very different levels of accuracy when evaluated on previously unseen hospitals’ data, due in part to subtle differences in image acquisition and processing.

In “Can You Trust Your Model’s Uncertainty? Evaluating Predictive Uncertainty Under Dataset Shift,” presented at NeurIPS 2019, we benchmark the uncertainty of state-of-the-art deep learning models as they are exposed to both shifting data distributions and out-of-distribution data. In this work we consider a variety of input modalities, including images, text and online advertising data, exposing these deep learning models to increasingly shifted test data while carefully analyzing the behavior of their predictive probabilities. We also compare a variety of different methods for improving model uncertainty to see which strategies perform best under distribution shift.

What is Out-of-Distribution Data?
Deep learning models provide a probability with each prediction, representing the model confidence or uncertainty. As such, they can express what they don’t know and, correspondingly, abstain from prediction when the data is outside the realm of the original training dataset. In the case of covariate shift, uncertainty would ideally increase proportionally to any decrease in accuracy. A more extreme case is when data are not at all represented in the training set, i.e., when the data are out-of-distribution (OOD). For example, consider what happens when a cat-versus-dog image classifier is shown an image of an airplane. Would the model confidently predict incorrectly or would it assign a low probability to each class? In a related post we recently discussed methods we developed to identify such OOD examples. In this work we instead analyze the predictive uncertainty of models given out-of-distribution and shifted examples to see if the model probabilities reflect their ability to predict on such data.

Quantifying the Quality of Uncertainty
What does it mean for one model to have better representation of its uncertainty than another? While this can be a nuanced question that often is defined by a downstream task, there are ways to quantitatively assess the general quality of probabilistic predictions. For example, the meteorological community has carefully considered this question and developed a set of proper scoring rules that a comparison function for probabilistic weather forecasts should satisfy in order to be well-calibrated, while still rewarding accuracy. We applied several of these proper scoring rules, such as the Brier Score and Negative Log Likelihood (NLL), along with more intuitive heuristics, such as the expected calibration error (ECE), to understand how different ML models dealt with uncertainty under dataset shift.
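As a concrete reference, the Brier score and ECE mentioned above can be sketched in a few lines of NumPy (standard definitions, not the paper's exact evaluation code; `probs` is an (n_examples, n_classes) array of predicted class probabilities and `labels` the integer true classes):

```python
import numpy as np

def brier_score(probs, labels):
    # Mean squared error between predicted probabilities and one-hot labels.
    onehot = np.eye(probs.shape[1])[labels]
    return np.mean(np.sum((probs - onehot) ** 2, axis=1))

def expected_calibration_error(probs, labels, n_bins=10):
    conf = probs.max(axis=1)        # model confidence per example
    pred = probs.argmax(axis=1)     # predicted class
    correct = (pred == labels).astype(float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            # Gap between average accuracy and average confidence in this
            # bin, weighted by the fraction of examples falling in the bin.
            ece += mask.mean() * abs(correct[mask].mean() - conf[mask].mean())
    return ece
```

A perfectly calibrated, always-correct model scores 0 on both; a model that is 90% confident in the wrong class gets a large Brier score and an ECE near 0.9.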

Experiments
We analyze the effect of dataset shift on uncertainty across a variety of data modalities, including images, text, online advertising data and genomics. As an example, we illustrate the effect of dataset shift on the ImageNet dataset, a popular image understanding benchmark. ImageNet involves classifying over a million images into 1000 different categories. Some now consider this challenge mostly solved, and have developed harder variants, such as Corrupted ImageNet (or ImageNet-C), in which the data are augmented according to 16 different realistic corruptions, each at 5 different intensities.
We explore how model uncertainty behaves under changes to the data distribution, such as increasing intensities of the image perturbations used in Corrupted ImageNet. Shown here are examples of each type of image corruption, at intensity level 3 (of 5).
We used these corrupted images as examples of shifted data and examined the predictive probabilities of deep learning models as they were exposed to shifts of increasing intensity. Below we show box plots of the resulting accuracy and the ECE for each level of corruption (including uncorrupted test data), where each box aggregates across all corruption types in ImageNet-C. Each color represents a different type of model — a “vanilla” deep neural network used as a baseline, four uncertainty methods (dropout, temperature scaling and our last-layer approaches), and an ensemble approach.
Accuracy (top) and expected calibration error (bottom; lower is better) for increasing intensities of dataset shift on ImageNet-C. The decrease in accuracy is not matched by an increase in model uncertainty: both accuracy and ECE get worse as shift intensity grows.
As the shift intensity increases, the deviation in accuracy across corruption methods for each model increases (increasing box size), as expected, and the accuracy on the whole decreases. Ideally this would be reflected in increasing uncertainty of the model, thus leaving the expected calibration error (ECE) unchanged. However, looking at the lower plot of the ECE, one sees that this is not the case and that calibration generally suffers as well. We observed similar worsening trends for Brier score and NLL indicating that the models are not becoming increasingly unsure with shift, but instead are becoming confidently wrong.

One popular method to improve calibration is known as temperature scaling, a variant of Platt scaling, which involves smoothing the predictions after training, using performance on a held-out validation set. We observed that while this improved calibration on the standard test data, it often made things worse on shifted data! Thus, practitioners applying this technique should be wary of distributional shift.
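Temperature scaling itself is simple: divide the logits by a scalar T chosen on held-out data, then re-apply softmax; T > 1 softens overconfident predictions. A minimal grid-search sketch (the method is usually fit by gradient descent on validation NLL; a grid works for illustration):

```python
import numpy as np

def softmax(logits, T=1.0):
    # Temperature-scaled softmax; T=1 recovers the ordinary softmax.
    z = logits / T
    z = z - z.max(axis=1, keepdims=True)  # for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def fit_temperature(logits, labels, grid=np.linspace(0.5, 5.0, 46)):
    # Pick the temperature minimizing negative log likelihood (NLL)
    # on a held-out validation set.
    best_T, best_nll = 1.0, np.inf
    for T in grid:
        p = softmax(logits, T)
        nll = -np.mean(np.log(p[np.arange(len(labels)), labels] + 1e-12))
        if nll < best_nll:
            best_T, best_nll = T, nll
    return best_T
```

On a validation set containing confidently wrong predictions, the fitted T comes out above 1, softening all of the model's probabilities at test time.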

Fortunately, one method degrades much more gracefully than the others. Deep ensembles (green), which average the predictions of several models trained from different random initializations, are a simple strategy that significantly improves robustness to shift and outperformed all other methods we tested.
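The ensembling step itself is just an average over the members' predicted probabilities (a sketch; in practice each member is a full network trained independently from its own random initialization):

```python
import numpy as np

def ensemble_predict(member_probs):
    # member_probs: list of (n_examples, n_classes) softmax outputs,
    # one per independently trained ensemble member.
    # Averaging makes the ensemble less confident wherever members
    # disagree, which is the behavior we want under dataset shift.
    return np.mean(np.stack(member_probs, axis=0), axis=0)
```

When two members confidently disagree (say, 0.9 vs. 0.1 on the same class), the ensemble's prediction lands near 0.5, correctly signaling uncertainty.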

Summary and Recommended Best Practices
In our paper, we explored the behavior of state-of-the-art models under dataset shift across images, text, online advertising data and genomics. Our findings were mostly consistent across these different kinds of data. The quality of uncertainty degrades under dataset shift, but there are promising avenues of research to mitigate this. We hope that deep learning users take home the following messages from our study:
  1. Uncertainty under dataset shift is a real concern that needs to be considered when training models.
  2. Improving calibration and accuracy on an in-distribution test set often does not translate to improved calibration on shifted data.
  3. Out of all the methods we considered, deep ensembles are the most robust to dataset shift, and a relatively small ensemble size (e.g., 5) is sufficient. The effectiveness of ensembles presents interesting avenues for improving other approaches.
Improving the predictive uncertainty of deep learning models remains an active area of research in ML. We have released all of the code and model predictions from this benchmark in the hope that it will be useful to the community to drive and evaluate future work on this important topic.

Source: Google AI Blog


Can You Trust Your Model’s Uncertainty?



In an ideal world, machine learning (ML) methods like deep learning are deployed to make predictions on data from the same distribution as that on which they were trained. But the practical reality can be quite different: camera lenses becoming blurry, sensors degrading, and changes to popular online topics can result in differences between the distribution of data on which the model was trained and to which a model is applied, leading to what is known as covariate shift. For example, it was recently observed that deep learning models trained to detect pneumonia in chest x-rays would achieve very different levels of accuracy when evaluated on previously unseen hospitals’ data, due in part to subtle differences in image acquisition and processing.

In “Can you trust your model’s uncertainty? Evaluating Predictive Uncertainty Under Dataset Shift,presented at NeurIPS 2019, we benchmark the uncertainty of state-of-the-art deep learning models as they are exposed to both shifting data distributions and out-of-distribution data. In this work we consider a variety of input modalities, including images, text and online advertising data, exposing these deep learning models to increasingly shifted test data while carefully analyzing the behavior of their predictive probabilities. We also compare a variety of different methods for improving model uncertainty to see which strategies perform best under distribution shift.

What is Out-of-Distribution Data?
Deep learning models provide a probability with each prediction, representing the model confidence or uncertainty. As such, they can express what they don’t know and, correspondingly, abstain from prediction when the data is outside the realm of the original training dataset. In the case of covariate shift, uncertainty would ideally increase proportionally to any decrease in accuracy. A more extreme case is when data are not at all represented in the training set, i.e., when the data are out-of-distribution (OOD). For example, consider what happens when a cat-versus-dog image classifier is shown an image of an airplane. Would the model confidently predict incorrectly or would it assign a low probability to each class? In a related post we recently discussed methods we developed to identify such OOD examples. In this work we instead analyze the predictive uncertainty of models given out-of-distribution and shifted examples to see if the model probabilities reflect their ability to predict on such data.

Quantifying the Quality of Uncertainty
What does it mean for one model to have better representation of its uncertainty than another? While this can be a nuanced question that often is defined by a downstream task, there are ways to quantitatively assess the general quality of probabilistic predictions. For example, the meteorological community has carefully considered this question and developed a set of proper scoring rules that a comparison function for probabilistic weather forecasts should satisfy in order to be well-calibrated, while still rewarding accuracy. We applied several of these proper scoring rules, such as the Brier Score and Negative Log Likelihood (NLL), along with more intuitive heuristics, such as the expected calibration error (ECE), to understand how different ML models dealt with uncertainty under dataset shift.

Experiments
We analyze the effect of dataset shift on uncertainty across a variety of data modalities, including images, text, online advertising data and genomics. As an example, we illustrate the effect of dataset shift on the ImageNet dataset, a popular image understanding benchmark. ImageNet involves classifying over a million images into 1000 different categories. Some now consider this challenge mostly solved, and have developed harder variants, such as Corrupted ImageNet (or ImageNet-C), in which the data are augmented according to 16 different realistic corruptions, each at 5 different intensities.
We explore how model uncertainty behaves under changes to the data distribution, such as increasing intensities of the image perturbations used in Corrupted ImageNet. Shown here are examples of each type of image corruption, at intensity level 3 (of 5).
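To give a feel for how these graded corruptions work, here is a toy stand-in for one of them: additive Gaussian noise whose magnitude grows with the severity level. The standard-deviation schedule below is illustrative, not the exact parameters used by the ImageNet-C benchmark.

```python
import numpy as np

def gaussian_noise_corruption(image, severity=3):
    """Toy stand-in for one ImageNet-C corruption: additive Gaussian noise
    whose standard deviation grows with the severity level (1-5).
    `image` is a float array with pixel values in [0, 1]."""
    sigma = [0.04, 0.06, 0.09, 0.13, 0.20][severity - 1]  # illustrative schedule
    noisy = image + np.random.normal(0.0, sigma, image.shape)
    return np.clip(noisy, 0.0, 1.0)  # keep pixels in the valid range
```

Sweeping `severity` from 1 to 5 over a fixed test set produces a family of increasingly shifted datasets on which accuracy and calibration can both be tracked, which is the experimental design used below.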
We used these corrupted images as examples of shifted data and examined the predictive probabilities of deep learning models as they were exposed to shifts of increasing intensity. Below we show box plots of the resulting accuracy and the ECE for each level of corruption (including uncorrupted test data), where each box aggregates across all corruption types in ImageNet-C. Each color represents a different type of model — a “vanilla” deep neural network used as a baseline, four uncertainty methods (dropout, temperature scaling, and our last-layer approaches), and an ensemble approach.
Accuracy (top) and expected calibration error (bottom; lower is better) for increasing intensities of dataset shift on ImageNet-C. We observe that the decrease in accuracy is not reflected by an increase in uncertainty of the model, indicated by both accuracy and ECE getting worse.
As the shift intensity increases, the deviation in accuracy across corruption methods for each model increases (increasing box size), as expected, and the accuracy on the whole decreases. Ideally this would be reflected in increasing uncertainty of the model, thus leaving the expected calibration error (ECE) unchanged. However, looking at the lower plot of the ECE, one sees that this is not the case and that calibration generally suffers as well. We observed similar worsening trends for Brier score and NLL, indicating that the models are not becoming increasingly unsure with shift, but instead are becoming confidently wrong.

One popular method to improve calibration is known as temperature scaling, a variant of Platt scaling, which involves smoothing the predictions after training, using performance on a held-out validation set. We observed that while this improved calibration on the standard test data, it often made things worse on shifted data! Thus, practitioners applying this technique should be wary of distributional shift.
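Temperature scaling fits a single scalar T on held-out validation data and divides all logits by it before the softmax, softening (T > 1) or sharpening (T < 1) the predictions without changing which class is predicted. Below is a minimal sketch; the grid search stands in for the gradient-based NLL fit typically used in practice, and all function names are our own.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    z = logits / temperature
    z = z - z.max(axis=1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def fit_temperature(val_logits, val_labels, grid=np.linspace(0.5, 5.0, 91)):
    """Pick the scalar temperature that minimizes NLL on held-out data."""
    def nll(T):
        p = softmax(val_logits, T)
        return -np.mean(np.log(p[np.arange(len(val_labels)), val_labels] + 1e-12))
    return min(grid, key=nll)
```

At test time one simply computes `softmax(test_logits, fitted_T)`. The caveat from our experiments is that T is tuned on in-distribution validation data, so the correction it learns need not transfer to shifted test data.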

Fortunately, one method degrades in uncertainty much more gracefully than the others. Deep ensembles (green), which average the predictions of several models trained independently from different random initializations, are a simple strategy that significantly improves robustness to shift and outperforms all other methods tested.

Summary and Recommended Best Practices
In our paper, we explored the behavior of state-of-the-art models under dataset shift across images, text, online advertising data and genomics. Our findings were mostly consistent across these different kinds of data. The quality of uncertainty degrades under dataset shift, but there are promising avenues of research to mitigate this. We hope that deep learning users take home the following messages from our study:
  1. Uncertainty under dataset shift is a real concern that needs to be considered when training models.
  2. Improving calibration and accuracy on an in-distribution test set often does not translate to improved calibration on shifted data.
  3. Out of all the methods we considered, deep ensembles are the most robust to dataset shift, and a relatively small ensemble size (e.g., 5) is sufficient. The effectiveness of ensembles presents interesting avenues for improving other approaches.
Improving the predictive uncertainty of deep learning models remains an active area of research in ML. We have released all of the code and model predictions from this benchmark in the hope that it will be useful to the community to drive and evaluate future work on this important topic.

Source: Google AI Blog


Android Enterprise security whitepaper details defenses

Enterprises regularly contend with evolving security threats. Their mobile devices and operating systems must earn trust, so that IT teams, managers, and employees have confidence that their information is backed by strong security measures.

To assist our enterprise partners and customers with accurate and timely information about the Android approach to security, we’ve published a new update to the Android Enterprise Security Whitepaper. This document serves as a comprehensive overview of how Android enables best-in-class security by using multi-layered protections, Google-powered artificial intelligence and the collective contributions of the wider community.

The newest edition of this whitepaper includes the latest Android 10 security enhancements, which make Android even more secure and helpful for businesses. Learn about how Android has made it simpler to distribute updates and security patches through Google Play System updates, new VPN capabilities, and how Google Play Protect works to help protect enterprise devices. Android 10 also has a number of improvements that provide better security and privacy for employees, whether they are bringing their own devices to work or using ones issued by their employer. 

Additionally, the paper outlines key updates to the personal and corporate data separation of the Android work profile; details on device and profile management; and how the Android team continues to enhance and extend our defenses with initiatives like the Android Security Rewards program and the App Defense Alliance.

Check out the latest Android Enterprise Security Whitepaper for further details on our ongoing work to provide best-in-class security for the demanding needs of today’s enterprises.