Schneider Electric secures its teams through Android Enterprise

Editor’s note: Today’s post is by Simon Hardy-Bistagne, Director of Solution Architecture for Schneider Electric. The global company specializes in energy management and automation, with operations in more than 100 countries.

At Schneider Electric, we are responsible for providing sustainability and energy management systems for a global customer base. As the Director of Solution Architecture for our digital workplace, I lead a team that ensures our employees have access to all of the collaborative tools they need from wherever they’re working.

Android Enterprise is key to securely and flexibly managing Schneider Electric’s global workforce devices. We support a wide range of device-use scenarios for our employees, from fully-managed devices to personal smartphones securely enrolled with the Android work profile. The extensive, customizable and secure controls available with Android Enterprise ensure we are giving our teams the resources they need no matter where they’re working while protecting critical corporate applications and data.

Flexibility for every use case

We manage devices in over 117 countries. Android Enterprise has helped us shift to new working styles and embrace employee choice and work-life balance with powerful controls that meet our security needs. By enrolling personal devices with the Android work profile, we know that we are not only protecting our data and services, but we can prove to our employees that, with the work profile, “What you have here is your work life, what you have here is your personal life.” And that has revolutionized the way our teams use their mobile devices.

Security is at the core of everything we do, both from the perspective of servicing our customers and protecting our own corporate resources. So when we talk about implementing security and management services through Android Enterprise, it’s fundamental to get those basics right. Through Android Enterprise, we have powerful tools for safeguarding devices — like preventing the installation of unknown applications, disabling debug mode and preventing devices from being rooted. Putting these requirements and other key security configurations in place for both personal and company-owned devices is essential for our global business.

Thanks to the flexibility of Android Enterprise, we can also support a wide range of device use cases. For some employees, we use fully-managed mode for devices dedicated to specific tasks. Others who only want one phone for work and personal use can use a device with the work profile. And with managed Google Play, we can make both internal and public apps available on devices.

Ready for a hybrid work reality

Enrollment choice is important as well. We use devices from a variety of vendors, and we can set up those devices with the method that works best for each situation — like zero-touch enrollment or Samsung Knox Mobile Enrollment. With these options, end users can get the applications they need on their corporate devices and use them right away.

We also value the flexibility of allowing our end users to purchase their own Android device, or ask our IT team to enroll a personal device they’ve used for a couple of years. They can bring their device and easily enroll it into our managed estate with Android Enterprise.

Hybrid work is our present and future, and Android Enterprise is helping us navigate it. It gives our employees flexibility in device choice and management mode, and it gives my team comprehensive and effortless management tools that meet the security needs of our global operations.

To hear more about our mobility strategy, watch my discussion with Android Enterprise Security Specialist Mike Burr from The Art of Control digital event.

So you got new gear for the holidays. Now what?

The new year is here, and the holidays are (officially) over. If you were gifted a new Google gadget, that means it’s time to get your new gear out of the box and into your home or pocket.

We talked to the experts here at Google and asked for a few of their quick setup tips, so you can get straight to using your new…whatever you got…right away.

So you got a Pixel 6 Pro…

  1. Begin by setting up fingerprint unlock for quick and easy access.
  2. Prepare for future emergencies and turn on the extreme battery saver feature in the settings app. Extreme battery saver can extend your Pixel 6 Pro’s battery life by intelligently pausing apps and slowing processes, and you can preselect when you want to enable the feature — and what your priority apps are.
  3. Create a personal aesthetic with Material You, and express character by customizing wallpaper and interface designs that will give your Pixel 6 Pro’s display a more uniform look.

So you got a Nest Hub Max…

  1. First, set up Face Match to ensure your Nest Hub Max can quickly identify you as the user and share a more personal experience. Then, when you walk up to the device it can do things like present your daily schedule, play your favorite playlist or suggest recommended videos, news and podcasts.
  2. Set up a Duo account for video calling and messaging with your friends and family. From there, you can ask Nest Hub Max to call anyone in your Google contacts who has Duo — just say, “Hey Google, call (your contact name).” For family members or friends who don't already have Duo, the app is free and available for download on both Android and iOS.
  3. Be sure to connect your Nest Hub Max to any other Google gear, such as the Chromecast and Nest Mini for a smart home experience.

The Nest Hub Max.

So you got the new Nest Thermostat…

  1. Use Quick Schedule to easily and quickly get your thermostat programmed. You can go with its recommended presets or adjust the settings further to create a custom schedule. You can make changes to your schedule anytime from the Home app.
  2. Then you can opt in to Home & Away Routines, which can help you avoid heating or cooling an empty house by using motion sensing and your phone’s location to know when nobody’s home, adjusting the temperature accordingly to save energy.
  3. Make sure you’ve enabled notifications, and Savings Finder will proactively suggest small tweaks to your schedule that you can accept from the Home app. For example, it might suggest a small change to your sleep temperature to save energy.

So you got the new Pixel Buds A-Series…

  1. Check out the Pixel Buds A-Series’ latest feature, the bass customization option, to find your perfect sound. This addition doubles the bass range when connected to a device running Android 6.0 or higher, and can be adjusted on a scale from -1 to 4 using the Pixel Buds app.
  2. Here’s a hardware tip: Try out the three different ear tip fit options to find the most comfortable fit for you.
  3. Start listening to your favorite podcasts and music right away by using Fast Pair to immediately connect your Pixel Buds to your phone.

Learning to Route by Task for Efficient Inference

Scaling large language models has resulted in significant quality improvements in natural language understanding (T5), generation (GPT-3) and multilingual neural machine translation (M4). One common approach to building a larger model is to increase the depth (number of layers) and width (layer dimensionality), simply enlarging existing dimensions of the network. Such dense models take an input sequence (divided into smaller components, called tokens) and pass every token through the full network, activating every layer and parameter. While these large, dense models have achieved state-of-the-art results on multiple natural language processing (NLP) tasks, their training cost increases linearly with model size.

An alternative, and increasingly popular, approach is to build sparsely activated models based on a mixture of experts (MoE) (e.g., GShard-M4 or GLaM), where each token passed to the network follows a separate subnetwork by skipping some of the model parameters. The choice of how to distribute the input tokens to each subnetwork (the “experts”) is determined by small router networks that are trained together with the rest of the network. This allows researchers to increase model size (and hence, performance) without a proportional increase in training cost.
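
To make the routing idea concrete, here is a minimal sketch of token-level top-2 routing in plain NumPy. The names, shapes and softmax router are illustrative only, not the production implementation:

```python
import numpy as np

def token_level_moe(tokens, experts, router_weights, k=2):
    """Route each token independently to its top-k experts (token-level MoE).

    tokens:         [num_tokens, d_model] array of token activations.
    experts:        list of callables, each mapping [d_model] -> [d_model].
    router_weights: [d_model, num_experts] learned routing matrix.
    """
    logits = tokens @ router_weights                     # [num_tokens, num_experts]
    probs = np.exp(logits - logits.max(-1, keepdims=True))
    probs /= probs.sum(-1, keepdims=True)                # softmax over experts
    out = np.zeros_like(tokens)
    for i, tok in enumerate(tokens):
        top_k = np.argsort(probs[i])[-k:]                # this token's chosen experts
        for e in top_k:
            out[i] += probs[i, e] * experts[e](tok)      # weighted mixture of outputs
    return out
```

Because each token makes its own routing decision, tokens from the same input can fan out across many experts, which is what later makes serving expensive.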

While this is an effective strategy at training time, sending the tokens of a long sequence to multiple experts again makes inference computationally expensive, because the experts have to be distributed among a large number of accelerators. For example, serving the 1.2T-parameter GLaM model requires 256 TPU-v3 chips. Much like dense models, the number of processors needed to serve an MoE model still scales linearly with respect to the model size, increasing compute requirements while also resulting in significant communication overhead and added engineering complexity.

In “Beyond Distillation: Task-level Mixture-of-Experts for Efficient Inference”, we introduce a method called Task-level Mixture-of-Experts (TaskMoE) that takes advantage of the quality gains of model scaling while still being efficient to serve. Our solution is to train a large multi-task model from which we then extract smaller, stand-alone per-task subnetworks suitable for inference, with no loss in model quality and with significantly reduced inference latency. We demonstrate the effectiveness of this method for multilingual neural machine translation (NMT), comparing it to other mixture-of-experts models and to models compressed using knowledge distillation.

Training Large Sparsely Activated Models with Task Information
We train a sparsely activated model, where router networks learn to send tokens of each task-specific input to different subnetworks of the model associated with the task of interest. For example, in the case of multilingual NMT, every token of a given language is routed to the same subnetwork. This differs from other recent approaches, such as the sparsely gated mixture of expert models (e.g., TokenMoE), where router networks learn to send different tokens in an input to different subnetworks independent of task.
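
As a contrast with the token-level router sketched above, a task-level router makes one decision per task, so every token of that task flows through the same experts. Again, this is a hedged sketch with illustrative names, not the paper's implementation:

```python
def task_level_moe(tokens, task_id, experts, task_router, k=2):
    """Route ALL tokens of one task through the same top-k experts.

    task_router: [num_tasks, num_experts] learned per-task routing logits.
    """
    logits = task_router[task_id]                    # one routing decision per task
    top_k = np.argsort(logits)[-k:]
    w = np.exp(logits[top_k])
    w /= w.sum()                                     # softmax over the chosen experts
    out = np.zeros_like(tokens)
    for weight, e in zip(w, top_k):
        out += weight * np.stack([experts[e](t) for t in tokens])
    return out
```

In multilingual NMT, the task identity would be the language (source, target or both), so the routing decision is known before any token is processed.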

Inference: Bypassing Distillation by Extracting Subnetworks
A consequence of this difference in training between TaskMoE and models like TokenMoE is in how we approach inference. Because TokenMoE follows the practice of distributing tokens of the same task to many experts at both training and inference time, it is still computationally expensive at inference.

For TaskMoE, we dedicate a smaller subnetwork to a single task identity during training and inference. At inference time, we extract subnetworks by discarding unused experts for each task. TaskMoE and its variants enable us to train a single large multi-task network and then use a separate subnetwork at inference time for each task without using any additional compression methods post-training. We illustrate the process of training a TaskMoE network and then extracting per-task subnetworks for inference below.
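
Continuing the hedged sketch above, extraction then amounts to dropping the experts a task never uses (illustrative code, not the released implementation):

```python
def extract_task_subnetwork(experts, task_router, task_id, k=2):
    """Keep only the k experts selected for this task; discard the rest."""
    kept = np.argsort(task_router[task_id])[-k:]
    return [experts[e] for e in kept]    # stand-alone per-task subnetwork
```

The extracted subnetwork is a small, dense-like model for that one task, which is why it can be served on far fewer accelerators than the full MoE.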

During training, tokens of the same language are routed to the same expert based on language information (either source, target or both) in task-based MoE. Later, during inference we extract subnetworks for each task and discard unused experts.

To demonstrate this approach, we train models based on the Transformer architecture. Similar to GShard-M4 and GLaM, we replace the feedforward network of every other Transformer layer with a Mixture-of-Experts (MoE) layer that consists of multiple identical feedforward networks, the “experts”. For each task, the routing network, trained along with the rest of the model, keeps track of the task identity for all input tokens and chooses a certain number of experts per layer (two in this case) to form the task-specific subnetwork. The baseline dense Transformer model has 143M parameters and 6 layers on both the encoder and decoder. The TaskMoE and TokenMoE models that we train are also both 6 layers deep, but with 32 experts for every MoE layer, for a total of 533M parameters. We train our models using publicly available WMT datasets, with over 431M sentences across 30 language pairs from different language families and scripts. We refer the reader to the full paper for further details.

Results
In order to demonstrate the advantage of using TaskMoE at inference time, we compare the throughput, or the number of tokens decoded per second, for TaskMoE, TokenMoE, and a baseline dense model. Once the subnetwork for each task is extracted, TaskMoE is 7x smaller than the 533M-parameter TokenMoE model, and it can be served on a single TPUv3 core, instead of the 64 cores required for TokenMoE. We see that TaskMoE has a peak throughput twice as high as that of TokenMoE. In addition, on inspecting the TokenMoE model, we find that 25% of its inference time is spent on inter-device communication, while TaskMoE spends virtually no time on communication.
Comparing the throughput of TaskMoE with TokenMoE across different batch sizes. The maximum batch size for TokenMoE is 1024 as opposed to 4096 for TaskMoE and the dense baseline model. Here, TokenMoE has one instance distributed across 64 TPUv3 cores, while TaskMoE and the baseline model have one instance on each of the 64 cores.

A popular approach to building a smaller network that still performs well is through knowledge distillation, in which a large teacher model trains a smaller student model with the goal of matching the teacher’s performance. However, this method comes at the cost of additional computation needed to train the student from the teacher. So, we also compare TaskMoE to a baseline TokenMoE model that we compress using knowledge distillation. The compressed TokenMoE model has a size comparable to the per-task subnetwork extracted from TaskMoE.

We find that in addition to being a simpler method that does not need any additional training, TaskMoE improves upon a distilled TokenMoE model by 2.1 BLEU on average across all languages in our multilingual translation model. We note that distillation retains 43% of the performance gains achieved from scaling a dense multilingual model to a TokenMoE, whereas extracting the smaller subnetwork from the TaskMoE model results in no loss of quality.

BLEU scores (higher is better) comparing a distilled TokenMoE model to the TaskMoE and TokenMoE models with 12 layers (6 on the encoder and 6 on the decoder) and 32 experts. While both approaches improve upon a multilingual dense baseline, TaskMoE improves upon the baseline by 3.1 BLEU on average while distilling from TokenMoE improves upon the baseline by 1.0 BLEU on average.

Next Steps
The quality improvements often seen with scaling machine learning models have incentivized the research community to work toward advancing scaling technology to enable efficient training of large models. The emerging need to train models capable of generalizing to multiple tasks and modalities only increases the need for scaling models even further. However, the practicality of serving these large models remains a major challenge. Efficiently deploying large models is an important direction of research, and we believe TaskMoE is a promising step towards more inference-friendly algorithms that retain the quality gains of scaling.

Acknowledgements
We would first like to thank our coauthors: Yanping Huang, Ankur Bapna, Maxim Krikun, Dmitry Lepikhin and Minh-Thang Luong. We would also like to thank Wolfgang Macherey, Yuanzhong Xu, Zhifeng Chen and Macduff Richard Hughes for their helpful feedback. Special thanks to the Translate and Brain teams for their useful input and discussions, and the entire GShard development team for their foundational contributions to this project. We would also like to thank Tom Small for creating the animations for the blog post.

Source: Google AI Blog


This talking Doogler deserves a round of a-paws

Ever wonder what your dog is thinking? You’re not alone. Over the last year, dog “talking” buttons have taken the pet world by storm. With the push — er paw — of a button, dogs are now “telling” their humans what they need, whether that’s water, food or to go outside. Some pups have even become social media famous for their impressive vocabulary, inspiring dog owners everywhere to pick up a set of buttons. Or, in the case of Rutledge Chin Feman, a software engineer for Google Nest, to try building his own DIY version.

“I know I’m biased, but Cosmo is obviously the best dog in the world,” says Rutledge of his pup. When he and his wife adopted Cosmo, a German Shepherd mix and the first dog for both of them, they noticed right away he was skittish. “He was afraid of everything and would do a lot of lunging and barking. It was kind of a forcing function to learn a lot about positive reinforcement training techniques and desensitization…which is how I stumbled on all of this.”

Cosmo stands on a sidewalk next to a table, chairs and a chalkboard sign. A person is sitting next to him, holding his leash and wearing blue jeans and a brown pair of shoes.

After Rutledge saw a video of dog-talking buttons that blew his mind, he started to build his own set for Cosmo — the perfect hobby to blend his passions for engineering and animals.

He used an electronics prototyping board (a “breadboard”) to hold the buttons, and a small computer (a “Raspberry Pi”) to activate them with light, sound and notifications. The first model, made from a wooden wine box, had just three buttons: “food,” “water” and “outside,” which Rutledge recorded with his own voice. Now, Cosmo’s up to seven buttons — with “ball,” “later,” “love you” and “scritches” (otherwise known as “belly rubs”) added to the mix.
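
The post doesn't include Rutledge's code, but a minimal sketch of the idea, assuming the common RPi.GPIO and playsound libraries, might look like the following. The pin numbers, sound-file paths and the notification hook are all hypothetical:

```python
import time
import RPi.GPIO as GPIO
from playsound import playsound

# Hypothetical wiring: one GPIO pin and one recorded clip per button.
BUTTONS = {
    17: "sounds/food.mp3",
    27: "sounds/water.mp3",
    22: "sounds/outside.mp3",
}

def notify(message):
    print(message)  # stand-in for a real text/push notification hook

def on_press(pin):
    playsound(BUTTONS[pin])                   # play the recorded word aloud
    notify(f"Cosmo pressed: {BUTTONS[pin]}")

GPIO.setmode(GPIO.BCM)
for pin in BUTTONS:
    GPIO.setup(pin, GPIO.IN, pull_up_down=GPIO.PUD_UP)  # button pulls pin low
    GPIO.add_event_detect(pin, GPIO.FALLING, callback=on_press, bouncetime=500)

try:
    while True:
        time.sleep(1)   # idle; presses arrive via GPIO callbacks
finally:
    GPIO.cleanup()
```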

At one point, Rutledge even set the board up so that he received text messages whenever Cosmo pressed a button. “I’ve been in meetings where I’m like, ‘Hold on a sec, I need to silence my phone. My dog is blowing me up right now.’”

Rutledge and his wife will add a “baby” button to Cosmo’s board next, now that their newest family member has arrived. And the wheels are already turning for Rutledge: “I think it will only be a few months before a baby could push buttons and say things — ask for more food or whatever. I think that will be a fun experiment.”

For now, he’s focused on Cosmo and continuing to strengthen their bond using the button board. “I think it’s a really powerful way to see your pet. It really reminds you that they’re intelligent beings who are capable of profound thought, in a way. And they’re constantly observing their world in a way that we don’t usually really give them credit for.”

“It’s a really simple device, it’s all just for fun,” he adds of the process. “And obviously, Cosmo’s a very good boy.”

Cosmo is looking up and putting his paw on a rectangular wooden board. He is sitting on a blue patterned rug with a white blanket next to him.

DeepNull: an open-source method to improve the discovery power of genetic association studies

In our paper “DeepNull models non-linear covariate effects to improve phenotypic prediction and association power,” we proposed a new method, DeepNull, to model the complex relationships between covariates and phenotypes and thereby improve genome-wide association study (GWAS) results. We have released DeepNull as open-source software, with a Colab notebook tutorial for its use.

Human Genetics 101

Each individual’s genetic data carries health information, such as why certain individuals have a lower risk of developing skin cancer than others, or why certain drugs differ in effectiveness between individuals. Genetic data is encoded in the human genome, a DNA sequence composed of a chain roughly 3 billion positions long, built from four possible nucleotides (A, C, G, and T). Only a small subset of the genome (~4-5 million positions) varies between two individuals. One of the goals of genetic studies is to detect variants that are associated with different phenotypes (e.g., risk of diseases such as glaucoma, or observed phenotypic values such as high-density lipoprotein (HDL), low-density lipoprotein (LDL), height, etc.).

Genome-wide association studies

GWAS are used to associate genetic variants with complex traits and diseases. To more accurately estimate the strength of an association between genotype and phenotype, covariates (such as age and sex) and principal components (PCs) of the genotypes must be adjusted for. Covariate adjustment in GWAS can increase precision and correct for confounding. In the linear model setting, adjusting for a covariate improves precision (i.e., statistical power) if the distribution of the phenotype differs across levels of the covariate. For example, when performing GWAS on height, males and females have different means. All state-of-the-art methods (e.g., BOLT-LMM, regenie) perform GWAS assuming that the effects of genotypes and covariates on phenotype are linear and additive. However, we know that the assumption of linear and additive covariate contributions often does not reflect the underlying biology, so we sought a method to more comprehensively model and adjust for covariate-phenotype interactions in GWAS.
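
As a toy illustration of that standard linear setting, here is how one variant might be tested with linear, additive covariate adjustment. The simulated data and covariate choices are hypothetical, not the UK Biobank pipeline:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1000
g = rng.integers(0, 3, n).astype(float)      # variant: 0/1/2 allele counts
age = rng.uniform(40, 70, n)                 # covariates: age, sex
sex = rng.integers(0, 2, n).astype(float)
y = 0.1 * g + 0.05 * age + 0.3 * sex + rng.normal(size=n)  # phenotype

X = sm.add_constant(np.column_stack([g, age, sex]))  # linear, additive model
fit = sm.OLS(y, X).fit()
print(fit.pvalues[1])   # association p-value for the variant after adjustment
```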

DeepNull method overview

We proposed a new method, DeepNull, to relax the linearity assumption on covariate effects on phenotypes. DeepNull trains a deep neural network (DNN) to predict the phenotype from all covariates using 5-fold cross-validation. After training the DeepNull model, we make phenotype predictions for all individuals and add this prediction as one additional covariate in the association test. Major advantages of DeepNull are its ease of use and the fact that it requires only a minimal change to existing GWAS pipeline implementations: to use DeepNull, we simply add one additional covariate, computed by DeepNull, to the existing pipeline.
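
Schematically, that extra covariate could be computed as below, assuming a hypothetical fit_dnn helper that trains the covariate-only network; see the open-source package for the real interface:

```python
import numpy as np
from sklearn.model_selection import KFold

def deepnull_covariate(covariates, phenotype, fit_dnn, n_splits=5):
    """Out-of-fold DNN prediction of phenotype from covariates only.

    The returned vector is appended as one additional covariate in the
    standard GWAS association test; the rest of the pipeline is unchanged.
    """
    pred = np.zeros(len(phenotype))
    kf = KFold(n_splits=n_splits, shuffle=True, random_state=0)
    for train_idx, test_idx in kf.split(covariates):
        model = fit_dnn(covariates[train_idx], phenotype[train_idx])  # hypothetical trainer
        pred[test_idx] = model.predict(covariates[test_idx])
    return pred
```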

DeepNull improves statistical power

We simulated data under different genetic architectures (genetic conditions) to first check that DeepNull controls type I error, and then to compare DeepNull's statistical power with current state-of-the-art methods (hereafter referred to as “Baseline”). First, we simulated data under genetic architectures where covariates have a linear effect on phenotype and observed that both Baseline and DeepNull have tight control of type I error. Notably, DeepNull's power does not decrease compared to Baseline in this setting, where covariates have only a linear effect on phenotype. Next, we simulated data under genetic architectures where covariates have non-linear effects on phenotype. Again, both Baseline and DeepNull have tight control of type I error, while DeepNull increases statistical power, depending on the genetic architecture. We observed that for certain genetic architectures, DeepNull increases statistical power by up to 20%. Below, we compare the -log p-values of test statistics computed by DeepNull versus Baseline for apolipoprotein B (ApoB) levels obtained from the UK Biobank:
Figure 1. Significance-level comparison of DeepNull vs. Baseline. The x-axis shows the -log p-value for Baseline and the y-axis shows the -log p-value for DeepNull. Orange dots indicate variants significant for Baseline but not for DeepNull; green dots indicate variants significant for DeepNull but not for Baseline.

DeepNull improves phenotype prediction

We applied DeepNull to predict phenotypes by utilizing polygenic risk scores (PRS) and existing covariates such as age and sex. We considered 10 phenotypes obtained from the UK Biobank. We observed that DeepNull on average increased phenotype prediction (R², where R is the Pearson correlation) by 23%. More strikingly, in the case of glaucoma referral probability, which is computed from fundus images (Phene et al., Ophthalmology 2019; Alipanahi et al., AJHG 2021), DeepNull improves phenotype prediction by 83.4%, and in the case of LDL, by 40.3%. A summary of DeepNull results versus Baseline is shown in Figure 2 below:


Figure 2. DeepNull improves phenotype prediction compared to Baseline. The y-axis shows R², where R is the Pearson correlation between the true and predicted values of the phenotype. Phenotypic abbreviations: alkaline phosphatase (ALP), alanine aminotransferase (ALT), aspartate aminotransferase (AST), apolipoprotein B (ApoB), glaucoma referral probability (GRP), LDL cholesterol (LDL), sex hormone-binding globulin (SHBG), and triglycerides (TG).

Conclusion

We proposed a new framework, DeepNull, that can model nonlinear covariate effects on phenotypes when such nonlinearity exists. We show that DeepNull can substantially improve phenotype prediction. In addition, we show that DeepNull achieves results similar to a standard GWAS when the covariate effects on the phenotype are linear, and can significantly outperform a standard GWAS when the covariate effects are nonlinear. DeepNull is open source and is available for download from GitHub or installation via PyPI.

By Farhad Hormozdiari and Andrew Carroll – Genomics team in HealthAI

Acknowledgments

This blog summarizes the work of the following Google contributors, who we would like to thank: Zachary R. McCaw, Thomas Colthurst, Ted Yun, Nick Furlotte, Babak Alipanahi, and Cory Y. McLean. In addition, we would like to thank Alkes Price, Babak Behsaz, and Justin Cosentino for their invaluable comments and suggestions.

Increasing Google’s investment in the UK

Image credit: Pollitt & Partners 2015

For almost two decades Google has been proud to have a home in the UK. Today, we have more than 6,400 employees and last year we added nearly 700 new people. We also strengthened our commitment to the UK in 2021 with the laying of a new subsea cable — Grace Hopper — which runs between the United States and the UK.

Building on our long-term commitment to the UK, we are purchasing the Central Saint Giles development — the site many Googlers have long called home — for $1 billion. Based in London’s thriving West End, our investment in this striking Renzo Piano-designed development represents our continued confidence in the office as a place for in-person collaboration and connection.

Across all our UK sites, Google will have capacity for 10,000 employees, as we continue to commit to the UK’s growth and success. This includes our new King’s Cross development, which is currently under construction.

Investing in the future flexible workplace

We believe that the future of work is flexibility. Whilst the majority of our UK employees want to be on-site some of the time, they also want the flexibility of working from home a couple of days a week. Some of our people will want to be fully remote. Our future UK workplace has room for all of those possibilities.

Over the next few years, we’ll be embarking on a multi-million pound refurbishment of our offices within Central Saint Giles to ensure that they are best equipped to meet the needs of our future workplace.

We'll be introducing new types of collaboration spaces for in-person teamwork, as well as creating more overall space to improve wellbeing. We’ll introduce team pods, which are flexible new space types that can be reconfigured in multiple ways, supporting focused work, collaboration or both, based on team needs. The new refurbishment will also feature outdoor covered working spaces to enable work in the fresh air.

Supporting digital growth across the UK

More than ever, technology is enabling people and businesses across the UK. In 2021, we met our target of helping one million small British businesses stay open by making it easier for them to be found online.

It’s important that everyone is able to take advantage of the increasing innovation in the UK and grow their skill sets to prepare for the jobs of the present and the future. Since we launched our Digital Garage programme in Leeds in 2015, we have provided free digital skills training to more than 700,000 people across the UK.

Thousands more UK jobseekers will also be helped to upgrade their digital skills in 2022 thanks to our expanded partnership with the Department for Work and Pensions (DWP). Nearly 10,000 jobseekers will be able to access free scholarships to earn a Google Career Certificate in high-growth, high-demand career fields including IT support, data analysis, project management and UX design.

We’re optimistic about the potential of digital technology to drive an inclusive and sustainable future in the UK. We’re excited to be making this investment in January as a fitting way to start the new year.

Beta Channel Update for Chrome OS

The Beta channel is being updated to 98.0.4758.51 (Platform version: 14388.27.0) for most Chrome OS devices.

If you find new issues, please let us know by visiting our forum or filing a bug. Interested in switching channels? Find out how. You can submit feedback using ‘Report an issue...’ in the Chrome menu (3 vertical dots in the upper right corner of the browser).

Matt Nelson,

Google Chrome OS 

Some facts about Google Analytics data privacy

The web has to work for users, advertisers, and publishers of all sizes — but users first. And with good reason: people are using the internet in larger numbers for more daily needs than ever. They don’t want privacy as an afterthought; they want privacy by design.

Understanding this is core to how we think about building Google Analytics, a set of everyday tools that help organizations in the commercial, public, and nonprofit sectors understand how visitors use their sites and apps — but never by identifying individuals or tracking them across sites or apps.

Because some of these organizations lately have faced questions about whether an analytics service can be compatible with user privacy and the rules for international transfers of personal data, we wanted to explain what Google Analytics does, and just as important, what it does not do.

Fact: Google Analytics is a service used by organizations to understand how their sites and apps are used, so that they can make them work better. It does not track people or profile people across the internet.

  • Google Analytics cannot be used to track people across the web or apps. It does not create user profiles.
  • Google Analytics helps owners of apps and websites understand how their users are engaging with their sites and apps (and only their site or app). For example, it can help them understand which sections of an online newspaper have the most readers, or how often shopping carts are abandoned for an online store. This is what helps them improve the experience for their customers by better understanding what’s working or not working.
  • This kind of information also includes things like the type of device or browser used; how long, on average, visitors spend on their site or app; or roughly where in the world their visitors are coming from. These data points are never used to identify the visitor or anyone else in Google Analytics.

Google Analytics customers are prohibited from uploading information that could be used by Google to identify a person. We provide our customers with data deletion tools to help them promptly remove data from our servers if they inadvertently do so.

Fact: Organizations control the data they collect using Google Analytics.

  • Organizations use Google Analytics because they choose to do so. They, not Google, control what data is collected and how it is used.
  • They retain ownership of the data they collect using Google Analytics, and Google only stores and processes this data per their instructions — for example, to provide them with reports about how visitors use their sites and apps.
  • These organizations can, separately, elect to share their Analytics data with Google for one of a few specific purposes, including technical support, benchmarking, and sales support.
  • Organizations must take explicit action to allow Google to use their analytics data to improve or create new products and services. Such settings are entirely optional and require explicit opt-in.

Fact: Google Analytics helps customers with compliance by providing them with a range of controls and resources.

Fact: Google Analytics helps put users in control of their data.

  • Google makes products and features that are secure by default, private by design, and put users in control. That’s why we have long offered a browser add-on that enables users to disable measurement by Google Analytics on any site they visit.
  • Along with providing strong default protections, we aim to give people accessible, intuitive and useful controls so they can make choices that are right for them. For example, visitors can choose if and how Analytics cookies are used by websites they visit, or block all cookies on all or some websites.
  • In addition, organizations are required to give visitors proper notice about the implementations and features of Google Analytics that they use, and whether this data can be connected to other data they have about them.
  • These customers are also required to obtain consent from users for each visit, as required by applicable laws in their country.

Fact: Google Analytics cannot be used to show advertisements to people based on sensitive information like health, ethnicity, sexual orientation, etc.

  • Google Analytics does not serve ads at all. It is a web and app analytics tool. (You can read all about it here.)
  • Some organizations do use insights they’ve garnered via Google Analytics about their own sites and apps to inform their own advertising campaigns.
  • If a business also uses Google’s advertising platforms, it’s strictly required to follow Google’s advertising guidelines preventing the use of sensitive information to personalize ads — like health, race, religion, or sexual orientation. We never allow sensitive information to be used for personalized advertising. It’s simply off limits.

Fact: An organization’s Google Analytics data can only be transferred when specific and rigorous privacy conditions are met.

  • Google Analytics operates data centers globally, including in the United States, to maximize service speed and reliability. Before data is transferred to any servers in the United States, it is collected in local servers, where users’ IP addresses are anonymized (when the feature is enabled by customers).
  • The GDPR and European Court of Justice say that data can be transferred outside of the European Union for just this sort of reason, provided conditions are met.
  • In order to meet those conditions, we apply numerous measures, including:
    • Using data transfer agreements like EU Standard Contractual Clauses, which have been affirmed as a valid mechanism for transferring data to the United States, together with additional safeguards that keep data secure: industry-leading data encryption, physical security in our data centers and robust policies for handling government requests for user information.
    • Maintaining widely recognized, internationally accepted independent security standards like ISO 27001, which provides independent accreditation of our systems, applications, people, technology, processes and data centers.
    • Offering website owners a wide range of controls that they can use to keep their website visitors’ data safe and secure.
  • Our infrastructure and encryption are designed to protect data and safeguard it from any government access.

And we use robust technical measures (such as Application Layer Transport Security and HTTPS encryption) to protect against interception in transit within Google’s infrastructure, between data centers, and between users and websites, including surveillance attempts by government authorities around the world.

Flow and Redacted: Check out these new options for wireframes and other early-stage designs

Give your simulated text a realistic look while making it easy to add copy later on with Dan Ross’s Flow Fonts and Christian Naths’s Redacted.

Showing text in an early-stage wireframe can be distracting, even if it’s just Lorem ipsum placeholder copy. After all, a successful wireframe is clean and simple, with just enough information to communicate an idea. But how do you convey “this is text” without showing text? 

One popular technique is to draw shapes that resemble a block of redacted text. (Redacted text is usually used as a security or privacy measure in a document to make certain words unreadable.)

Another technique is to use handwritten scribbles. This creates a sketch-like look that’s especially suited to quick concepting.

Examples of text substitution styles used in wireframing. Left: Redacted Script, a handwritten scribble style. Right: Redacted text style. 

But instead of simulating redacted text with scribbles or shapes, now you can use a typeface to achieve the same effect. 

Flow Rounded in use

Flow Circular, Flow Block, and Flow Rounded from Dan Ross, and Redacted from Christian Naths, are four redacted text options. For a handwritten scribble style, try Naths’s Redacted Script, which is available in Light, Regular, and Bold.

Flow and Redacted not only make it easier to give your wireframes the look you want, they also make it easier to drop in copy later on (since you won’t have to replace shapes with text or switch out components). Plus, since fonts don’t destroy the underlying text data, all it takes is a single click to go from text to redacted text—and back again. 

All five fonts are available now on fonts.google.com.

Posted by Sarah Daily, Brand and Content Consultant

Making Open Source software safer and more secure

We welcomed the opportunity to participate in the White House Open Source Software Security Summit today, building on our work with the Administration to strengthen America’s collective cybersecurity through critical areas like open source software.

Industries and governments have been making strides to tackle the frequent security issues that plague legacy, proprietary software. The recent log4j open source software vulnerability shows that we need the same attention and commitment to safeguarding open source tools, which are just as critical.

Open source software code is available to the public, free for anyone to use, modify, or inspect. Because it is freely available, open source facilitates collaborative innovation and the development of new technologies to help solve shared problems. That’s why many aspects of critical infrastructure and national security systems incorporate it. But there’s no official resource allocation and few formal requirements or standards for maintaining the security of that critical code. In fact, most of the work to maintain and enhance the security of open source, including fixing known vulnerabilities, is done on an ad hoc, volunteer basis.

For too long, the software community has taken comfort in the assumption that open source software is generally secure due to its transparency and the assumption that “many eyes” were watching to detect and resolve problems. But in fact, while some projects do have many eyes on them, others have few or none at all.

At Google, we’ve been working to raise awareness of the state of open source security. We’ve invested millions in developing frameworks and new protective tools. We’ve also contributed financial resources to groups and individuals working on securing foundational open source projects like Linux. Just last year, as part of our $10 billion commitment to advancing cybersecurity, we pledged to expand the application of our Supply-chain Levels for Software Artifacts (SLSA, or “Salsa”) framework to protect key open source components. That includes $100 million to support independent organizations, like the Open Source Security Foundation (OpenSSF), that manage open source security priorities and help fix vulnerabilities.

But we know more work is needed across the ecosystem to create new models for maintaining and securing open source software. During today’s meeting, we shared a series of proposals for how to do this:

Identifying critical projects

We need a public-private partnership to identify a list of critical open source projects — with criticality determined based on the influence and importance of a project — to help prioritize and allocate resources for the most essential security assessments and improvements.

Longer term, we need new ways of identifying software that might pose a systemic risk — based on how it will be integrated into critical projects — so that we can anticipate the level of security required and provide appropriate resourcing.

Establishing security, maintenance & testing baselines

Growing reliance on open source means that it’s time for industry and government to come together to establish baseline standards for security, maintenance, provenance, and testing — to ensure national infrastructure and other important systems can rely on open source projects. These standards should be developed through a collaborative process, with an emphasis on frequent updates, continuous testing, and verified integrity.

Fortunately, the software community is off to a running start. Organizations like the OpenSSF are already working across industry to create these standards (including supporting efforts like our SLSA framework).

Increasing public and private support

Many leading companies and organizations don’t recognize how many parts of their critical infrastructure depend on open source. That’s why it’s essential that we see more public and private investment in keeping that ecosystem healthy and secure. In the discussion today, we proposed setting up an organization to serve as a marketplace for open source maintenance, matching volunteers from companies with the critical projects that most need support. Google stands ready to contribute resources to this effort.

Given the importance of digital infrastructure in our lives, it’s time to start thinking of it in the same way we do our physical infrastructure. Open source software is a connective tissue for much of the online world — it deserves the same focus and funding we give to our roads and bridges. Today’s meeting at the White House was both a recognition of the challenge and an important first step towards addressing it. We applaud the efforts of the National Security Council, the Office of the National Cyber Director, and DHS CISA in leading a concerted response to cybersecurity challenges and we look forward to continuing to do our part to support that work.