Copy your client-side encrypted Google Docs, Sheets, and Slides files

Quick summary 

If you have client-side encryption enabled for Docs, Sheets, and Slides, you can now make a copy of an existing encrypted document, spreadsheet, or presentation; encryption is preserved in the copy. This makes it easier to leverage existing content as a baseline for new encrypted Docs, Sheets, or Slides files. 



Getting started 


Rollout pace 


Availability 

  • Available to Google Workspace Enterprise Plus, Education Standard and Education Plus customers 
  • Not available to Google Workspace Essentials, Business Starter, Business Standard, Business Plus, Enterprise Essentials, Education Fundamentals, Frontline, and Nonprofits, as well as legacy G Suite Basic and Business customers 

Resources 

How Unni’s passion for social impact led him to Google

Welcome to the latest edition of “My Path to Google,” where we talk to Googlers, interns, apprentices and alumni about how they got to Google, what they do in their roles and how they prepared for their interviews.

In celebration of Asian Pacific American Heritage Month, today’s post features Unni Nair, a senior research strategist on Google’s Responsible Innovation team. As a second-generation Indian American, Unni’s background has helped shape his passion for sustainability and responsible artificial intelligence (AI).

What’s your role at Google?

I’m a senior research strategist on the Responsible Innovation team. In this role, I use Google’s AI Principles to help our teams build products that are both helpful and socially responsible. More specifically, I’m passionate about how we can proactively incorporate responsible AI into emerging technologies to drive sustainable development priorities. For example, I’ve been working with the Google Earth Engine team to align their work with our AI Principles, which we spoke about in a workshop at Google I/O. I helped the team develop a data set — used by governments, companies and researchers — to efficiently display information related to conservation, biodiversity, agriculture and forest management efforts.

Can you tell us a bit about yourself?

I was born in Scranton, Pennsylvania, but I lived in many different parts of the U.S., and often traveled internationally, throughout my childhood. Looking back, I realize how fortunate I was to live in and learn from so many different communities at such a young age. As a child of Indian immigrants, I was exposed to diverse ways of life and various forms of inequity. These experiences gave me a unique perspective on the world, helping me see the potential in every human being and nurturing a sense of duty to uplift others. It took dabbling in fields from social work to philosophy, and making lots of mistakes along the way, to figure out how to turn this passion into impact.

In honor of Asian Pacific American Heritage Month, how else has your background influenced your work?

I’m grateful for having roots in the 5,000+ year-old Indian civilization and am constantly reminded of its value working in Silicon Valley. One notable example that’s influenced my professional life is the concept of Ahimsa — the ethical principle of not causing harm to other living things. While its historical definition is more spiritual, in modern-day practice I’ve found it nurtures a respect for nature and a passion for sustainability and human rights in business. This contemporary interpretation of Ahimsa also encourages me to consider the far-reaching impacts — for better or for worse — that technology can have on people, the environment or society at large.

How did you ultimately end up at Google?

I was itching to work on more technology-driven solutions to global sustainability issues. I started to see that many of the world’s challenges are in part driven by macro forces like rapid globalization and technology growth. However, the sustainability and development sectors were slow to move beyond analog problem solving. I wanted to explore unconventional solutions like artificial intelligence, which is why I taught myself the Python programming language and learned more about AI. I started hearing about Google’s AI-first approach to helping users and society, with an emphasis on the need to develop that technology responsibly. So I applied to the Responsible Innovation team for the chance to create helpful technology with social benefit in mind.

Any advice for aspiring Googlers?

Google is one of those rare places where the impact you’re making isn’t just on a narrow band of users — it’s on society at large. So, take the time to reflect on what sort of impact you want to make in the world. Knowing your answer to that question will allow you to weave your past experiences into a cohesive narrative during the interview process. And more importantly, it will also serve as your personal guide when making important decisions throughout your career.

Celebrating 10 years of Google for Startups in the UK

I remember clearly the palpable sense of excitement at the Google for Startups Campus in London’s ‘Silicon Roundabout’ when I first visited in 2012. My first startup, back in Krakow, Poland, had shut down after three years of solid early traction, and I moved to London in pursuit of bigger opportunities, a community and capital to fuel growth. The UK quickly became home, and my London Campus experience was so positive that I ended up joining Google six years later.

As we celebrate the 10-year anniversary of Google for Startups UK, we’re taking a moment to recognize the entrepreneurs and teams who have blazed a trail, and looking ahead to ensure we’re helping create the right conditions for future founders.

The industry has grown exponentially since Google for Startups UK launched 10 years ago – this year, we’ve already seen UK tech startups and scaleups cumulatively valued at more than $1 trillion (£794bn), up from $53.6 billion (£46bn) ten years ago.

One area of the UK tech startup community that has flourished in particular is impact tech, defined as companies founded to help address global challenges like climate change and to help transform health, education and financial inclusion. Our new report created in partnership with Tech Nation, A Decade of UK Tech, shows that funding for impact tech startups has soared. In fact, since 2011, funding for impact tech companies addressing UN Sustainable Development Goals has risen 43-fold from just $74 million (£59 million) to $3.5 billion (£2.8 billion).

Graph 1: Investment into impact tech scaleups (2011-2021)

Source: Tech Nation, Dealroom, 2022

Startups are helping to solve global challenges, like climate change, education, health, food and sanitation, with agility, innovation and determination. And at Google for Startups, we’re proud to be supporting these businesses along the way by connecting founders with the right people, products and practices to help them grow. Because their continued success is vital not just for the UK’s future, but that of the world.

"Enduring market barriers and perceptions of high risk can slow private sector investment. But even such challenges create a multitude of new opportunities for tech startups to leverage the UK's position as a financial services powerhouse."

- Elizabeth Nyeko, founder of Modularity Grid, a deep tech startup

Google for Startups was launched in the UK with a mission to support a thriving, diverse and inclusive startup community. Here’s where we are a decade later:

  • Startups in our community have created more than 24,000 jobs
  • Startups in our network have raised £358 million
  • We supported 20 UK-based Black-led startups with the Google for Startups Black Founders Fund in Europe. Last year's European cohort went on to raise £64 million in subsequent funding and increase their headcount by 21%

Our work at Google for Startups is far from over. We’re committed to levelling the playing field for all founders, and closing the disproportionate gap in access to capital and support networks for underrepresented communities. For the impact tech sector to continue to grow and succeed, we must ensure funding is channeled towards the most innovative startups - no matter their valuation, funding stage or background.

Find out more at Google for Startups.

Chrome Beta for Android Update

Hi everyone! We've just released Chrome Beta 103 (103.0.5060.22) for Android. It's now available on Google Play.

You can see a partial list of the changes in the Git log. For details on new features, check out the Chromium blog, and for details on web platform updates, check here.

If you find a new issue, please let us know by filing a bug.

Krishna Govind
Google Chrome

Deep Learning with Label Differential Privacy

Over the last several years, there has been an increased focus on developing differentially private (DP) machine learning (ML) algorithms. DP has been the basis of several practical deployments in industry — and has even been employed by the U.S. Census — because it enables the understanding of system and algorithm privacy guarantees. The underlying assumption of DP is that changing a single user’s contribution to an algorithm should not significantly change its output distribution.

In the standard supervised learning setting, a model is trained to make a prediction of the label for each input given a training set of example pairs {(input_1, label_1), …, (input_n, label_n)}. In the case of deep learning, previous work introduced a DP training framework, DP-SGD, that was integrated into TensorFlow and PyTorch. DP-SGD protects the privacy of each example pair (input, label) by adding noise to the stochastic gradient descent (SGD) training algorithm. Yet despite extensive efforts, in most cases, the accuracy of models trained with DP-SGD remains significantly lower than that of non-private models.
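
To make the mechanism concrete, here is a minimal NumPy sketch of a single DP-SGD step, clipping each per-example gradient and adding Gaussian noise before the update. The clipping norm and noise multiplier shown are illustrative values, not parameters from this post:

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, lr=0.1, clip_norm=1.0, noise_mult=1.1):
    # Clip each per-example gradient to L2 norm at most clip_norm (C).
    clipped = [g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
               for g in per_example_grads]
    # Sum the clipped gradients and add Gaussian noise with scale sigma * C.
    grad_sum = np.sum(clipped, axis=0)
    noise = np.random.normal(0.0, noise_mult * clip_norm, size=grad_sum.shape)
    # Average and take an ordinary SGD step on the noisy gradient.
    return params - lr * (grad_sum + noise) / len(per_example_grads)
```

The per-example clipping step is what drives the extra computation and memory overhead mentioned below.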

DP algorithms include a privacy budget, ε, which quantifies the worst-case privacy loss for each user. Specifically, ε reflects how much the probability of any particular output of a DP algorithm can change if one replaces any example of the training set with an arbitrarily different one. So, a smaller ε corresponds to better privacy, as the algorithm is more indifferent to changes of a single example. However, since smaller ε tends to hurt model utility more, it is not uncommon to consider ε up to 8 in deep learning applications. Notably, for the widely used multiclass image classification dataset, CIFAR-10, the highest reported accuracy (without pre-training) for DP models with ε = 3 is 69.3%, a result that relies on handcrafted visual features. In contrast, non-private scenarios (ε = ∞) with learned features have been shown to achieve >95% accuracy while using modern neural network architectures. This performance gap remains a roadblock for many real-world applications to adopt DP. Moreover, despite recent advances, DP-SGD often comes with increased computation and memory overhead due to slower convergence and the need to compute the norm of the per-example gradient.
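
For reference, the guarantee that ε quantifies can be written out explicitly; this is the textbook formulation of ε-DP rather than a statement from this post. A randomized algorithm A is ε-DP if, for any two training sets D and D′ that differ in a single example, and any set S of possible outputs,

$$\Pr[\mathcal{A}(D) \in S] \;\le\; e^{\varepsilon} \cdot \Pr[\mathcal{A}(D') \in S].$$

A smaller ε forces the two output distributions to be closer, matching the intuition above; ε = ∞ imposes no constraint at all, which is why the non-private setting is denoted ε = ∞.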

In “Deep Learning with Label Differential Privacy”, presented at NeurIPS 2021, we consider a more relaxed, but important, special case called label differential privacy (LabelDP), where we assume the inputs (input_1, …, input_n) are public, and only the privacy of the training labels (label_1, …, label_n) needs to be protected. With this relaxed guarantee, we can design novel algorithms that utilize a prior understanding of the labels to improve the model utility. We demonstrate that LabelDP achieves 20% higher accuracy than DP-SGD on the CIFAR-10 dataset. Our results across multiple tasks confirm that LabelDP could significantly narrow the performance gap between private models and their non-private counterparts, mitigating the challenges in real-world applications. We also present a multi-stage algorithm for training deep neural networks with LabelDP. Finally, we are excited to release the code for this multi-stage training algorithm.

LabelDP
The notion of LabelDP has been studied in the Probably Approximately Correct (PAC) learning setting, and captures several practical scenarios. Examples include: (i) computational advertising, where impressions are known to the advertiser and thus considered non-sensitive, but conversions reveal user interest and are thus private; (ii) recommendation systems, where the choices are known to a streaming service provider, but the user ratings are considered sensitive; and (iii) user surveys and analytics, where demographic information (e.g., age, gender) is non-sensitive, but income is sensitive.

We make several key observations in this scenario. (i) When only the labels need to be protected, much simpler algorithms can be applied for data preprocessing to achieve LabelDP without any modifications to the existing deep learning training pipeline. For example, the classic Randomized Response (RR) algorithm, designed to eliminate evasive answer biases in survey aggregation, achieves LabelDP by simply flipping the label to a random one with a probability that depends on ε. (ii) Conditioned on the (public) input, we can compute a prior probability distribution, which provides a prior belief of the likelihood of the class labels for the given input. With a novel variant of RR, RR-with-prior, we can incorporate prior information to reduce the label noise while maintaining the same privacy guarantee as classical RR.
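
As a concrete illustration, classical RR over k classes can be implemented in a few lines. This is the standard formulation, with retention probability e^ε / (e^ε + k − 1), rather than code from our release:

```python
import numpy as np

def randomized_response(true_label, num_classes, epsilon):
    # Keep the true label with probability e^eps / (e^eps + k - 1);
    # otherwise return one of the other k - 1 labels uniformly at random.
    # The ratio between these two probabilities is exactly e^eps,
    # which is what gives eps-LabelDP.
    k = num_classes
    p_true = np.exp(epsilon) / (np.exp(epsilon) + k - 1)
    if np.random.rand() < p_true:
        return true_label
    others = [c for c in range(k) if c != true_label]
    return int(np.random.choice(others))
```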

The figure below illustrates how RR-with-prior works. Assume a model is built to classify an input image into 10 categories. Consider a training example with the label “airplane”. To guarantee LabelDP, classical RR returns a random label sampled according to a given distribution (see the top-right panel of the figure below). The smaller the targeted privacy budget ε is, the larger the probability of sampling an incorrect label has to be. Now assume we have a prior probability showing that the given input is “likely an object that flies” (lower left panel). With the prior, RR-with-prior will discard all labels with small prior and only sample from the remaining labels. By dropping these unlikely labels, the probability of returning the correct label is significantly increased, while maintaining the same privacy budget ε (lower right panel).

Randomized response: If no prior information is given (top-left), all classes are sampled with equal probability. The probability of sampling the true class (P[airplane] ≈ 0.5) is higher if the privacy budget is higher (top-right). RR-with-prior: Assuming a prior distribution (bottom-left), unlikely classes are “suppressed” from the sampling distribution (bottom-right). So the probability of sampling the true class (P[airplane] ≈ 0.9) is increased under the same privacy budget.
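
A minimal sketch of RR-with-prior, following the description above: keep only the labels with the largest prior mass, choosing the subset size that maximizes the chance of returning the true label, then run RR restricted to that subset. This follows the RRWithPrior construction from our paper in outline, but the details here are a simplification; `prior` is assumed to be a NumPy array of class probabilities:

```python
import numpy as np

def rr_with_prior(true_label, prior, epsilon):
    # Sort labels by decreasing prior probability.
    order = np.argsort(prior)[::-1]
    # Choose the subset size k that maximizes the expected probability of
    # returning the true label: P[RR keeps a kept label] * (top-k prior mass).
    best_k, best_obj = 1, -1.0
    for k in range(1, len(prior) + 1):
        obj = np.exp(epsilon) / (np.exp(epsilon) + k - 1) * prior[order[:k]].sum()
        if obj > best_obj:
            best_k, best_obj = k, obj
    kept = [int(c) for c in order[:best_k]]
    if true_label in kept:
        # Randomized response restricted to the kept labels.
        p_true = np.exp(epsilon) / (np.exp(epsilon) + best_k - 1)
        if np.random.rand() < p_true:
            return true_label
        others = [c for c in kept if c != true_label]
        return int(np.random.choice(others))
    # The true label was discarded: sample uniformly from the kept labels.
    return int(np.random.choice(kept))
```

The output probabilities for any two true labels still differ by at most a factor of e^ε, so the privacy budget is unchanged while the chance of returning the correct label grows.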

A Multi-stage Training Algorithm
Based on the RR-with-prior observations, we present a multi-stage algorithm for training deep neural networks with LabelDP. First, the training set is randomly partitioned into multiple disjoint subsets. An initial model is trained on the first subset using classical RR. Then, at each subsequent stage, a single additional subset is used to train the model: its labels are produced using RR-with-prior, with the priors based on the predictions of the model trained so far.

An illustration of the multi-stage training algorithm. The training set is partitioned into t disjoint subsets. An initial model is trained on the first subset using classical RR. Then the trained model is used to provide prior predictions in the RR-with-prior step and in the training of the later stages.
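
Putting the pieces together, the full loop might look like the following sketch, reusing the randomized_response and rr_with_prior helpers above. Here train_fn (an ordinary, non-private training routine) and the model's predict_proba method are hypothetical stand-ins, not APIs from our released code:

```python
import numpy as np

def multi_stage_labeldp(inputs, labels, num_classes, epsilon, num_stages, train_fn):
    # Partition the training set into num_stages disjoint subsets.
    parts = np.array_split(np.random.permutation(len(inputs)), num_stages)

    # Stage 1: train an initial model on labels randomized with classical RR.
    noisy = {int(i): randomized_response(labels[i], num_classes, epsilon)
             for i in parts[0]}
    model = train_fn([inputs[i] for i in noisy], list(noisy.values()))

    # Later stages: randomize each new subset with RR-with-prior, using the
    # current model's predictions as the prior, then retrain on all data so far.
    for part in parts[1:]:
        for i in part:
            prior = model.predict_proba(inputs[i])  # hypothetical model API
            noisy[int(i)] = rr_with_prior(labels[i], prior, epsilon)
        model = train_fn([inputs[i] for i in noisy], list(noisy.values()))
    return model
```

Note that each label is randomized exactly once, so the overall label privacy budget remains ε.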

Results
We benchmark the multi-stage training algorithm’s empirical performance on multiple datasets, domains, and architectures. On the CIFAR-10 multi-class classification task for the same privacy budget ε, the multi-stage training algorithm (blue in the figure below) guaranteeing LabelDP achieves 20% higher accuracy than DP-SGD. We emphasize that LabelDP protects only the labels while DP-SGD protects both the inputs and labels, so this is not a strictly fair comparison. Nonetheless, this result demonstrates that for specific application scenarios where only the labels need to be protected, LabelDP could lead to significant improvements in the model utility while narrowing the performance gap between private models and non-private baselines.

Comparison of the model utility (test accuracy) of different algorithms under different privacy budgets.

In some domains, prior knowledge is naturally available or can be built using publicly available data only. For example, many machine learning systems have historical models which could be evaluated on new data to provide label priors. In domains where unsupervised or self-supervised learning algorithms work well, priors could also be built from models pre-trained on unlabeled (therefore public with respect to LabelDP) data. Specifically, we demonstrate two self-supervised learning algorithms in our CIFAR-10 evaluation (orange and green traces in the figure above). We use self-supervised learning models to compute representations for the training examples and run k-means clustering on the representations. Then, we spend a small amount of privacy budget (ε ≤ 0.05) to query a histogram of the label distribution of each cluster and use that as the label prior for the points in each cluster. This prior significantly boosts the model utility in the low privacy budget regime (ε < 1).
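
One way to build such cluster-based priors is sketched below, under the assumption that each label is privatized with classical RR at the small histogram budget before aggregation; this is one simple mechanism for the private histogram query, and the exact mechanism in our paper may differ:

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_label_priors(representations, labels, num_classes,
                         num_clusters=100, epsilon_hist=0.05):
    # Cluster the (public) self-supervised representations with k-means.
    clusters = KMeans(n_clusters=num_clusters, n_init=10).fit_predict(representations)
    # Build a per-cluster label histogram from RR-randomized labels, spending
    # only epsilon_hist of privacy budget on each label.
    counts = np.zeros((num_clusters, num_classes))
    for c, y in zip(clusters, labels):
        counts[c, randomized_response(y, num_classes, epsilon_hist)] += 1
    # Normalize each cluster's histogram into a prior distribution.
    priors = counts / np.maximum(counts.sum(axis=1, keepdims=True), 1)
    return clusters, priors  # priors[clusters[i]] is the prior for example i
```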

Similar observations hold across multiple datasets, such as MNIST and Fashion-MNIST, and in non-vision domains, such as the MovieLens-1M movie rating task. Please see our paper for the full report on the empirical results.

The empirical results suggest that protecting the privacy of the labels can be significantly easier than protecting the privacy of both the inputs and labels. This can also be mathematically proven under specific settings. In particular, we can show that for convex stochastic optimization, the sample complexity of algorithms privatizing the labels is much smaller than that of algorithms privatizing both labels and inputs. In other words, to achieve the same level of model utility under the same privacy budget, LabelDP requires fewer training examples.

Conclusion
Both our empirical and theoretical results suggest that LabelDP is a promising relaxation of the full DP guarantee. In applications where the privacy of the inputs does not need to be protected, LabelDP could reduce the performance gap between a private model and the non-private baseline. For future work, we plan to design better LabelDP algorithms for other tasks beyond multi-class classification. We hope that the release of the multi-stage training algorithm code provides researchers with a useful resource for DP research.

Acknowledgements
This work was carried out in collaboration with Badih Ghazi, Noah Golowich, and Ravi Kumar. We also thank Sami Torbey for valuable feedback on our work.

Source: Google AI Blog


Embed content as a full page in new Google Sites

Quick summary 

Site editors using new Google Sites can now add content as a full page from the following sources: custom code, other websites, and Google apps, such as Maps and Docs. Previously, editors could only add these elements as part of a page. This update provides more flexibility to organize and display embedded content on your site. 



Getting started 

  • Admins: There is no admin control for this feature. 
  • End users: In a new Site, navigate to Pages > New page > Full page embed. Name the page and then add a URL, embed code, or embed content from another Google app. Visit the Help Center to learn more about adding Google files, video & more. 

Rollout pace 


Availability 

  • Available to all Google Workspace customers, as well as legacy G Suite Basic and Business customers 
  • Available to users with personal Google accounts 

Resources 

Experts. Anyone. Anywhere.

Posted by Janelle Kuhlman, Developer Relations Program Manager


The Google Developer Experts program is a global network of highly experienced technology experts, developers and thought leaders. GDEs share their expertise with other developers and tech communities in a variety of ways, such as speaking engagements, mentorship and content writing. The community has access to an exclusive network of experts spanning different Google technologies, including Android, Cloud, Machine Learning and more.

Get to know our diverse community and subscribe to the Google Developers YouTube Channel to stay informed on the latest updates across our products and platforms!

Add shared drives to specific organizational units

What’s changing 

For select Google Workspace editions, admins can now place shared drives into sub organizational units (OUs). Doing so enables admins to configure sharing policies, data regions, access management, and more at a granular level. 


This feature is available now as an open beta, which means you can use the feature without opting-in to a specific program. 


Who’s impacted 

Admins and end users 


Why it matters 

Currently, all shared drives reside in the “root” OU and are therefore subject to the same policies. This update gives admins the option to move shared drives to sub OUs within their organizations, such as Marketing or Legal, allowing more control over the privacy and security of each shared drive's contents on a case-by-case basis. For example, admins can restrict sharing of a shared drive belonging to the legal department because it contains highly confidential information. Additionally, admins gain more flexibility in applying default sub OUs to newly created shared drives, ensuring each new shared drive is subject to appropriate security policies. 


With this update, admins have greater control and more options for managing how their data is accessed and shared. 

Getting started 

  • Admins: Admins can assign shared drives to various OUs using the new “Organizational Unit” column found in Apps > Google Workspace > Drive and Docs > Manage Shared Drives. Visit the Help Center to learn more about shared drives and managing shared drive users and activity.





  • End users: There is no end user setting for this feature — the ability to access or share certain files contained in a shared drive will vary. Visit the Help Center to learn more about sharing files in Google Drive.

Availability 

  • Available to Google Workspace Essentials, Business Standard, Business Plus, Enterprise Standard, Enterprise Plus, Education Fundamentals, Education Standard, Education Plus, the Teaching and Learning Upgrade, and Nonprofits customers 
  • Not available to Google Workspace Business Starter, Enterprise Essentials, Frontline, as well as legacy G Suite Basic and Business customers 
  • Not available to users with personal Google Accounts 

Dev Channel Update for Desktop

The Dev channel has been updated to 103.0.5060.24 for Windows, Mac and Linux.

A partial list of changes is available in the log. Interested in switching release channels? Find out how. If you find a new issue, please let us know by filing a bug. The community help forum is also a great place to reach out for help or learn about common issues.

Prudhvikumar Bommana

Google Chrome

Expanded Text Ads (ETA) Creation and Editing will Cease on June 30, 2022

As previously announced, starting June 30, 2022:
  • You will no longer be able to create or edit Expanded Text Ads
  • If you attempt to create an ETA you will receive the error CANNOT_CREATE_DEPRECATED_ADS
  • If you attempt to modify an ETA you will receive the error CANNOT_MODIFY_AD
  • Expanded text ads will continue to serve, and you will still see reports on their performance going forward
  • You will be able to pause and resume your expanded text ads, or remove them if needed (see the sketch below)
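
For reference, pausing an existing expanded text ad through the Google Ads API's Python client might look like the following sketch. It follows the library's published mutate pattern, but the helper name pause_expanded_text_ad is ours, and the client version and setup details are assumptions rather than anything stated in this announcement:

```python
from google.ads.googleads.client import GoogleAdsClient
from google.api_core import protobuf_helpers

def pause_expanded_text_ad(client: GoogleAdsClient, customer_id, ad_group_id, ad_id):
    service = client.get_service("AdGroupAdService")
    operation = client.get_type("AdGroupAdOperation")

    # Build an update that only flips the ad's status to PAUSED.
    ad_group_ad = operation.update
    ad_group_ad.resource_name = service.ad_group_ad_path(
        customer_id, ad_group_id, ad_id
    )
    ad_group_ad.status = client.enums.AdGroupAdStatusEnum.PAUSED
    client.copy_from(
        operation.update_mask,
        protobuf_helpers.field_mask(None, ad_group_ad._pb),
    )

    response = service.mutate_ad_group_ads(
        customer_id=customer_id, operations=[operation]
    )
    print(f"Paused ad: {response.results[0].resource_name}")
```
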
We encourage you to transition to Responsive Search Ads (RSA).

If you have any questions, please contact us on the forum.