Monthly Archives: April 2023

Google Workspace Updates Weekly Recap – April 28, 2023

4 New updates

Unless otherwise indicated, the features below are fully launched or in the process of rolling out (rollouts should take no more than 15 business days to complete), launching to both Rapid and Scheduled Release at the same time (if not, each stage of rollout should take no more than 15 business days to complete), and available to all Google Workspace and G Suite customers.


Updates to image insertion in Google Sheets on Android 
You can now drag and drop or copy/paste an image into Google Sheets on Android as an over-grid image rather than an in-cell image. You’re also now able to convert over-grid images into in-cell images via the context menu. | Rolling out to Rapid Release domains now; launch to Scheduled Release domains planned for May 8, 2023. | Learn more about adding an image to a spreadsheet.
Replace images quicker in Google Slides with new drag and drop feature
Previously, to replace an image in Google Slides, you could either use the menu toolbar or right-click on the image you wanted to replace and select “Replace image.” Starting this week, you’ll have the additional option to easily drag and drop images from anywhere to replace images in your Slides presentations. | Rolling out to Rapid Release domains now; launch to Scheduled Release domains planned for May 9, 2023. | Learn more about inserting or deleting images and videos
Attach a document, spreadsheet, or presentation to a Google Calendar event directly from Meet in Docs, Sheets, and Slides
You can already share a file in a Google Meet chat when using Meet in Docs, Sheets, and Slides. With this launch, you can easily attach that file to the associated Google Calendar event, allowing meeting attendees to access the file more easily. | Available to Google Workspace Business Standard, Business Plus, Enterprise Essentials, Enterprise Standard, Enterprise Plus, Education Standard, Education Plus, the Teaching and Learning Upgrade, and Nonprofits customers only. | Learn more about using Google Meet with Google Docs, Sheets, Slides, & Jamboard


Add emoji reactions to existing comments in Google Docs
Last year, we introduced an emoji reaction feature that provides a less formal alternative to comments in Google Docs. We’re building upon this by giving you the ability to add emoji reactions to existing comments in Docs. This new feature increases collaboration by enabling you to quickly and creatively express your opinions about document content. | Rolling out to Rapid Release domains now; launch to Scheduled Release domains planned for May 3, 2023. | Learn more about using comments, action items, & emoji reactions



Previous announcements

The announcements below were published on the Workspace Updates blog earlier this week. Please refer to the original blog posts for complete details.


Enhancing tool discovery in Google Docs, Sheets, and Slides
We’ve rolled out an enhanced tool finder at the top of Google Docs, Sheets, and Slides to make it easier for users to discover commonly used tools and features. These refined tool-finding capabilities aim to help you quickly locate relevant features or functionality using your own words. | Learn more about enhanced tool discovery in Google Docs, Sheets, and Slides


Full HD in Google Meet video calls
For select Google Workspace editions, you can set your Google Meet video resolution to 1080p. This resolution is available on the web when using a computer with a 1080p camera and enough computing power in meetings with two participants. | Available to Google Workspace Business Standard, Business Plus, Enterprise Starter, Enterprise Standard, Enterprise Plus, the Teaching and Learning Upgrade, Education Plus, Enterprise Essentials and Frontline customers only. Also available to Google One subscribers with 2TB or more storage space with eligible devices. | Learn more about full HD in Google Meet video calls


Introducing additional smart chip functionality in Google Sheets
We’re expanding YouTube chips to Sheets to help you more easily manage YouTube content. You can also now insert multiple smart chips and text into a single cell using the @ menu. | Learn more about additional smart chip functionality in Google Sheets.


New Alert Center notifications for Apple push certificates
The Apple Push Notification Service (APNS) certificate is a critical component for advanced mobile management for iOS devices. This certificate expires yearly and requires manual renewal. | Available to Google Workspace Business Plus, Enterprise Essentials, Enterprise Standard, Enterprise Plus, Education Standard, Education Plus, The Teaching and Learning Upgrade, Education Fundamentals, Frontline, and Cloud Identity Premium customers only. | Learn more about new Alert Center notifications for Apple push certificates.


Set a custom time period for messages to automatically be deleted in Google Chat
For select Google Workspace editions, admins can now define a duration after which their users' messages in Google Chat will be deleted automatically. | Available to Google Workspace Business Plus, Enterprise Standard, Enterprise Plus, Education Standard, and Education Plus customers only. | Learn more about setting a custom time period for messages to automatically be deleted in Google Chat


Set Full HD in Google Meet live streams
For select Google Workspace editions, we’re adding 1080p as a resolution option for live streaming video, an increase from 720p. | Available to Google Workspace Enterprise Essentials, Enterprise Plus, Enterprise Standard, Education Plus, and Teaching and Learning Upgrade customers only. | Learn more about full HD in Google Meet live streams.


Completed rollouts

The features below completed their rollouts to Rapid Release domains, Scheduled Release domains, or both. Please refer to the original blog post for additional details.


Full HD in Google Meet live streams

What’s changing

Recently, we announced full HD Google Meet video calls, allowing you to set your Google Meet video resolution to 1080p. Similarly, we’re adding 1080p as a resolution option for live streaming video, an increase from 720p. If there is not enough bandwidth, the video feed will automatically revert to the best quality possible. This update delivers a higher quality, crisp video experience for those viewing a live stream.


Getting started


Rollout pace


Availability

  • Available for Google Workspace Enterprise Starter, Enterprise Plus, Enterprise Standard, Education Plus, and Teaching and Learning Upgrade


Resources


Set a custom time period for messages to automatically be deleted in Google Chat

What’s changing

For select Google Workspace editions, admins can now define a duration after which their users' messages in Google Chat will be deleted automatically. This can apply to messages in 1:1 conversations, group conversations, and spaces — time periods can be assigned for each message type. Note that this retention period only applies to messages sent when history is enabled. The auto-deletion timeframe can range from 30 days to several years.





Who’s impacted

Admins and end users


Why it’s important

Currently, admins have limited control over the history duration of conversations in Google Chat: with history off, messages are deleted after 24 hours; with history on, messages remain visible indefinitely unless proactively deleted by a Vault retention policy or by the user.

This update gives admins more granular control over how long their users can see messages in conversations. For end users, this helps unclutter conversations while still complying with retention requirements (if a retention policy is applied). If you’re using the auto-deletion policy combined with a Vault retention policy, the Vault policy prevails. For more information, see this article in our Help Center.


Getting started


Rollout pace


Availability

  • Available to Google Workspace Business Plus, Enterprise Standard, Enterprise Plus, Education Standard, and Education Plus customers.

Resources


New Alert Center notifications for Apple push certificates

What’s changing 

The Apple Push Notification Service (APNS) certificate is a critical component for advanced mobile management for iOS devices. This certificate expires yearly and requires manual renewal. If you don't renew the certificate, your organization’s iOS devices will not be able to access Google Workspace applications after the certificate expires. To help you stay on top of the renewal period and take action in a timely manner, we will:

Notify you via the Alert Center and email when: 
  • Your certificate is 30 days, 10 days, or 1 day from its expiration date.
  • Your certificate has expired. 








Getting started 

  • Admins: 
  • End users: There is no end user impact or action required.


Rollout pace 


Availability 

  • Google Workspace Business Plus, Enterprise Essentials, Enterprise Standard, Enterprise Plus, Education Standard, Education Plus, The Teaching and Learning Upgrade, Education Fundamentals, Frontline, and Cloud Identity Premium customers 

Resources 

A Tool for Exploring and Testing Google Workspace APIs

Posted by Chanel Greco, Developer Advocate Google Workspace

We recently launched the Google Workspace APIs Explorer, a new tool to help streamline developing on the Google Workspace Platform. What is this handy tool and how can you start using it?

The Google Workspace APIs Explorer is a tool that allows you to explore and test Google Workspace APIs without having to write any code. It's a great way to get familiar with the capabilities of the many Google Workspace APIs.

The Google Workspace APIs Explorer is a web-based tool that allows you to interact with Google Workspace APIs in a visual way.

Screenshot of the Google Workspace APIs Explorer

How to use the Google Workspace APIs Explorer

To use this tool, simply navigate to the Google Workspace APIs Explorer page and select the API that you want to explore. The Google Workspace APIs Explorer will then display a list of all the methods available for that API. You can click on any method to see more information about it, including its parameters, responses, and examples.

To test an API method, simply enter the required parameters and click on the "Execute" button. The Google Workspace APIs Explorer will then send the request to the API and return the response. Please note, the tool acts on real data and authenticates with your Google Account, so use caution when trying methods that create, modify, or delete data.
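
If you later want to make the same call from code, the request you tested in the Explorer maps directly onto the client libraries. Below is a minimal Python sketch (not official sample code) for the Google Sheets API spreadsheets.get method using the google-api-python-client package; the spreadsheet ID and API key are placeholders, and an API key only works for publicly readable spreadsheets, so private files would need OAuth credentials instead.

```python
# A minimal sketch: the same spreadsheets.get call the APIs Explorer issues,
# made with the google-api-python-client library
# (pip install google-api-python-client). SPREADSHEET_ID and API_KEY are
# placeholders; an API key only reaches publicly readable spreadsheets.
from googleapiclient.discovery import build

SPREADSHEET_ID = "your-spreadsheet-id"   # placeholder
API_KEY = "your-api-key"                 # placeholder

service = build("sheets", "v4", developerKey=API_KEY)
spreadsheet = service.spreadsheets().get(spreadsheetId=SPREADSHEET_ID).execute()

# The response is a plain dict, mirroring what the Explorer displays.
print(spreadsheet["properties"]["title"])
```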

Screenshot of the Google Sheets API get method in the Google Workspace APIs Explorer

How you can benefit from using the Google Workspace APIs Explorer

These are some of the benefits of using the Google Workspace APIs Explorer:

  • You can browse and discover the 25+ different Google Workspace APIs.
  • The tool can help you create code samples for your integrations or add-ons.
  • It can assist with troubleshooting problems with Google Workspace APIs.
  • It is a neat way to see the results of API requests in real time.

Getting started

You can access the Google Workspace APIs Explorer tool on the Google Workspace for Developers documentation, either through the navigation (Resources > API Explorer), or on its dedicated page. You will need a Google account to use the tool. This account can either be a Google Workspace account or the Google account you use for accessing tools like Gmail, Drive, Docs, Calendar, and more.

We also have a video showing how you can get started using the Google Workspace APIs Explorer – check it out here!

Secure mobile payment transactions enabled by Android Protected Confirmation

Unlike other mobile OSes, Android is built with a transparent, open-source architecture. We firmly believe that our users and the mobile ecosystem at-large should be able to verify Android’s security and safety and not just take our word for it.

We’ve demonstrated our deep belief in security transparency by investing in features that enable users to confirm that what they expect is happening on their device is actually happening.

The Assurance of Android Protected Confirmation

One of those features is Android Protected Confirmation, an API that enables developers to utilize Android hardware to provide users even more assurance that a critical action has been executed securely. Using a hardware-protected user interface, Android Protected Confirmation can help developers verify a user’s action intent with a very high degree of confidence. This can be especially useful in a number of user moments – like during mobile payment transactions - that greatly benefit from additional verification and security.

We’re excited to see that Android Protected Confirmation is now gaining ecosystem attention as an industry-leading method for confirming critical user actions via hardware. Recently, UBS Group AG and the Bern University of Applied Sciences, co-financed by Innosuisse and UBS Next, announced they’re working with Google on a pilot project to establish Protected Confirmation as a common application programming interface (API) standard. In a pilot planned for 2023, UBS online banking customers with Pixel 6 or 7 devices can use Android Protected Confirmation backed by StrongBox, a certified hardware vault with physical attack protections, to confirm payments and verify online purchases through a hardware-based confirmation in their UBS Access App.


Demonstrating Real-World Use for Android Protected Confirmation

We’ve been working closely with UBS to bring this pilot to life and ensure they’re able to test it on Google Pixel devices. Demonstrating real-world use cases that are enabled by Android Protected Confirmation unlocks the promise of this technology by delivering improved and innovative experiences for our users. We’re seeing interest in Android Protected Confirmation across the industry, and OEMs are increasingly looking at how to build even more hardware-based confirmation into critical security user moments. We look forward to forming an industry alliance that will work together to strengthen mobile security and home in on Protected Confirmation support.

Sensenmann: Code Deletion at Scale

By Phil Norman

Code at Scale

At Google, tens of thousands of software engineers contribute to a multi-billion-line mono-repository. This repository, stored in a system called Piper, contains the source code of shared libraries, production services, experimental programs, diagnostic and debugging tools: basically anything that's code-related.


This open approach can be very powerful. For example, if an engineer is unsure how to use a library, they can find examples just by searching. It also allows kind-hearted individuals to perform important updates across the whole repository, be that migrating to newer APIs, or following language developments such as Python 3, or Go generics.


Code, however, doesn't come for free: it's expensive to produce, but also costs real engineering time to maintain. Such maintenance cannot easily be skipped, at least if one wants to avoid larger costs later on.


But what if there were less code to maintain? Are all those lines of code really necessary?


Deletion at Scale

Any large project accumulates dead code: there's always some module that is no longer needed, or a program that was used during early development but hasn't been run in years. Indeed, entire projects are created, function for a time, and then stop being useful. Sometimes they are cleaned up, but cleanups require time and effort, and it's not always easy to justify the investment.


However, while this dead code sits around undeleted, it's still incurring a cost: the automated testing system doesn't know it should stop running dead tests; people running large-scale cleanups aren't aware that there's no point migrating this code, as it is never run anyway.


So what if we could clean up dead code automatically? That was exactly what people started thinking several years ago, during the Zürich Engineering Productivity team's annual hackathon. The Sensenmann project, named after the German word for the embodiment of Death, has been highly successful. It submits over 1000 deletion changelists per week, and has so far deleted nearly 5% of all C++ at Google.


Its goal is simple (at least, in principle): automatically identify dead code, and send code review requests ('changelists') to delete it.


What to Delete?

Google's build system, Blaze (the internal version of Bazel), helps us determine this: by representing dependencies between binary targets, libraries, tests, source files and more in a consistent and accessible way, we're able to construct a dependency graph. This allows us to find libraries that are not linked into any binary, and propose their deletion.
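
As a toy illustration of that idea (not Sensenmann's actual implementation), the dependency graph can be represented as a mapping from each target to its direct dependencies and walked from the binary targets; any library the walk never reaches is not linked into any binary. The target names below are invented.

```python
# Toy illustration only (invented target names, not Sensenmann's implementation):
# walk the dependency graph from every binary target; any library the walk never
# reaches is not linked into any binary and is a candidate for deletion.
deps = {
    "//app:main1": ["//lib:lib1", "//lib:shared"],
    "//app:main2": ["//lib:lib2", "//lib:shared"],
    "//lib:lib1": [],
    "//lib:lib2": [],
    "//lib:shared": [],
    "//lib:orphan": [],   # no binary depends on this one
}
binaries = ["//app:main1", "//app:main2"]

def reachable_from(roots):
    """Collect every target reachable from the given roots via a depth-first walk."""
    seen, stack = set(), list(roots)
    while stack:
        target = stack.pop()
        if target not in seen:
            seen.add(target)
            stack.extend(deps.get(target, []))
    return seen

linked = reachable_from(binaries)
print([t for t in deps if t not in linked])   # ['//lib:orphan']
```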


That's only a small part of the problem, though: what about all those binaries? All the one-shot data migration programs, and diagnostic tools for deprecated systems? If they don't get removed, all the libraries they depend on will be kept around too.


The only real way to know if programs are useful is to check whether they're being run, so for internal binaries (programs run in Google's data centres, or on employee workstations), a log entry is written when a program runs, recording the time and which specific binary it is. By aggregating this, we get a liveness signal for every binary used in Google. If a program hasn't been used for a long time, we try sending a deletion changelist.


What Not to Delete?

There are, of course, exceptions: some program code is there simply to serve as an example of how to use an API; some programs run only in places we can't get a log signal from. There are many other exceptions too, where removing the code would be deleterious. For this reason, it's important to have a blocklisting system so that exceptions can be marked, and we can avoid bothering people with spurious changelists.


The Devel's in the Details

Consider a simple case. We have two binaries, each depending on its own library, and also on a third, shared library. Drawing this (ignoring the source files and other dependencies), we find this kind of structure:


If we see that main1 is in active use, but main2 was last used over a year ago, we can propagate the liveness signal through the build tree, marking main1 as alive along with everything it depends upon. What is left can be removed; as main2 depends on lib2, we want to delete these two targets in the same change:
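
A toy sketch of that propagation is below, with invented target names, timestamps, and a one-year liveness threshold (the real system gets the graph from Blaze and the liveness signal from the run logs): only recently-run binaries are roots, and everything the walk does not reach falls out together.

```python
# Toy sketch of liveness propagation (invented targets, timestamps, and threshold):
# only binaries with a recent run log are roots; everything the walk does not
# reach is dead, so main2 and lib2 end up in the same deletion change.
from datetime import datetime, timedelta

deps = {
    "//app:main1": ["//lib:lib1", "//lib:shared"],
    "//app:main2": ["//lib:lib2", "//lib:shared"],
    "//lib:lib1": [], "//lib:lib2": [], "//lib:shared": [],
}
last_run = {
    "//app:main1": datetime(2023, 4, 20),   # in active use
    "//app:main2": datetime(2021, 11, 3),   # last run well over a year ago
}
cutoff = datetime(2023, 4, 28) - timedelta(days=365)

alive, stack = set(), [b for b, when in last_run.items() if when >= cutoff]
while stack:
    target = stack.pop()
    if target not in alive:
        alive.add(target)
        stack.extend(deps.get(target, []))

print([t for t in deps if t not in alive])   # ['//app:main2', '//lib:lib2']
```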


So far so good, but real production code has unit tests, whose build targets depend upon the libraries they test. This immediately makes the graph traversal a lot more complicated:


The testing infrastructure is going to run all those tests, including lib2_test, despite lib2 never being executed 'for real'. This means we cannot use test runs as a 'liveness' signal: if we did, we'd consider lib2_test to be alive, which would keep lib2 around forever. We would only be able to clean up untested code, which would severely hamper our efforts.


What we really want is for each test to share the fate of the library it is testing. We can do this by making the library and its test interdependent, thus creating loops in the graph:


This turns each library and its test into a strongly connected component. We can use the same technique as before, marking the 'live' nodes and then hunting for collections of 'dead' nodes to be deleted, but this time using Tarjan's strongly connected components algorithm to deal with the loops.
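
A rough sketch of the same idea follows, using the networkx package in place of Google's internal tooling and invented target names: the back-edges from each library to its test fuse the pair into one strongly connected component, and the condensed graph can then be walked from the live roots exactly as before.

```python
# Rough sketch using the networkx package (an assumption of this example, not
# Google's internal tooling) and invented target names. The library<->test
# back-edges fuse each pair into one strongly connected component, so a test
# stays alive only if the library it tests does.
import networkx as nx

g = nx.DiGraph()
g.add_edges_from([
    ("//app:main1", "//lib:lib1"),
    ("//app:main2", "//lib:lib2"),
    ("//lib:lib1_test", "//lib:lib1"), ("//lib:lib1", "//lib:lib1_test"),
    ("//lib:lib2_test", "//lib:lib2"), ("//lib:lib2", "//lib:lib2_test"),
])

condensed = nx.condensation(g)         # one node per strongly connected component
scc_of = condensed.graph["mapping"]    # original target -> SCC id

live_roots = ["//app:main1"]           # main2 has not run in over a year
alive = set()
for root in live_roots:
    alive.add(scc_of[root])
    alive |= nx.descendants(condensed, scc_of[root])

dead = sorted(t for t, scc in scc_of.items() if scc not in alive)
print(dead)   # ['//app:main2', '//lib:lib2', '//lib:lib2_test']
```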


Simple, right? Well, yes, if it's easy to identify the relationships between tests and the libraries they're testing. Sadly, that is not always the case. In the examples above, there's a simple naming convention which allows us to match tests to libraries, but we can't rely on that heuristic in general.


Consider the following two cases:



On the left, we have an implementation of the LZW compression algorithm, as separate compressor and decompressor libraries. The test is actually testing both of them, to ensure data isn't corrupted after being compressed and then decompressed. On the right, we have a web_test that is testing our web server library; it uses a URL encoder library for support, but isn't actually testing the URL encoder itself. On the left, we want to consider the LZW test and both LZW libraries as one connected component, but on the right, we'd want to exclude the URL encoder and consider web_test and web_lib as the connected component.


Despite requiring different treatment, these two cases have identical structures. In practice, we can encourage engineers to mark libraries like url_encoder_lib as being 'test only' (i.e. only for use in supporting unit tests), which can help in the web-test case; otherwise our current approach is to use the edit distance between test and library names to pick the most likely library to match to a given test. Being able to identify cases like the LZW example, with one test and two libraries, is likely to involve processing test coverage data, and has not yet been explored. A toy sketch of that name-based matching follows.
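
One plausible way to implement the name-based heuristic (the exact metric is not spelled out here) is a similarity score over target names, for example with Python's difflib; the target names are invented.

```python
# Toy illustration of the name-matching heuristic (difflib's ratio() stands in
# for whatever edit-distance metric is actually used; target names are invented).
import difflib

libraries = ["web_lib", "url_encoder_lib", "lzw_compressor_lib", "lzw_decompressor_lib"]

def best_match(test_name, candidates):
    """Return the library whose name is most similar to the test's name."""
    stem = test_name.removesuffix("_test")
    return max(candidates,
               key=lambda lib: difflib.SequenceMatcher(None, stem, lib).ratio())

print(best_match("web_test", libraries))             # web_lib
print(best_match("lzw_compressor_test", libraries))  # lzw_compressor_lib
```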


Focus on the User...

While the ultimate beneficiaries of dead code deletion are the software engineers themselves, many of whom appreciate the help in keeping their projects tidy, not everyone is happy to receive automated changelists trying to delete code they wrote. This is where the social engineering side of the project comes in, which is every bit as important as the software engineering.


Automatic code deletion is an alien concept to many engineers, and just as with the introduction of unit testing 20 years ago, many are resistant to it. It takes time and effort to change people's minds, along with a good deal of careful communication.


There are three main parts to Sensenmann's communication strategy. Of primary importance are the change descriptions, as they are the first thing a reviewer will see. They must be concise, but must provide enough background for all reviewers to be able to make a judgement. This is a difficult balance to achieve: too short, and many people will fail to find the information they need; too long, and one ends up with a wall of text no one will bother to read. Well-labelled links to supporting documentation and FAQs can really help here.


The second part is the supporting documentation. Concise and clear wording is vital here, too, as is a good navigable structure. Different people will need different information: some need reassurance that in a source control system, deletions can be rolled back; some will need guidance in how best to deal with a bad change, for example by fixing a misuse of the build system. Through careful thought, and iterations of user feedback, the supporting documentation can become a useful resource.


The third part is dealing with user feedback. This can be the hardest part at times: feedback is more frequently negative than positive, and can require a cool head and a good deal of diplomacy at times. However, accepting such feedback is the best way to improve the system in general, make users happier, and thus avoid negative feedback in the future.


Onwards and Upwards

Automatically deleting code may sound like a strange idea: code is expensive to write, and is generally considered to be an asset. However, unused code costs time and effort, whether in maintaining it, or cleaning it up. Once a code base reaches a certain size, it starts to make real sense to invest engineering time in automating the clean-up process. At Google's scale, it is estimated that automatic code deletion has paid for itself tens of times over, in saved maintenance costs.


The implementation requires solutions to problems both technical and social in nature. While a lot of progress has been made in both these areas, they are not entirely solved yet. As improvements are made, though, the rate of acceptance of the deletions increases and automatic deletion becomes more and more impactful. This kind of investment will not make sense everywhere, but if you have a huge mono-repository, maybe it'd make sense for you too. At least at Google, reducing the C++ maintenance burden by 5% is a huge win.