

Recently, we announced full HD Google Meet video calls, allowing you to set your Google Meet video resolution to 1080p. Similarly, we’re adding 1080p as a resolution option for live streaming video, an increase from 720p. If there is not enough bandwidth, the video feed will automatically adjust to the best quality possible. This update delivers a higher-quality, crisper video experience for those viewing a live stream.
For select Google Workspace editions, admins can now define a duration after which their users' messages in Google Chat will be deleted automatically. This can apply to messages in 1:1 conversations, group conversations, and spaces, and a retention period can be set separately for each conversation type. Note that the retention period only applies to messages sent while history is enabled, and it can range from 30 days to several years.
Posted by Chanel Greco, Developer Advocate Google Workspace
We recently launched the Google Workspace APIs Explorer, a new tool to help streamline developing on the Google Workspace Platform. What is this handy tool and how can you start using it?
The Google Workspace APIs Explorer is a web-based tool that lets you explore and test Google Workspace APIs visually, without writing any code. It's a great way to get familiar with the capabilities of the many Google Workspace APIs.
To use this tool, simply navigate to the Google Workspace APIs Explorer page and select the API that you want to explore. The Google Workspace APIs Explorer will then display a list of all the methods available for that API. You can click on any method to see more information about it, including its parameters, responses, and examples.
To test an API method, simply enter the required parameters and click on the "Execute" button. The Google Workspace APIs Explorer will then send the request to the API and return the response. Please note, the tool acts on real data and authenticates with your Google Account, so use caution when trying methods that create, modify, or delete data.
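To see how an Explorer experiment might translate into code, here is a minimal Python sketch of the same kind of request, using the google-api-python-client and google-auth-oauthlib packages (one of several client options, and not something the Explorer itself generates). The credentials.json file, scope, and parameters are placeholders you would replace with your own OAuth client configuration and method arguments.

```python
# Hypothetical sketch: the same Drive API call you might try in the
# APIs Explorer (drive.files.list), issued from Python instead.
from google_auth_oauthlib.flow import InstalledAppFlow
from googleapiclient.discovery import build

SCOPES = ["https://www.googleapis.com/auth/drive.metadata.readonly"]

def list_drive_files():
    # 'credentials.json' is a placeholder for an OAuth client you create
    # in your own Google Cloud project.
    flow = InstalledAppFlow.from_client_secrets_file("credentials.json", SCOPES)
    creds = flow.run_local_server(port=0)

    service = build("drive", "v3", credentials=creds)

    # Equivalent to filling in pageSize and fields in the Explorer
    # and clicking "Execute".
    response = service.files().list(
        pageSize=10, fields="files(id, name)"
    ).execute()
    for f in response.get("files", []):
        print(f["name"], f["id"])

if __name__ == "__main__":
    list_drive_files()
```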
These are some of the benefits of using the Google Workspace APIs Explorer:
- You can browse and discover the 25+ different Google Workspace APIs.
- The tool can help you create code samples for your integrations or add-ons.
- It can assist with troubleshooting problems with Google Workspace APIs.
- It is a neat way to see the results of API requests in real time.
You can access the Google Workspace APIs Explorer tool on the Google Workspace for Developers documentation, either through the navigation (Resources > API Explorer), or on its dedicated page. You will need a Google account to use the tool. This account can either be a Google Workspace account or the Google account you use for accessing tools like Gmail, Drive, Docs, Calendar, and more.
We also have a video showing how you can get started using the Google Workspace APIs Explorer – check it out here!
Unlike other mobile OSes, Android is built with a transparent, open-source architecture. We firmly believe that our users and the mobile ecosystem at-large should be able to verify Android’s security and safety and not just take our word for it.
We’ve demonstrated our deep belief in security transparency by investing in features that enable users to confirm that what they expect to be happening on their device is actually happening.
The Assurance of Android Protected Confirmation
One of those features is Android Protected Confirmation, an API that enables developers to utilize Android hardware to provide users even more assurance that a critical action has been executed securely. Using a hardware-protected user interface, Android Protected Confirmation can help developers verify a user’s action intent with a very high degree of confidence. This can be especially useful in a number of user moments – like during mobile payment transactions - that greatly benefit from additional verification and security.
We’re excited to see that Android Protected Confirmation is now gaining ecosystem attention as an industry-leading method for confirming critical user actions via hardware. Recently, UBS Group AG and the Bern University of Applied Sciences, co-financed by Innosuisse and UBS Next, announced they’re working with Google on a pilot project to establish Protected Confirmation as a common application programming interface (API) standard. In a pilot planned for 2023, UBS online banking customers with Pixel 6 or 7 devices can use Android Protected Confirmation backed by StrongBox, a certified hardware vault with physical attack protections, to confirm payments and verify online purchases through a hardware-based confirmation in their UBS Access App.
Demonstrating Real-World Use for Android Protected Confirmation
We’ve been working closely with UBS to bring this pilot to life and ensure they’re able to test it on Google Pixel devices. Demonstrating real-world use cases that are enabled by Android Protected Confirmation unlocks the promise of this technology by delivering improved and innovative experiences for our users. We’re seeing interest in Android Protected Confirmation across the industry, and OEMs are increasingly looking at how to build even more hardware-based confirmation into critical security user moments. We look forward to forming an industry alliance that will work together to strengthen mobile security and home in on Protected Confirmation support.
At Google, tens of thousands of software engineers contribute to a multi-billion-line mono-repository. This repository, stored in a system called Piper, contains the source code of shared libraries, production services, experimental programs, diagnostic and debugging tools: basically anything that's code-related.
This open approach can be very powerful. For example, if an engineer is unsure how to use a library, they can find examples just by searching. It also allows kind-hearted individuals to perform important updates across the whole repository, be that migrating to newer APIs, or following language developments such as Python 3, or Go generics.
Code, however, doesn't come for free: it's expensive to produce, but also costs real engineering time to maintain. Such maintenance cannot easily be skipped, at least if one wants to avoid larger costs later on.
But what if there were less code to maintain? Are all those lines of code really necessary?
Any large project accumulates dead code: there's always some module that is no longer needed, or a program that was used during early development but hasn't been run in years. Indeed, entire projects are created, function for a time, and then stop being useful. Sometimes they are cleaned up, but cleanups require time and effort, and it's not always easy to justify the investment.
However, while this dead code sits around undeleted, it's still incurring a cost: the automated testing system doesn't know it should stop running dead tests; people running large-scale cleanups aren't aware that there's no point migrating this code, as it is never run anyway.
So what if we could clean up dead code automatically? That was exactly what people started thinking several years ago, during the Zürich Engineering Productivity team's annual hackathon. The Sensenmann project, named after the German word for the embodiment of Death, has been highly successful. It submits over 1000 deletion changelists per week, and has so far deleted nearly 5% of all C++ at Google.
Its goal is simple (at least, in principle): automatically identify dead code, and send code review requests ('changelists') to delete it.
Google's build system, Blaze (the internal version of Bazel), helps us determine this: by representing dependencies between binary targets, libraries, tests, source files and more in a consistent and accessible way, we're able to construct a dependency graph. This allows us to find libraries that are not linked into any binary, and propose their deletion.
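As a rough illustration (a toy in-memory model, not Blaze's actual data representation), the following Python sketch builds a small dependency graph and finds library targets that no binary transitively links against:

```python
# Toy model of a build dependency graph, used only to illustrate finding
# libraries that no binary (transitively) links against.
from collections import deque

# target -> (kind, direct dependencies); names are invented examples.
BUILD_GRAPH = {
    "//app:main1": ("binary", ["//app:lib1", "//common:shared_lib"]),
    "//app:main2": ("binary", ["//app:lib2", "//common:shared_lib"]),
    "//app:lib1": ("library", []),
    "//app:lib2": ("library", []),
    "//common:shared_lib": ("library", []),
    "//legacy:old_lib": ("library", []),   # nothing depends on this
}

def reachable_from_binaries(graph):
    """All targets transitively depended on by some binary."""
    seen = set()
    queue = deque(t for t, (kind, _) in graph.items() if kind == "binary")
    seen.update(queue)
    while queue:
        target = queue.popleft()
        for dep in graph[target][1]:
            if dep not in seen:
                seen.add(dep)
                queue.append(dep)
    return seen

def unlinked_libraries(graph):
    """Libraries that are not linked into any binary."""
    live = reachable_from_binaries(graph)
    return [t for t, (kind, _) in graph.items()
            if kind == "library" and t not in live]

print(unlinked_libraries(BUILD_GRAPH))  # ['//legacy:old_lib']
```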
That's only a small part of the problem, though: what about all those binaries? All the one-shot data migration programs, and diagnostic tools for deprecated systems? If they don't get removed, all the libraries they depend on will be kept around too.
The only real way to know if programs are useful is to check whether they're being run, so for internal binaries (programs run in Google's data centres, or on employee workstations), a log entry is written when a program runs, recording the time and which specific binary it is. By aggregating this, we get a liveness signal for every binary used in Google. If a program hasn't been used for a long time, we try sending a deletion changelist.
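A minimal sketch of that aggregation step might look like the following, assuming an invented log format of (binary, run timestamp) pairs and a one-year threshold:

```python
# Sketch: turn run-log entries into a per-binary liveness signal.
# The log format and threshold here are invented for illustration.
import datetime

RUN_LOG = [
    ("//app:main1", datetime.datetime(2023, 5, 2)),
    ("//app:main1", datetime.datetime(2023, 5, 9)),
    ("//app:main2", datetime.datetime(2021, 11, 3)),
]

def live_binaries(run_log, now, max_age_days=365):
    """Binaries that have been run within the last max_age_days."""
    last_run = {}
    for binary, timestamp in run_log:
        last_run[binary] = max(timestamp, last_run.get(binary, timestamp))
    cutoff = now - datetime.timedelta(days=max_age_days)
    return {b for b, t in last_run.items() if t >= cutoff}

print(live_binaries(RUN_LOG, now=datetime.datetime(2023, 6, 1)))
# {'//app:main1'}  -> main2 hasn't run in over a year
```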
There are, of course, exceptions: some program code is there simply to serve as an example of how to use an API; some programs run only in places we can't get a log signal from. There are many other exceptions too, where removing the code would be deleterious. For this reason, it's important to have a blocklisting system so that exceptions can be marked, and we can avoid bothering people with spurious changelists.
Consider a simple case. We have two binaries, each depending on its own library, and also on a third, shared library. Drawing this (ignoring the source files and other dependencies), we find this kind of structure:
If we see that main1 is in active use, but main2 was last used over a year ago, we can propagate the liveness signal through the build tree, marking main1 as alive along with everything it depends upon. What is left can be removed; as main2 depends on lib2, we want to delete these two targets in the same change:
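Continuing the toy-graph sketch (hypothetical target names, not real Blaze output), propagating liveness is just a reachability walk from the live binaries; whatever is left unreached becomes a deletion candidate:

```python
# Sketch of liveness propagation over the example graph from the text.
from collections import deque

DEPS = {
    "main1": ["lib1", "shared_lib"],
    "main2": ["lib2", "shared_lib"],
    "lib1": [],
    "lib2": [],
    "shared_lib": [],
}

def propagate_liveness(deps, live_roots):
    """Mark every target reachable from a live binary as alive."""
    alive = set(live_roots)
    queue = deque(live_roots)
    while queue:
        target = queue.popleft()
        for dep in deps[target]:
            if dep not in alive:
                alive.add(dep)
                queue.append(dep)
    return alive

alive = propagate_liveness(DEPS, live_roots={"main1"})
dead = set(DEPS) - alive
print(sorted(dead))  # ['lib2', 'main2'] -> propose deleting both together
```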
So far so good, but real production code has unit tests, whose build targets depend upon the libraries they test. This immediately makes the graph traversal a lot more complicated:
The testing infrastructure is going to run all those tests, including lib2_test, despite lib2 never being executed 'for real'. This means we cannot use test runs as a 'liveness' signal: if we did, we'd consider lib2_test to be alive, which would keep lib2 around forever. We would only be able to clean up untested code, which would severely hamper our efforts.
What we really want is for each test to share the fate of the library it is testing. We can do this by making the library and its test interdependent, thus creating loops in the graph:
This turns each library and its test into a strongly connected component. We can use the same technique as before, marking the 'live' nodes and then hunting for collections of 'dead' nodes to be deleted, but this time using Tarjan's strongly connected components algorithm to deal with the loops.
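Here is a sketch of that approach on the same toy graph, with each library and its test made mutually dependent. Tarjan's algorithm collapses the loops into components, and liveness is then propagated component by component; this is illustrative only, not Sensenmann's actual implementation.

```python
# Sketch: collapse library/test loops into strongly connected components
# with Tarjan's algorithm, then propagate liveness over components.
from collections import deque

DEPS = {
    "main1": ["lib1", "shared_lib"],
    "main2": ["lib2", "shared_lib"],
    "lib1": ["lib1_test"],
    "lib1_test": ["lib1"],
    "lib2": ["lib2_test"],
    "lib2_test": ["lib2"],
    "shared_lib": ["shared_lib_test"],
    "shared_lib_test": ["shared_lib"],
}

def tarjan_sccs(deps):
    """Tarjan's algorithm: list of strongly connected components."""
    index, lowlink, on_stack = {}, {}, set()
    stack, sccs, counter = [], [], [0]

    def strongconnect(v):
        index[v] = lowlink[v] = counter[0]
        counter[0] += 1
        stack.append(v)
        on_stack.add(v)
        for w in deps[v]:
            if w not in index:
                strongconnect(w)
                lowlink[v] = min(lowlink[v], lowlink[w])
            elif w in on_stack:
                lowlink[v] = min(lowlink[v], index[w])
        if lowlink[v] == index[v]:
            scc = []
            while True:
                w = stack.pop()
                on_stack.discard(w)
                scc.append(w)
                if w == v:
                    break
            sccs.append(frozenset(scc))

    for v in deps:
        if v not in index:
            strongconnect(v)
    return sccs

def dead_components(deps, live_roots):
    """Components unreachable from any live binary."""
    sccs = tarjan_sccs(deps)
    component = {node: scc for scc in sccs for node in scc}
    alive = set()
    queue = deque(component[root] for root in live_roots)
    while queue:
        scc = queue.popleft()
        if scc in alive:
            continue
        alive.add(scc)
        for node in scc:
            for dep in deps[node]:
                queue.append(component[dep])
    return [scc for scc in sccs if scc not in alive]

for scc in dead_components(DEPS, live_roots={"main1"}):
    print(sorted(scc))
# ['lib2', 'lib2_test'] and ['main2'] are the deletion candidates
```

Once the loops are collapsed into components, the graph of components is acyclic again, so the same reachability walk used earlier still works.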
Simple, right? Well, yes, if it's easy to identify the relationships between tests and the libraries they're testing. Sadly, that is not always the case. In the examples above, there's a simple naming convention which allows us to match tests to libraries, but we can't rely on that heuristic in general.
Consider the following two cases:
On the left, we have an implementation of the LZW compression algorithm, as separate compressor and decompressor libraries. The test is actually testing both of them, to ensure data isn't corrupted after being compressed and then decompressed. On the right, we have a web_test that is testing our web server library; it uses a URL encoder library for support, but isn't actually testing the URL encoder itself. On the left, we want to consider the LZW test and both LZW libraries as one connected component, but on the right, we'd want to exclude the URL encoder and consider web_test and web_lib as the connected component.
Despite requiring different treatment, these two cases have identical structures. In practice, we can encourage engineers to mark libraries like url_encoder_lib as being 'test only' (i.e., only for use in supporting unit tests), which can help in the web-test case; otherwise our current approach is to use the edit distance between test and library names to pick the most likely library to match to a given test. Being able to identify cases like the LZW example, with one test and two libraries, is likely to involve processing test coverage data, and has not yet been explored.
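To make the name-matching heuristic concrete, here is a small sketch that pairs a test with the library whose name has the smallest Levenshtein edit distance. The target names come from the examples above, and this is only an approximation of the real matching logic:

```python
# Sketch: match a test to its most likely library by edit distance.
def edit_distance(a, b):
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def likely_library(test_name, library_names):
    """Pick the library whose name is closest to the test's name."""
    return min(library_names, key=lambda lib: edit_distance(test_name, lib))

libs = ["web_lib", "url_encoder_lib", "lzw_compressor", "lzw_decompressor"]
print(likely_library("web_test", libs))  # web_lib
```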
While the ultimate beneficiaries of dead code deletion are the software engineers themselves, many of whom appreciate the help in keeping their projects tidy, not everyone is happy to receive automated changelists trying to delete code they wrote. This is where the social engineering side of the project comes in, which is every bit as important as the software engineering.
Automatic code deletion is an alien concept to many engineers, and just as with the introduction of unit testing 20 years ago, many are resistant to it. It takes time and effort to change people's minds, along with a good deal of careful communication.
There are three main parts to Sensenmann's communication strategy. Of primary importance are the change descriptions, as they are the first thing a reviewer will see. They must be concise, but must provide enough background for all reviewers to be able to make a judgement. This is a difficult balance to achieve: too short, and many people will fail to find the information they need; too long, and one ends up with a wall of text no one will bother to read. Well-labelled links to supporting documentation and FAQs can really help here.
The second part is the supporting documentation. Concise and clear wording is vital here, too, as is a good navigable structure. Different people will need different information: some need reassurance that in a source control system, deletions can be rolled back; some will need guidance in how best to deal with a bad change, for example by fixing a misuse of the build system. Through careful thought, and iterations of user feedback, the supporting documentation can become a useful resource.
The third part is dealing with user feedback. This can be the hardest part at times: feedback is more frequently negative than positive, and can require a cool head and a good deal of diplomacy. However, accepting such feedback is the best way to improve the system in general, make users happier, and thus avoid negative feedback in the future.
Automatically deleting code may sound like a strange idea: code is expensive to write, and is generally considered to be an asset. However, unused code costs time and effort, whether in maintaining it, or cleaning it up. Once a code base reaches a certain size, it starts to make real sense to invest engineering time in automating the clean-up process. At Google's scale, it is estimated that automatic code deletion has paid for itself tens of times over, in saved maintenance costs.
The implementation requires solutions to problems both technical and social in nature. While a lot of progress has been made in both these areas, they are not entirely solved yet. As improvements are made, though, the rate of acceptance of the deletions increases and automatic deletion becomes more and more impactful. This kind of investment will not make sense everywhere, but if you have a huge mono-repository, maybe it'd make sense for you too. At least at Google, reducing the C++ maintenance burden by 5% is a huge win.