
Announcing the third Google for Startups Accelerator: Climate Change cohort

Posted by Matt Ridenour, Head of Startup Developer Ecosystem - USA
Scaling high-potential startups aimed at tackling climate change can have an immensely positive impact on our planet.
In line with Google’s broader commitment to address climate change, we are proud to announce the third cohort for our Google for Startups Accelerator: Climate Change program. This 10-week digital accelerator brings the best of Google’s people, products and programming to help take early-stage North American climate tech startups to the next level.
Meet the cohort:
- Agrology's predictive agriculture platform helps farmers grow with confidence and beat climate change through data, insights and soil monitoring at scale.
- BattGenie provides Li-ion battery management software and solutions, enabling safe, fast charging while improving battery life cycle.
- Bodhi empowers solar companies to deliver amazing customer experiences, automating communications so installers can focus on increasing renewable energy access.
- Cambio is software that helps commercial real estate companies and their corporate tenants decarbonize their buildings.
- Cleartrace is disrupting legacy reporting with a new standard for how energy and decarbonization information is collected, stored, accessed and transacted.
- ElectricFish builds and deploys resilient, flexible EV infrastructure to accelerate decarbonization and support community climate adaptation.
- Enersion offers zero-emission solar trigeneration energy that converts solar radiation into refrigerant-free cooling, heating and electricity.
- Eugenie is an AI intelligence platform for asset-heavy manufacturers to track, trace, and reduce emissions while improving operations.
- Finch is a platform that decodes products' environmental footprints to help consumers and shares insights with businesses.
- Refiberd is tackling the 186-billion-pound global textile waste problem with the first AI-empowered circular textile sorting and reclamation system.
- Sesame Solar is decarbonizing disaster response with rapidly deployable mobile Nanogrids that provide essential services and continuous power from 100% renewable energy.
These companies will join the other 22 startups from across North America that have participated in the accelerator (see program alumni).
In addition to mentorship and technical project support, the 10-week program will focus on product design, customer acquisition, and leadership development, granting startups access to an expansive network of mentors, senior executives, and industry leaders. All Google for Startups Accelerators are equity-free: selected companies give up nothing to participate.
We are honored to partner with this cohort of companies through this accelerator and beyond, to advance their technologies and protect our planet.
The program kicks off on Tuesday, March 7 and concludes with a virtual Demo Day on May 11. Stay tuned and join us in celebrating these exceptional startups.
The Stable channel is being updated to 110.0.5481.112 (Platform version: 15278.64.0) for most ChromeOS devices and will be rolled out over the next few days. This build contains a number of bug fixes and security updates.
If you find new issues, please let us know by filing a bug or posting in our community help forums.
Interested in switching channels? Find out how.
Cole Brown,
Google ChromeOS
Note: Access to bug details and links may be kept restricted until a majority of users are updated with a fix. We will also retain restrictions if the bug exists in a third party library that other projects similarly depend on, but haven’t yet fixed.
[$3000] [1401666] High CVE-TBD Security: Sideload APKs on ChromeOS. Reported by Samuel Culeron for Approach Belgium
We would also like to thank all security researchers who worked with us during the development cycle to prevent security bugs from ever reaching the stable channel.
The Bid Manager API v1.1, which was deprecated in August 2022 and originally scheduled to sunset on February 28, 2023, will now sunset on April 27, 2023.
Please migrate to v2 before the sunset date to avoid an interruption of service.
You can read our release notes for more information about v2. Follow the steps in our v2 migration guide to help you migrate from v1.1 to v2.
If you run into issues or need help with your migration, please contact us using our support contact form.
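To sketch what a v2 call looks like in practice, here is a minimal Python example that builds a Bid Manager API v2 client with the google-api-python-client library and lists existing queries. The credential setup is an assumption about your environment, and the method names reflect our reading of the v2 reference; verify each call site against the migration guide.

```python
# Minimal sketch of calling the Bid Manager API v2 from Python.
# Assumes google-api-python-client is installed and that a service
# account JSON key is available (the credential loading below is
# illustrative, not prescriptive).
from google.oauth2 import service_account
from googleapiclient.discovery import build

credentials = service_account.Credentials.from_service_account_file(
    "service_account.json",
    scopes=["https://www.googleapis.com/auth/doubleclickbidmanager"],
)

# v1.1 clients were built with version="v1.1"; switching the version
# string is only the first step, since method names and payloads also
# changed in v2. Consult the migration guide for each call site.
service = build("doubleclickbidmanager", "v2", credentials=credentials)

# List the queries owned by the authenticated user.
response = service.queries().list().execute()
for query in response.get("queries", []):
    print(query["queryId"], query["metadata"]["title"])
```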
The latest Google Ads API release includes the following changes:
- Added TravelAssetSuggestionService to suggest required assets (such as headlines, descriptions and long descriptions) that can be used to create asset groups in Performance Max for travel goals campaigns. TravelAssetSuggestionService is available to a closed allowlist for now.
- Added fields to Customer to track the migration status of location and image assets.
- Added SmartCampaignSettingService.GetSmartCampaignStatus.
- Added BatchJobMetadata.execution_limit_seconds to set the limit of execution in seconds. Batch jobs will be canceled if their execution time is longer than specified in this field.
- Removed CombinedRuleUserListInfo and ExpressionRuleUserListInfo and their references in RuleBasedUserListInfo. Use FlexibleUserListInfo instead.
- ConversionUploadError.CUSTOMER_NOT_ACCEPTED_CUSTOMER_DATA_TERMS will be thrown if you try to upload a ClickConversion with user_identifiers set but have not accepted the customer data terms (see the sketch after this list).
- You can no longer upload conversions with the same gbraid, conversion_action, and conversion_date_time on different days. Trying to do so will cause a ConversionUploadError.CLICK_CONVERSION_ALREADY_EXISTS error.
- Asset.field_type_policy_summaries.
- local_service_id.
- Added ProductLinkService for adding and removing a link between a Google Ads account and an account of another product.
- ResponsiveSearchAdAssetRecommendation.current_ad.
- ProductBiddingCategoryInfo.country_code.
- Removed PreferredContentInfo and its reference from AdGroupBidModifier.
- Added TargetCpm.target_frequency_goal to support providing additional details about the goal of the Target CPM bidding strategy.
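To illustrate the customer data terms note above, here is a hedged sketch of such an upload with the google-ads Python client. All IDs, the YAML path, and the hashed email are placeholders; the upload is where CUSTOMER_NOT_ACCEPTED_CUSTOMER_DATA_TERMS would surface if the account has not accepted the terms.

```python
# Sketch: uploading a ClickConversion that carries user_identifiers.
# Account IDs, the conversion action, and the hashed email below are
# placeholders, not real values.
from google.ads.googleads.client import GoogleAdsClient

client = GoogleAdsClient.load_from_storage("google-ads.yaml")

click_conversion = client.get_type("ClickConversion")
click_conversion.conversion_action = client.get_service(
    "ConversionActionService"
).conversion_action_path("1234567890", "9876543210")
click_conversion.conversion_date_time = "2023-03-01 12:00:00+00:00"
click_conversion.conversion_value = 23.5
click_conversion.currency_code = "USD"

# Attaching a user identifier is what triggers the customer data terms
# check: the upload fails with CUSTOMER_NOT_ACCEPTED_CUSTOMER_DATA_TERMS
# if the terms have not been accepted for this account.
identifier = client.get_type("UserIdentifier")
identifier.hashed_email = "<sha256-hash-of-normalized-email>"
click_conversion.user_identifiers.append(identifier)

request = client.get_type("UploadClickConversionsRequest")
request.customer_id = "1234567890"
request.conversions.append(click_conversion)
request.partial_failure = True

service = client.get_service("ConversionUploadService")
response = service.upload_click_conversions(request=request)
print(response.partial_failure_error)
```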
Differential privacy (DP) machine learning algorithms protect user data by limiting the effect of each data point on an aggregated output with a mathematical guarantee. Intuitively, the guarantee implies that changing a single user's contribution should not significantly change the output distribution of the DP algorithm.
However, DP algorithms tend to be less accurate than their non-private counterparts because satisfying DP is a worst-case requirement: one has to add noise to "hide" changes in any potential input point, including "unlikely points" that have a significant impact on the aggregation. For example, suppose we want to privately estimate the average of a dataset, and we know that a sphere of diameter, Λ, contains all possible data points. The sensitivity of the average to a single point is bounded by Λ, and therefore it suffices to add noise proportional to Λ to each coordinate of the average to ensure DP.
A sphere of diameter Λ containing all possible data points.
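As a concrete, minimal sketch of that baseline (not code from the paper): with all points inside a sphere of diameter Λ, swapping one of n points moves the average by at most Λ/n in L2 norm, so the standard Gaussian mechanism adds noise at that scale. The function name and calibration choice are ours.

```python
import numpy as np

def dp_mean_gaussian(points, diameter, eps, delta, rng=None):
    """Standard (eps, delta)-DP mean via the Gaussian mechanism.

    `points` is an (n, d) array. Replacing one point in a dataset whose
    points all lie in a sphere of diameter `diameter` (Lambda) shifts
    the mean by at most diameter / n in L2 norm; that is the
    sensitivity the noise must hide.
    """
    if rng is None:
        rng = np.random.default_rng()
    n, d = points.shape
    sensitivity = diameter / n
    # One common Gaussian-mechanism calibration.
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / eps
    return points.mean(axis=0) + rng.normal(0.0, sigma, size=d)
```

Note that the noise scale is driven entirely by the worst-case diameter passed in, not by how spread out the data actually is.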
Now assume that all the data points are "friendly," meaning they are close together, and each affects the average by at most 𝑟, which is much smaller than Λ. Still, the traditional way of ensuring DP requires adding noise proportional to Λ to account for a neighboring dataset that contains one additional "unfriendly" point that is unlikely to be sampled.
Two adjacent datasets that differ in a single outlier. A DP algorithm would have to add noise proportional to Λ to each coordinate to hide this outlier.
In “FriendlyCore: Practical Differentially Private Aggregation”, presented at ICML 2022, we introduce a general framework for computing differentially private aggregations. The FriendlyCore framework pre-processes data, extracting a “friendly” subset (the core) and consequently reducing the private aggregation error seen with traditional DP algorithms. The private aggregation step adds less noise since we do not need to account for unfriendly points that negatively impact the aggregation.
In the averaging example, we first apply FriendlyCore to remove outliers, and in the aggregation step, we add noise proportional to 𝑟 (not Λ). The challenge is to make our overall algorithm (outlier removal + aggregation) differentially private. This constrains our outlier removal scheme and stabilizes the algorithm so that two adjacent inputs that differ by a single point (outlier or not) should produce any (friendly) output with similar probabilities.
We begin by formalizing when a dataset is considered friendly, which depends on the type of aggregation needed and should capture datasets for which the sensitivity of the aggregate is small. For example, if the aggregate is averaging, the term friendly should capture datasets with a small diameter.
To abstract away the particular application, we define friendliness using a predicate 𝑓 that is positive on points 𝑥 and 𝑦 if they are "close" to each other. For example, in the averaging application 𝑥 and 𝑦 are close if the distance between them is less than 𝑟. We say that a dataset is friendly (for this predicate) if every pair of points 𝑥 and 𝑦 are both close to a third point 𝑧 (not necessarily in the data).
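In code, the averaging predicate and the friendliness check could look like the sketch below (our naming, not the paper's). In Euclidean space, "both within 𝑟 of a common 𝑧" is equivalent to the pair being within 2𝑟 of each other, which is what the check exploits.

```python
import numpy as np
from itertools import combinations

def close(x, y, r):
    """Predicate f for averaging: x and y are close if within distance r."""
    return np.linalg.norm(x - y) <= r

def is_friendly(points, r):
    """A dataset is friendly if every pair (x, y) is close to a common
    point z. In Euclidean space that holds iff ||x - y|| <= 2r
    (take z to be the midpoint), so a pairwise check suffices."""
    return all(
        np.linalg.norm(x - y) <= 2 * r
        for x, y in combinations(points, 2)
    )
```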
Once we have fixed 𝑓 and defined when a dataset is friendly, two tasks remain. First, we construct the FriendlyCore algorithm that extracts a large friendly subset (the core) of the input stably. FriendlyCore is a filter satisfying two requirements: (1) It has to remove outliers to keep only elements that are close to many others in the core, and (2) for neighboring datasets that differ by a single element, 𝑦, the filter outputs each element except 𝑦 with almost the same probability. Furthermore, the union of the cores extracted from these neighboring datasets is friendly.
The idea underlying FriendlyCore is simple: The probability that we add a point, 𝑥, to the core is a monotonic and stable function of the number of elements close to 𝑥. In particular, if 𝑥 is close to all other points, it’s not considered an outlier and can be kept in the core with probability 1.
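A toy rendition of that filter is sketched below. The ramp function and thresholds are illustrative stand-ins; the paper calibrates the keep probabilities carefully to obtain the stability guarantee.

```python
import numpy as np

def friendly_core(points, r, rng=None):
    """Toy FriendlyCore filter over an (n, d) array: keep each point
    with a probability that grows monotonically with how many other
    points are compatible with it (some common z within r of both,
    i.e. pairwise distance at most 2r).

    The ramp below (probability 0 at n/2 compatible neighbors, 1 at
    roughly 3n/4) is a stand-in for the paper's calibrated function;
    points compatible with everything are kept with probability 1,
    clear outliers with probability 0.
    """
    if rng is None:
        rng = np.random.default_rng()
    n = len(points)
    dists = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    counts = (dists <= 2 * r).sum(axis=1) - 1  # neighbors, excluding self
    keep_prob = np.clip((counts - n / 2) / (n / 4), 0.0, 1.0)
    keep = rng.random(n) < keep_prob
    return points[keep]
```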
Second, we develop the Friendly DP algorithm that satisfies a weaker notion of privacy by adding less noise to the aggregate. This means that the outcomes of the aggregation are guaranteed to be similar only for neighboring datasets 𝐶 and 𝐶' such that the union of 𝐶 and 𝐶' is friendly.
Our main theorem states that if we apply a friendly DP aggregation algorithm to the core produced by a filter with the requirements listed above, then this composition is differentially private in the regular sense.
Other applications of our aggregation method are clustering and learning the covariance matrix of a Gaussian distribution. Consider the use of FriendlyCore to develop a differentially private k-means clustering algorithm. Given a database of points, we partition it into random equal-size smaller subsets and run a good non-private k-means clustering algorithm on each small set. If the original dataset contains k large clusters then each smaller subset will contain a significant fraction of each of these k clusters. It follows that the tuples (ordered sets) of k-centers we get from the non-private algorithm for each small subset are similar. This dataset of tuples is expected to have a large friendly core (for an appropriate definition of closeness).
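The partition-and-cluster step might be sketched as follows, treating scikit-learn's non-private KMeans as the black-box clustering algorithm (the function name and split strategy are ours):

```python
import numpy as np
from sklearn.cluster import KMeans

def kmeans_tuples(points, k, num_parts, rng=None):
    """Split the data into random, roughly equal-size parts and run a
    non-private k-means on each, yielding one k-tuple of centers per
    part. If the data has k large clusters, each part sees a
    significant fraction of every cluster, so the resulting tuples
    should be close to one another and form a large friendly core."""
    if rng is None:
        rng = np.random.default_rng()
    perm = rng.permutation(len(points))
    parts = np.array_split(points[perm], num_parts)
    return [
        KMeans(n_clusters=k, n_init=10).fit(part).cluster_centers_
        for part in parts
    ]
```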
We use our framework to aggregate the resulting tuples of k-centers (k-tuples). We define two such k-tuples to be close if there is a matching between them such that a center is substantially closer to its mate than to any other center.
In this picture, any pair of the red, blue, and green tuples are close to each other, but none of them is close to the pink tuple. So the pink tuple is removed by our filter and is not in the core.
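One plausible implementation of this closeness test is a minimum-cost matching between the two center sets, e.g., via SciPy's linear_sum_assignment; the margin factor below encodes "substantially closer" and is a free parameter of our sketch.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def tuples_close(centers_a, centers_b, margin=2.0):
    """Two k-tuples of centers (each a (k, d) array) are close if, under
    the minimum-cost matching, every center is at least `margin` times
    closer to its mate than to any other center in the opposite tuple."""
    cost = np.linalg.norm(centers_a[:, None, :] - centers_b[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(cost)
    for i, j in zip(rows, cols):
        matched = cost[i, j]
        others = np.delete(cost[i], j)
        if len(others) and matched * margin > others.min():
            return False
    return True
```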
We then extract the core with our generic sampling scheme and aggregate it.
Below are the empirical results of our algorithms based on FriendlyCore. We implemented them in the zero-Concentrated Differential Privacy (zCDP) model, which gives improved accuracy in our setting (with privacy guarantees similar to the better-known (𝜖, 𝛿)-DP).
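For intuition on that comparison: ρ-zCDP converts to (𝜖, 𝛿)-DP for any 𝛿 via the standard bound 𝜖 = ρ + 2√(ρ ln(1/𝛿)), which a few lines make concrete (our snippet, not the paper's code).

```python
import math

def zcdp_to_approx_dp(rho, delta):
    """Standard conversion: rho-zCDP implies (eps, delta)-DP with
    eps = rho + 2 * sqrt(rho * ln(1 / delta)) for any delta in (0, 1)."""
    return rho + 2.0 * math.sqrt(rho * math.log(1.0 / delta))

# Example: rho = 0.1 at delta = 1e-6 gives eps ≈ 2.45.
print(zcdp_to_approx_dp(0.1, 1e-6))
```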
We tested the mean estimation of 800 samples from a spherical Gaussian with an unknown mean. We compared it to the algorithm CoinPress. In contrast to FriendlyCore, CoinPress requires an upper bound 𝑅 on the norm of the mean. The figures below show the effect on accuracy when increasing 𝑅 or the dimension 𝑑. Our averaging algorithm performs better on large values of these parameters since it is independent of 𝑅 and 𝑑.
Left: Averaging in 𝑑 = 1000, varying 𝑅. Right: Averaging with 𝑅 = √𝑑, varying 𝑑.
We tested the performance of our private clustering algorithm for k-means. We compared it to the Chang and Kamath algorithm, which is based on recursive locality-sensitive hashing (LSH-clustering). For each experiment, we performed 30 repetitions and present the medians along with the 0.1 and 0.9 quantiles. In each repetition, we normalize the losses by the loss of k-means++ (where a smaller number is better).
The left figure below compares the k-means results on a uniform mixture of eight separated Gaussians in two dimensions. For small values of 𝑛 (the number of samples from the mixture), FriendlyCore often fails and yields inaccurate results. Yet, increasing 𝑛 increases the success probability of our algorithm (because the generated tuples become closer to each other) and yields very accurate results, while LSH-clustering lags behind.
FriendlyCore also performs well on large datasets, even without clear separation into clusters. We used the Fonollosa and Huerta gas sensors dataset, which contains 8M rows, each a 16-dimensional point defined by 16 sensor measurements at a given point in time. We compared the clustering algorithms for varying k. FriendlyCore performs well except for k = 5, where it fails due to the instability of the non-private algorithm used by our method: there are two different solutions for k = 5 with similar cost, so we do not get one set of tuples that are close to each other.
k-means results on gas sensors' measurements over time, varying k.
FriendlyCore is a general framework for filtering metric data before privately aggregating it. The filtered data is stable and makes the aggregation less sensitive, enabling us to increase its accuracy with DP. Our algorithms outperform private algorithms tailored for averaging and clustering, and we believe this technique can be useful for additional aggregation tasks. Initial results show that it can effectively reduce utility loss when we deploy DP aggregations. To learn more, and see how we apply it for estimating the covariance matrix of a Gaussian distribution, see our paper.
This work was led by Eliad Tsfadia in collaboration with Edith Cohen, Haim Kaplan, Yishay Mansour, Uri Stemmer, Avinatan Hassidim and Yossi Matias.
Hi everyone! We've just released Chrome Dev 112 (112.0.5594.1) for Android. It's now available on Google Play.
You can see a partial list of the changes in the Git log. For details on new features, check out the Chromium blog, and for details on web platform updates, check here.
If you find a new issue, please let us know by filing a bug.
Krishna Govind
Google Chrome