Hufsa Munawar wants Pakistani women to feel safe online

“Despite the amazing talent we have among women in Pakistan, a lot of them are not comfortable being online for safety reasons,” says Hufsa Munawar. Hufsa is a community manager for Google who works with developers in Pakistan and is keenly aware of the challenges women in her region face on the internet. She also manages Google’s Women Techmakers program, which recently brought online safety training to women in the area.

Together with Jigsaw, a team at Google that explores threats to open societies and builds technology to inspire scalable solutions, Women Techmakers has worked to bring online safety training to more and more women around the globe. “The workshop content really breaks down the major online security issues that exist, names them and gives suggestions for dealing with them,” Hufsa says. As a part of this program, the Women Techmakers ambassadors of Pakistan conducted eight online safety trainings and six ideathons to empower women to build solutions addressing online security concerns.

Photograph of a woman looking out into a crowd, smiling, with her arm raised in the air. She is wearing a purple tunic and holding a microphone. Her name tag reads "Hufsa."

Hufsa Munawar

Hufsa and her team were able to train over 1,300 participants across six different cities in Pakistan — and 100% of the participants who shared their feedback said they’ve faced online safety-related issues in the past. More encouragingly, 86% said they learned something new from the online training that would make them feel safer online.

“It’s about creating awareness and education,” she says. “When you feel like it’s not just you experiencing these things, but also others in your community, you start to feel more comfortable and motivated to look for solutions.” During the training, participants shared examples of moments when they felt unsafe online, and later the group went through examples of online threat tactics — things like doxing, hacking, hate speech, violent threats, video or image-based abuse, misinformation, defamation, cyber harassment and impersonation.

After exploring these threats, they turned their attention to solutions. During the ideathons, each participant proposed a solution to a problem statement given to them. These problem statements were selected from the training module and focused specifically on what women face online. “These sessions were so informative,” Hufsa says. “I’ve been in tech for eight years, and I was learning new things about how these kinds of online issues can be resolved.”

One ideathon team in Karachi included a young woman who had faced online harassment for wearing a head covering. “She came up on stage and presented her idea for an app-based community where you could talk about the online hate you were facing and receive help from an AI-based system that offered ideas on what you could do, and I was really proud of her,” says Hufsa. “Her confidence, to me, was the most important thing. I loved that she understood why it’s important to form a community, felt comfortable sharing her previous experiences and proposed a unique solution to the problem.”

Hufsa sees the growing interest in these kinds of safety trainings as a sign that the power of community building is becoming better understood. “Our Women Techmakers ambassadors from Pakistan, Hira Tariq, Irum Zahra, Aiman Saeed, Ramsha Siddiqui and Annie Gul, have laid down an excellent foundation for the conversation that needs to happen around women’s online safety,” she says. “This experience was so powerful because I saw that the participants trust the Women Techmakers ambassadors, and that they’re making real connections.” And the work continues: Hufsa says women who attended the workshop are requesting similar training sessions for their workplaces. “This was just the beginning. Our ambassadors and other friends in the community are working to continue training women in this space and make Digital Pakistan a safer and more inclusive space for our women.”

Exploring first-party data in our Publisher Privacy Q&A

In the third episode of our Publisher Privacy Q&A series, we’re talking about first-party data and its important role in the privacy-centric future of digital advertising.

Questions covered in episode 3:

  1. What is first-party data?
  2. How does first-party data differ from third-party data?
  3. Why is first-party data important?
  4. How can publishers use their first-party data to grow advertising revenue?

Stay tuned for the fourth Publisher Privacy Q&A episode coming in March. In the meantime, check out episodes 1 and 2 of this series.

Roger Mooking on Black History Month in Canada

Editor's note: This Black History Month, we’re highlighting Black perspectives, and sharing stories from Black Googlers, partners, and culture shapers from across Canada. 

Roger Mooking is the host of Man Fire Food and a judge on Wall of Chefs on Food Network Canada.

Black History Month has meant many things to me over the years, and my relationship to it changes almost annually. In my formative years growing up in Edmonton, Alberta, Black History Month was a welcome anomaly from my day-to-day reality and something I embraced like a Kardashian to a selfie. Since then, I’ve felt all the emotions for this month, ranging from pride to disdain smeared with a trailer load of aloofness. It’s complicated. It is necessary and important to recognize our heroes and educate every generation. The other side of this algebraic equation has me perpetually asking “who has granted us this opportunity,” given there is an undeniable power-play in this dynamic. There is no “white history month” because well…that history will never be relegated at all, and certainly not designated to 28 (29 with a leap year) days of the 365-day calendar.
Mooking is best known as the host of the grilling and barbecue show Man Fire Food on Food Network Canada and Cooking Channel. The popular travelling food series showcases a dynamic range of live fire cooking, including whole hog barbecue, lobster boils, Hawaiian imus, seafood roasts and more!

BHM always serves as a great reminder, when companies and media outlets who don’t reach out during the other 11 months of the year call during Black History Month for a contribution from people who got that melanin poppin’. Recently, I’ve observed many more Black faces in front of the camera, a welcome representation I did not grow up seeing, and one that is certainly valuable for the most impressionable, formative-age Black minds. Unfortunately, although significant, it often feels like performance art, as I don’t see the same commitment to that type of representation behind the scenes, in the boardrooms and in the executive levels of these same outlets. This reminder strengthens my resolve to continue doing what I do to level the playing field, which is constantly shifting. My team and I occupy boardrooms, television sets, creative spaces, studios, and work in a variety of teams in front of, and behind, the camera. We are always having to manage the creative commerce minefield with a balance of firm resolve, challenging discourse, and good old fun having.
Over the years, Mooking has garnered many accolades including the prestigious Premier’s Award for excellence in the field of Creative Arts and Design, a Gourmand World Cookbook Award, a SOCAN Classics Award and countless “Best Of” mentions.

It is so common, so much so that it has become our expectation, that I am asked to participate in a campaign “for my perspective,” only to have my perspective perceived as too niche or not mass market enough. This is when my curry chicken becomes a burger. The confusion is mind numbing because as I walk the streets of this beautiful country, I hear a vast array of languages being spoken, I find authentic restaurants representing every corner of the globe, and I see increasing numbers of babies being born of diverse parents. It is very clear, and the statistics support the fact, that the “mass market” and “my perspective” are not what they were when I first arrived in Canada at 5 years old. I’ve observed this shift across the country in major, secondary, and rural communities. Yet, I am mostly still facing the same discussions in these business environments that I was having two decades ago. Although the disconnect is incredibly frustrating, my commitment strengthens with every encounter, as they are numerous and often. Hopefully they will not be as numerous or as often for my kids’ kids’ generation. Maybe by then, dynamics and representation in favour of marginalized communities will shift enough for there to be a need for a “white history month,” and my daughters will be asking Bill Gates’ great-grandkids how they feel about it.

Chrome Beta for Android Update

Hi everyone! We've just released Chrome Beta 98 (98.0.4758.87) for Android: it's now available on Google Play.

You can see a partial list of the changes in the Git log. For details on new features, check out the Chromium blog, and for details on web platform updates, check here.

If you find a new issue, please let us know by filing a bug.

Krishna Govind
Google Chrome

New integrated view for Gmail features email, Google Meet, Google Chat and Spaces in one place

What’s changing 

We’re introducing a new, integrated view for Gmail, making it easy to move between critical applications like Gmail, Chat and Meet in one unified location.

We’ll introduce this new experience according to this timeline:

Beginning February 8, 2022: 
  • Users can opt in to test the new experience, allowing them to try it out and become accustomed to it. Users can revert to classic Gmail via settings.
  • We will share an update on the Workspace Updates Blog, along with Help Center content, once rollout begins.

By April 2022: 
  • Users who have not opted in will begin seeing the new experience by default, but can revert to classic Gmail via settings.

By the end of Q2 2022: 
  • This will become the standard experience for Gmail, with no option to revert.
  • Around the same time, users will also begin seeing the new streamlined navigation experience on Chat web (mail.google.com/chat). 
  • Important Note: This also means users will not have the option to configure Chat to display on the right side of Gmail.

We will share more information on the exact timing of these phases on the Workspace Updates blog.

Who’s impacted

End users



Why you’d use it 

When enabled, the new navigation menu lets you switch between your inbox and important conversations, and join meetings, without having to switch between tabs or open a new window.

Notification bubbles make it easy to stay on top of what immediately needs your attention. When working in Chat and Spaces, you can view a full list of conversations and Spaces on a single screen, making it easier to navigate to them and engage.

When working in your inbox, you’ll be able to view the full array of Mail and Label options available in Gmail today.

In the coming months, you will also see email and chat results when using the search bar, making it easier to find what you need by eliminating the need to search within a specific product.

We hope this new experience makes it easier for you to stay on top of what’s important and get work done faster in a single, focused location. Further, it will help reduce the need to switch between applications, windows, or tabs.

Getting started

  • Admins: There is no admin control for this feature.
  • End users: This feature will be OFF by default and can be enabled by the user from their Gmail settings. 

Rollout pace

  • Rapid Release domains: Gradual rollout (up to 15 days for feature visibility) starting on February 8, 2022
  • Scheduled Release domains: Extended rollout (potentially longer than 15 days for feature visibility) starting on February 22, 2022

Note: We will share an update on the Workspace Updates Blog, along with Help Center content, once rollout begins.


Availability

  • Available to Google Workspace Business Starter, Business Standard, Business Plus, Enterprise Essentials, Enterprise Standard, Enterprise Plus, Education Fundamentals, Education Plus, Frontline, and Nonprofits, as well as G Suite Basic and Business customers
  • Not available to Google Workspace Essentials customers

Controlling Neural Networks with Rule Representations

Deep neural networks (DNNs) provide more accurate results as the size and coverage of their training data increase. While investing in high-quality and large-scale labeled datasets is one path to model improvement, another is leveraging prior knowledge, concisely referred to as “rules” — reasoning heuristics, equations, associative logic, or constraints. Consider a common example from physics where a model is given the task of predicting the next state in a double pendulum system. While the model may learn to estimate the total energy of the system at a given point in time only from empirical data, it will frequently overestimate the energy unless also provided an equation that reflects the known physical constraints, e.g., energy conservation. The model fails to capture such well-established physical rules on its own. How could one effectively teach such rules so that DNNs absorb the relevant knowledge beyond simply learning from the data?

In “Controlling Neural Networks with Rule Representations”, published at NeurIPS 2021, we present Deep Neural Networks with Controllable Rule Representations (DeepCTRL), an approach used to provide rules for a model agnostic to data type and model architecture that can be applied to any kind of rule defined for inputs and outputs. The key advantage of DeepCTRL is that it does not require retraining to adapt the rule strength. At inference, the user can adjust rule strength based on the desired operation point of accuracy. We also propose a novel input perturbation method, which helps generalize DeepCTRL to non-differentiable constraints. In real-world domains where incorporating rules is critical — such as physics and healthcare — we demonstrate the effectiveness of DeepCTRL in teaching rules for deep learning. DeepCTRL ensures that models follow rules more closely while also providing accuracy gains at downstream tasks, thus improving reliability and user trust in the trained models. Additionally, DeepCTRL enables novel use cases, such as hypothesis testing of the rules on data samples and unsupervised adaptation based on shared rules between datasets.

The benefits of learning from rules are multifaceted:

  • Rules can provide extra information for cases with minimal data, improving the test accuracy.
  • A major bottleneck for widespread use of DNNs is the lack of understanding of the rationale behind their reasoning, along with their inconsistencies. By minimizing inconsistencies, rules can improve the reliability of and user trust in DNNs.
  • DNNs are sensitive to slight input changes that are human-imperceptible. With rules, the impact of these changes can be minimized as the model search space is further constrained to reduce underspecification.

Learning Jointly from Rules and Tasks
The conventional approach to implementing rules incorporates them by including them in the calculation of the loss. There are three limitations of this approach that we aim to address: (i) rule strength needs to be defined before learning (thus the trained model cannot operate flexibly based on how much the data satisfies the rule); (ii) rule strength is not adaptable to target data at inference if there is any mismatch with the training setup; and (iii) the rule-based objective needs to be differentiable with respect to learnable parameters (to enable learning from labeled data).
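The fixed-strength baseline described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation; the function names and the MSE task loss are assumptions for the sketch:

```python
import numpy as np

def combined_loss(y_pred, y_true, rule_penalty, lam=0.1):
    """Conventional rule incorporation: the rule enters the objective as a
    regularizer weighted by a coefficient lam that is fixed before training.
    Changing lam later means retraining the model from scratch."""
    task_loss = np.mean((y_pred - y_true) ** 2)  # task objective, e.g. MSE
    return task_loss + lam * rule_penalty        # rule term scaled by lam

# With a perfect prediction, only the weighted rule penalty remains.
loss = combined_loss(np.array([1.0]), np.array([1.0]), rule_penalty=2.0, lam=0.5)
```

Because lam is baked into the trained weights, sweeping it requires one training run per value, which is exactly the limitation DeepCTRL is designed to remove.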

DeepCTRL modifies canonical training by creating rule representations, coupled with data representations, which is the key to enabling the rule strength to be controlled at inference time. During training, these representations are stochastically concatenated with a control parameter, indicated by α, into a single representation. The influence of the rule on the output decision can be increased by raising the value of α. By modifying α at inference, users can control the behavior of the model to adapt to unseen data.

DeepCTRL pairs a data encoder and rule encoder, which produce two latent representations, which are coupled with corresponding objectives. The control parameter α is adjustable at inference to control the relative weight of each encoder.
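The coupling of the two encoders can be illustrated with a toy forward pass. This is a minimal numpy sketch under simplifying assumptions: linear stand-ins replace the learned encoders, and α is applied as a deterministic weight on the concatenated latents (the paper samples α stochastically during training):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear "encoders" (stand-ins for the paper's learned networks).
W_data = rng.normal(size=(4, 8))   # data encoder: 4 features -> 8-dim latent
W_rule = rng.normal(size=(4, 8))   # rule encoder: same input -> 8-dim latent
W_head = rng.normal(size=(16, 1))  # decision head over the concatenated latent

def deepctrl_forward(x, alpha):
    """Combine data and rule latents, weighted by the control parameter alpha."""
    z_data = x @ W_data
    z_rule = x @ W_rule
    # Concatenate the two representations, scaled by (1 - alpha) and alpha.
    z = np.concatenate([(1 - alpha) * z_data, alpha * z_rule], axis=-1)
    return z @ W_head

x = rng.normal(size=(3, 4))                   # a batch of 3 inputs
y_data_only = deepctrl_forward(x, alpha=0.0)  # rule branch fully off
y_rule_only = deepctrl_forward(x, alpha=1.0)  # data branch fully off
y_mixed = deepctrl_forward(x, alpha=0.5)      # both branches contribute
```

Note that alpha is an argument of the forward pass, not a training hyperparameter, so its value can be changed freely at inference without touching the weights.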

Integrating Rules via Input Perturbations
Training with rule-based objectives requires the objectives to be differentiable with respect to the learnable parameters of the model. There are many valuable rules that are non-differentiable with respect to input. For example, “higher blood pressure than 140 is likely to lead to cardiovascular disease” is a rule that is hard to combine with conventional DNNs. We therefore introduce a novel input perturbation method that generalizes DeepCTRL to non-differentiable constraints by applying small perturbations (random noise) to input features and constructing a rule-based constraint based on whether the outcome moves in the desired direction.
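One way to realize such a perturbation-based constraint is sketched below. This is an illustrative numpy toy, with a hypothetical linear-sigmoid model standing in for a DNN; the rule is the blood pressure example from the text:

```python
import numpy as np

rng = np.random.default_rng(1)

def model(x, w):
    """Toy risk model: a sigmoid over a linear score (stand-in for a DNN)."""
    return 1.0 / (1.0 + np.exp(-(x @ w)))

def perturbation_rule_loss(x, w, feature_idx, eps=0.1, n_samples=8):
    """Penalize outputs that move in the wrong direction when a feature is
    perturbed upward, e.g. "risk should not drop when systolic blood
    pressure increases". The rule itself never needs a gradient; we only
    compare model outputs before and after the perturbation."""
    base = model(x, w)
    loss = 0.0
    for _ in range(n_samples):
        x_pert = x.copy()
        # Small positive random noise on the rule's feature (blood pressure).
        x_pert[:, feature_idx] += eps * rng.random(x.shape[0])
        perturbed = model(x_pert, w)
        # Hinge-style penalty whenever the perturbed risk falls below baseline.
        loss += np.maximum(0.0, base - perturbed).mean()
    return loss / n_samples

x = rng.normal(size=(5, 3))
w_good = np.array([0.0, 0.0, 2.0])   # positive weight on feature 2: follows the rule
w_bad = np.array([0.0, 0.0, -2.0])   # negative weight: violates the rule
loss_good = perturbation_rule_loss(x, w_good, feature_idx=2)
loss_bad = perturbation_rule_loss(x, w_bad, feature_idx=2)
```

A model that respects the rule incurs no penalty, while one that violates it accrues a positive loss that a trainer could then minimize.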

Use Cases
We evaluate DeepCTRL on machine learning use cases from physics and healthcare, where utilization of rules is particularly important.

  • Improved Reliability Given Known Principles in Physics
  • We quantify reliability of a model with the verification ratio, which is the fraction of output samples that satisfy the rules. Operating at a better verification ratio could be beneficial, especially if the rules are known to be always valid, as in natural sciences. By adjusting the control parameter α, a higher rule verification ratio, and thus more reliable predictions, can be achieved.

    To demonstrate this, we consider the time-series data generated from double pendulum dynamics with friction from a given initial state. We define the task as predicting the next state of the double pendulum from the current state while imposing the rule of energy conservation. To quantify how much the rule is learned, we evaluate the verification ratio.

    DeepCTRL enables controlling a model's behavior after learning, but without retraining. For the example of a double pendulum, conventional learning imposes no constraints to ensure the model follows physical laws, e.g., conservation of energy. The situation is similar for the case of DeepCTRL where the rule strength is low. So, the total energy of the system predicted at time t+1 (blue) can sometimes be greater than that measured at time t (red), which is physically disallowed (bottom left). If rule strength in DeepCTRL is high, the model may follow the given rule but lose accuracy (discrepancy between red and blue is larger; bottom right). If rule strength is between the two extremes, the model may achieve higher accuracy (blue curve is close to red) and follow the rule properly (blue curve is lower than red one).

    We compare the performance of DeepCTRL on this task to conventional baselines of training with a fixed rule-based constraint as a regularization term added to the objective, λ. The highest of these regularization coefficients provides the highest verification ratio (shown by the green line in the second graph below), however, the prediction error is slightly worse than that of λ = 0.1 (orange line). We find that the lowest prediction error of the fixed baseline is comparable to that of DeepCTRL, but the highest verification ratio of the fixed baseline is still lower, which implies that DeepCTRL could provide accurate predictions while following the law of energy conservation. In addition, we consider the benchmark of imposing the rule-constraint with Lagrangian Dual Framework (LDF) and demonstrate two results where its hyperparameters are chosen by the lowest mean absolute error (LDF-MAE) and the highest rule verification ratio (LDF-Ratio) on the validation set. The performance of the LDF method is highly sensitive to what the main constraint is and its output is not reliable (black and pink dashed lines).

    Experimental results for the double pendulum task, showing the task-based mean absolute error (MAE), which measures the discrepancy between the ground truth and the model prediction, versus DeepCTRL as a function of the control parameter α. TaskOnly doesn’t have a rule constraint and Task & Rule has different rule strength (λ). LDF enforces rules by solving a constraint optimization problem.
    As above, but showing the verification ratio from different models.
    Experimental results for the double pendulum task showing the current and predicted energy at time t and t + 1, respectively.

    Additionally, the figures above illustrate the advantage DeepCTRL has over conventional approaches. For example, increasing the rule strength λ from 0.1 to 1.0 improves the verification ratio (from 0.7 to 0.9), but does not improve the mean absolute error. Arbitrarily increasing λ will continue to drive the verification ratio closer to 1, but will result in worse accuracy. Thus, finding the optimal value of λ will require many training runs through the baseline model, whereas DeepCTRL can find the optimal value for the control parameter α much more quickly.

  • Adapting to Distribution Shifts in Healthcare
  • The strengths of some rules may differ between subsets of the data. For example, in disease prediction, the correlation between cardiovascular disease and higher blood pressure is stronger for older patients than younger patients. In such situations, when the task is shared but data distribution and the validity of the rule differ between datasets, DeepCTRL can adapt to the distribution shifts by controlling α.

    Exploring this example, we focus on the task of predicting whether cardiovascular disease is present or not using a cardiovascular disease dataset. Given that higher systolic blood pressure is known to be strongly associated with cardiovascular disease, we consider the rule: “higher risk if the systolic blood pressure is higher”. Based on this, we split the patients into two groups: (1) unusual, where a patient has high blood pressure, but no disease or lower blood pressure, but has disease; and (2) usual, where a patient has high blood pressure and disease or low blood pressure, but no disease.

    We demonstrate below that the source data do not always follow the rule, and thus the effect of incorporating the rule can depend on the source data. The test cross entropy, which indicates classification accuracy (lower cross entropy is better), vs. rule strength for source or target datasets with varying usual / unusual ratios is visualized below. The error monotonically increases as α → 1 because the enforcement of the imposed rule, which doesn’t accurately reflect the source data, becomes more strict.

    Test cross entropy vs. rule strength for a source dataset with usual / unusual ratio of 0.30.

    When a trained model is transferred to the target domain, the error can be reduced by controlling α. To demonstrate this, we show three domain-specific datasets, which we call Target 1, 2, and 3. In Target 1, where the majority of patients are from the usual group, as α is increased, the rule-based representation has more weight and the resultant error decreases monotonically.

    As above, but for a Target dataset (1) with a usual / unusual ratio of 0.77.

    When the ratio of usual patients is decreased in Target 2 and 3, the optimal α is an intermediate value between 0 and 1. These demonstrate the capability to adapt the trained model via α.

    As above, but for Target 2 with a usual / unusual ratio of 0.50.
    As above, but for Target 3 with a usual / unusual ratio of 0.40.
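The verification ratio used throughout these experiments is straightforward to compute. A sketch for the double pendulum case follows, in illustrative numpy, assuming the energy-conservation rule from the physics use case (energy at t+1 must not exceed energy at t):

```python
import numpy as np

def verification_ratio(energy_t, energy_t1, tol=1e-6):
    """Fraction of output samples that satisfy the rule. For the double
    pendulum with friction, the rule is that predicted total energy at
    t+1 must not exceed the measured energy at t."""
    satisfied = energy_t1 <= energy_t + tol
    return satisfied.mean()

energy_t = np.array([1.00, 0.95, 0.90, 0.85])   # measured at time t
energy_t1 = np.array([0.99, 0.96, 0.88, 0.84])  # predicted at time t+1
ratio = verification_ratio(energy_t, energy_t1)  # 3 of 4 samples satisfy the rule
```

Sweeping α and recomputing this ratio on a validation set is how a user would pick an operating point that trades accuracy against rule compliance.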

Conclusions
Learning from rules can be crucial for constructing interpretable, robust, and reliable DNNs. We propose DeepCTRL, a new methodology used to incorporate rules into data-learned DNNs. DeepCTRL enables controllability of rule strength at inference without retraining. We propose a novel perturbation-based rule encoding method to integrate arbitrary rules into meaningful representations. We demonstrate three use cases of DeepCTRL: improving reliability given known principles, examining candidate rules, and domain adaptation using the rule strength.

Acknowledgements
We greatly appreciate the contributions of Jinsung Yoon, Xiang Zhang, Kihyuk Sohn and Tomas Pfister.

Source: Google AI Blog


Improving App Performance with Baseline Profiles

Or how to improve startup time by up to 40%

Posted by Kateryna Semenova, DevRel Engineer; Rahul Ravikumar, Software Engineer; Chris Craik, Software Engineer



Why is startup time important?

Many apps see a correlation between app performance and user engagement. People expect apps to be responsive and fast to load. Startup time is one of the major metrics for app performance and quality.

Some of our partners have already invested a lot of time and resources in app startup optimization. For example, check out the Facebook story.

In this blog post we’ll discuss Baseline Profiles and how they improve app and library performance, including startup time by up to 40%. While this post focuses on startup, Baseline Profiles also significantly reduce jank.


History

Android 9 (API level 28) introduced ART optimizing profiles in Play Cloud to improve app startup time. On average, we’ve seen that apps' cold starts are at least 15% faster across a variety of devices when Cloud Profiles are available.


How do Profiles work?

When the app is first launched after install or update, its code runs in an interpreted mode until it is JITted. In an APK, Java and Kotlin code is compiled as dex bytecode, but not fully compiled to machine code (since Android 6), due to the cost of storing and loading fully compiled apps. Classes and methods that are frequently used in the app, as well as those used for app startup, are recorded into a profile file. Once the device enters idle mode, ART compiles the apps based on these profiles. This speeds up subsequent app launches.

Starting with Android 9 (API level 28), Google Play also provides Cloud Profiles. When an app runs on a device, the profiles generated by ART are uploaded by the Play Store app and aggregated in the cloud. Once there are enough profiles uploaded for an application, the Play app uses the aggregated profile for subsequent installs.


Problem

While Cloud Profiles are great when they are available, they aren't always ready to be used when an app is installed. Collecting and aggregating the profiles usually takes several days, which is a problem when many apps update on a weekly basis. Many users will install an update before the Cloud Profile is available. The Google Android team started looking for other ways to improve the latency of profiles.


Solution

Baseline Profiles are a new mechanism to provide profiles which can be used on Android 7 (API level 24) and higher. A baseline profile is an ART profile generated by the Android Gradle plugin using a human-readable profile format that can be provided by apps and libraries. An example might look like this:

HSPLandroidx/compose/runtime/ComposerImpl;->updateValue(Ljava/lang/Object;)V
HSPLandroidx/compose/runtime/ComposerImpl;->updatedNodeCount(I)I
HLandroidx/compose/runtime/ComposerImpl;->validateNodeExpected()V
PLandroidx/compose/runtime/CompositionImpl;->applyChanges()V
HLandroidx/compose/runtime/ComposerKt;->findLocation(Ljava/util/List;I)I

Example for Compose library.
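Each rule line consists of usage flags followed by a class or method descriptor; per the documented profile rule format, H marks a method as hot, S as used during startup, and P as used post-startup. A small illustrative parser (a sketch for explanation, not part of any Android tooling):

```python
import re

# Flags precede the descriptor, which always begins with 'L' (JVM syntax).
RULE = re.compile(r"^(?P<flags>[HSP]*)(?P<target>L.+)$")

def parse_rule(line):
    """Split a baseline-profile rule into its usage flags and target.
    H = hot, S = used during startup, P = used post-startup; the
    remainder is the class (and optionally method) descriptor."""
    m = RULE.match(line.strip())
    flags, target = m.group("flags"), m.group("target")
    return {
        "hot": "H" in flags,
        "startup": "S" in flags,
        "post_startup": "P" in flags,
        "target": target,
    }

rule = parse_rule(
    "HSPLandroidx/compose/runtime/ComposerImpl;->updateValue(Ljava/lang/Object;)V"
)
```

So the first rule in the example above marks `ComposerImpl.updateValue` as hot and used both during and after startup.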


The binary profile is stored in a specific location in the APK assets directory (assets/dexopt/baseline.prof).

Baseline Profiles are created during build time, shipped as part of the APK to Play, and then sent from Play to users when an app is downloaded. They fill the gap in the ART Cloud Profile pipeline, when Cloud Profiles are not yet available, and automatically merge with Cloud Profiles when they are.


This diagram displays the baseline profile workflow from creation through end-user delivery.



One of the biggest benefits of Baseline Profiles is that they can be developed and evaluated locally, so developers can see realistic end-user performance improvements. They are also supported on lower versions of Android (7 and higher) than Cloud Profiles, which are only available starting in Android 9.


Impact


App devs

In early 2021, Google Maps switched from a two-week to a one-week release cycle. More frequent updates meant more frequently discarding local pre-compilation, and more users experiencing slow launches without Play Cloud Profiles. By using Baseline Profiles, Google Maps improved their average startup time by 30% and saw a corresponding increase in searches by 2.4%, an immense gain for such an established app.


Library devs

Code in a library is just like that of an app - it's not fully compiled by default, which can be a problem if it does significant work on the critical path of startup.

Jetpack Compose is a UI library that is not a part of the Android system image and thus not fully compiled when installed, unlike much of the Android View toolkit code. This was causing performance problems, especially for the first few cold launches of the app.

To solve this problem, Compose uses the profile installer library. It ships baseline profile rules which reduce startup time and jank in Compose apps.

Google Play Store’s search results page has been rewritten with Compose. After incorporating the Baseline Profile rules from Compose, time to render the initial search results page with images improved by ~40%.

The Android team has also added Baseline Profiles to relevant AndroidX libraries. This benefits all Android apps using these libraries. ConstraintLayout has found that shipping profile rules reduces animation frame times by more than one millisecond.


How to use Baseline Profiles


Create a custom Baseline Profile

All apps and library developers can benefit from including Baseline Profiles. Ideally, developers create profiles for their most critical user journeys to ensure that those journeys have consistently fast performance regardless of whether cloud profiles are available. Check out the detailed guide on how to set up Baseline Profiles for both app and library developers.


Update dependencies

If you are not ready to generate Baseline Profiles for your app right now, you can still benefit from them by updating your dependencies. If you build with Android Gradle Plugin 7.1.0-alpha05 or newer, you'll get Baseline Profiles included in your APK that are already provided by libraries (such as Jetpack). Google Play compiles your app with these profiles at install time. You can supplement these profiles as part of building your application.


Measure Improvements

Don’t forget to measure improvements. Follow the steps on how to measure startup with the generated profile locally.


Provide feedback

Please share your feedback and let us know your experience!

Categorize content and enhance content protection at scale with Google Drive labels

What’s changing 

Automated classification with Google Workspace DLP and labels-driven sharing restrictions are now generally available. These features were part of a beta we announced last year for enhanced content classification, governance, and data loss prevention (DLP) with Google Drive labels. 

A new Admin console setting can now automatically apply up to 5 labels to all new files your users create, or to all newly created files owned by specific parts of your organization.