Responsible AI at Google Research: Context in AI Research (CAIR)

Artificial intelligence (AI) and related machine learning (ML) technologies are increasingly influential in the world around us, making it imperative that we consider the potential impacts on society and individuals in all aspects of the technology we create. To these ends, the Context in AI Research (CAIR) team develops novel AI methods in the context of the entire AI pipeline: from data to end-user feedback. The pipeline for building an AI system typically starts with data collection, followed by designing a model to run on that data, deployment of the model in the real world, and lastly, compiling and incorporating human feedback. Originating in the health space, and now expanded to additional areas, the work of the CAIR team impacts every aspect of this pipeline. While specializing in model building, we have a particular focus on building systems with responsibility in mind, including fairness, robustness, transparency, and inclusion.


Data

The CAIR team focuses on understanding the data on which ML systems are built. Improving the standards for the transparency of ML datasets is instrumental in our work. First, we employ documentation frameworks — Datasheets for Datasets and Model Cards for Model Reporting — to elucidate dataset and model characteristics and to guide the development of data and model documentation techniques.

For example, health datasets are highly sensitive and yet can have high impact. For this reason, we developed Healthsheets, a health-contextualized adaptation of a Datasheet. Our motivation for developing a health-specific sheet lies in the limitations of existing regulatory frameworks for AI and health. Recent research suggests that data privacy regulation and standards (e.g., HIPAA, GDPR, California Consumer Privacy Act) do not ensure ethical collection, documentation, and use of data. Healthsheets aim to fill this gap in ethical dataset analysis. The development of Healthsheets was done in collaboration with many stakeholders in relevant job roles, including clinical, legal and regulatory, bioethics, privacy, and product.
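
To make the idea concrete, here is a minimal sketch of what a small, machine-readable slice of a Healthsheet-style record could look like. The field names are illustrative assumptions about the kinds of questions a health-contextualized datasheet raises; they are not the published Healthsheet schema.

```python
# A minimal, illustrative sketch of dataset documentation in code form.
# The fields below are hypothetical examples, not the official Healthsheet questions.
from dataclasses import dataclass, asdict

@dataclass
class HealthDatasetDoc:
    name: str
    collection_purpose: str            # why the data was originally collected
    consent_obtained: bool             # whether participants consented to this use
    deidentification: str              # e.g., "HIPAA Safe Harbor", "none"
    known_representation_gaps: str     # groups under- or over-represented

doc = HealthDatasetDoc(
    name="example-ehr-cohort",
    collection_purpose="routine clinical care, later approved for research",
    consent_obtained=True,
    deidentification="HIPAA Safe Harbor",
    known_representation_gaps="few patients under 18; single health system",
)
print(asdict(doc))
```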

Further, we studied how Datasheets and Healthsheets could serve as diagnostic tools that surface the limitations and strengths of datasets. Our aim was to start a conversation in the community and tailor Healthsheets to dynamic healthcare scenarios over time.

To facilitate this effort, we joined the STANDING Together initiative, a consortium that aims to develop international, consensus-based standards for documentation of diversity and representation within health datasets and to provide guidance on how to mitigate risk of bias translating to harm and health inequalities. Being part of this international, interdisciplinary partnership that spans academic, clinical, regulatory, policy, industry, patient, and charitable organizations worldwide enables us to engage in the conversation about responsibility in AI for healthcare internationally. Over 250 stakeholders from across 32 countries have contributed to refining the standards.

Healthsheets and STANDING Together: towards health data documentation and standards.

Model

When ML systems are deployed in the real world, they may fail to behave in expected ways, making poor predictions in new contexts. Such failures can occur for a myriad of reasons and can carry negative consequences, especially within the context of healthcare. Our work aims to identify situations where unexpected model behavior may be discovered, before it becomes a substantial problem, and to mitigate the unexpected and undesired consequences.

Much of the CAIR team’s modeling work focuses on identifying when models are underspecified and mitigating the consequences. We show that models that perform well on held-out data drawn from a training domain are not equally robust or fair under distribution shift, because the models vary in the extent to which they rely on spurious correlations. This poses a risk to users and practitioners because it can be difficult to anticipate model instability using standard model evaluation practices. We have demonstrated that this concern arises in several domains, including computer vision, natural language processing, medical imaging, and prediction from electronic health records.

We have also shown how to use knowledge of causal mechanisms to diagnose and mitigate fairness and robustness issues in new contexts. Knowledge of causal structure allows practitioners to anticipate the generalizability of fairness properties under distribution shift in real-world medical settings. Further, investigating the capability for specific causal pathways, or “shortcuts”, to introduce bias in ML systems, we demonstrate how to identify cases where shortcut learning leads to predictions in ML systems that are unintentionally dependent on sensitive attributes (e.g., age, sex, race). We have shown how to use causal directed acyclic graphs to adapt ML systems to changing environments under complex forms of distribution shift. Our team is currently investigating how a causal interpretation of different forms of bias, including selection bias, label bias, and measurement error, motivates the design of techniques to mitigate bias during model development and evaluation.
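
As a concrete illustration of the shortcut problem, the sketch below trains a simple classifier on synthetic data in which age is spuriously correlated with the label, then intervenes on age and measures how far the predictions move. This is a toy probe of our own construction, not the CAIR team's published methodology, and all variable names are hypothetical.

```python
# Toy probe of shortcut reliance: intervene on a sensitive attribute and
# measure how much the model's predictions change.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
age = rng.normal(60, 10, n)
clinical = rng.normal(0, 1, n)
# In this toy training distribution the label is weakly tied to age as well,
# so a model can lean on age as a shortcut.
y = (clinical + 0.05 * (age - 60) + rng.normal(0, 0.5, n) > 0).astype(int)
X = np.column_stack([clinical, age])

model = LogisticRegression().fit(X, y)

# Crude probe: shift everyone's age by +10 years and compare predictions.
# Large shifts suggest the model depends on age rather than clinical signal.
X_shifted = X.copy()
X_shifted[:, 1] += 10
delta = np.abs(model.predict_proba(X_shifted)[:, 1] - model.predict_proba(X)[:, 1])
print(f"mean |change in p| under an age intervention: {delta.mean():.3f}")
```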

Shortcut Learning: For some models, age may act as a shortcut in classification when using medical images.

More broadly, the CAIR team develops methodology for building more inclusive models. For example, we work on the design of participatory systems, which allow individuals to choose whether to disclose sensitive attributes, such as race, when an ML system makes predictions. We hope that our methodological research positively impacts the societal understanding of inclusivity in AI method development.
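
One way such a participatory system could be wired up, shown here purely as an assumed sketch rather than the published design, is to train one model with the sensitive attribute and one without, and route each prediction on the individual's disclosure choice.

```python
# Hypothetical sketch: respect each person's choice about disclosing a
# sensitive attribute by routing between two models at prediction time.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
x = rng.normal(size=(n, 3))              # non-sensitive features
s = rng.integers(0, 2, size=(n, 1))      # sensitive attribute (e.g., a group label)
y = ((x[:, 0] + 0.5 * s[:, 0] + rng.normal(0, 0.5, n)) > 0).astype(int)

model_with_s = LogisticRegression().fit(np.hstack([x, s]), y)
model_without_s = LogisticRegression().fit(x, y)

def predict(features, sensitive=None):
    """Use the sensitive attribute only if the individual chose to disclose it."""
    if sensitive is None:
        return model_without_s.predict_proba(features.reshape(1, -1))[0, 1]
    return model_with_s.predict_proba(np.append(features, sensitive).reshape(1, -1))[0, 1]

print(predict(x[0], sensitive=s[0, 0]), predict(x[0]))
```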


Deployment

The CAIR team aims to build technology that improves the lives of all people through the use of mobile device technology. We aim to reduce suffering from health conditions, address systemic inequality, and enable transparent device-based data collection. As consumer technology such as fitness trackers and mobile phones becomes central to data collection for health, we explored the use of these technologies within the context of chronic disease, in particular multiple sclerosis (MS). We developed new data collection mechanisms and predictions that we hope will eventually revolutionize patients’ chronic disease management, clinical trials, medical reversals, and drug development.

First, we extended the open-source FDA MyStudies platform, which is used to create clinical study apps, to make it easier for anyone to run their own studies and collect good-quality data in a trusted and safe way. Our improvements include zero-config setups, so that researchers can prototype their study in a day; cross-platform app generation through the use of Flutter; and, most importantly, an emphasis on accessibility so that all patients’ voices are heard. We are excited to announce that this work has now been open sourced as an extension to the original FDA MyStudies platform. You can start setting up your own studies today!

To test this platform, we built a prototype app, which we call MS Signals, that uses surveys to interface with patients in a novel consumer setting. We collaborated with the National MS Society to recruit participants for a user experience study for the app, with the goal of reducing dropout rates and improving the platform further.

MS Signals app screenshots. Left: Study welcome screen. Right: Questionnaire.

Once data is collected, researchers could potentially use it to drive the frontier of ML research in MS. In a separate study, we established a research collaboration with the Duke Department of Neurology and demonstrated that ML models can accurately predict the incidence of high-severity symptoms within three months using continuously collected data from mobile apps. Results suggest that the trained models can be used by clinicians to evaluate the symptom trajectory of MS participants, which may inform decision making for administering interventions.

The CAIR team has been involved in the deployment of many other systems, for both internal and external use. For example, we have also partnered with Learning Ally to build a book recommendation system for children with learning disabilities, such as dyslexia. We hope that our work positively impacts future product development.


Human feedback

As ML models become ubiquitous throughout the developed world, it can be far too easy to leave voices in less developed countries behind. A priority of the CAIR team is to bridge this gap, develop deep relationships with communities, and work together to address ML-related concerns through community-driven approaches.

One of the ways we are doing this is through working with grassroots organizations for ML, such as Sisonkebiotik, an open and inclusive community of researchers, practitioners and enthusiasts at the intersection of ML and healthcare working together to build capacity and drive forward research initiatives in Africa. We worked in collaboration with the Sisonkebiotik community to detail limitations of historical top-down approaches for global health, and suggested complementary health-based methods, specifically those of grassroots participatory communities (GPCs). We jointly created a framework for ML and global health, laying out a practical roadmap towards setting up, growing and maintaining GPCs, based on common values across various GPCs such as Masakhane, Sisonkebiotik and Ro’ya.

We are engaging with open initiatives to better understand the role, perceptions, and use cases of AI for health in non-western countries through human feedback, with an initial focus on Africa. Together with Ghana NLP, we have worked to detail the need to better understand algorithmic fairness and bias in health in non-western contexts. We recently launched a study to expand on this work using human feedback.

Biases along the ML pipeline and their associations with African-contextualized axes of disparities.

The CAIR team is committed to creating opportunities to hear more perspectives in AI development. We partnered with Sisonkebiotik to co-organize the Data Science for Health Workshop at Deep Learning Indaba 2023 in Ghana. Everyone’s voice is crucial to developing a better future using AI technology.


Acknowledgements

We would like to thank Negar Rostamzadeh, Stephen Pfohl, Subhrajit Roy, Diana Mincu, Chintan Ghate, Mercy Asiedu, Emily Salkey, Alexander D’Amour, Jessica Schrouff, Chirag Nagpal, Eltayeb Ahmed, Lev Proleev, Natalie Harris, Mohammad Havaei, Ben Hutchinson, Andrew Smart, Awa Dieng, Mahima Pushkarna, Sanmi Koyejo, Kerrie Kauer, Do Hee Park, Lee Hartsell, Jennifer Graves, Berk Ustun, Hailey Joren, Timnit Gebru and Margaret Mitchell for their contributions and influence, as well as our many friends and collaborators at Learning Ally, National MS Society, Duke University Hospital, STANDING Together, Sisonkebiotik, and Masakhane.

Source: Google AI Blog


Google Summer of Code 2024: Celebrating our 20th Year!

Google Summer of Code (GSoC) will be celebrating its 20th anniversary with our upcoming 2024 program. Over the past 19 years we have welcomed over 19,000 new contributors to open source through the program under the guidance of 19,000+ mentors from over 800 open source organizations in a wide range of fields.

We are honored and thrilled to keep GSoC’s mission of bringing new contributors into open source communities alive for 20 years. Open source communities thrive on the fresh, exciting ideas and renewed energy that new contributors bring. Mentorship is a vital way to keep these new contributors coming into the open source ecosystem, where they can see collaboration at its finest: community members from all across the world, with different backgrounds and skills, working towards a common goal.

With just over a week left in the 2023 program, we have had one of our most enthusiastic groups of GSoC contributors: 841 contributors have completed their projects with 159 open source organizations, and 68 more are wrapping up their projects. A GSoC 2023 wrap-up blog post with stats and quotes from our contributors and mentors will be coming later this month.

Our contributors and mentors have given us invaluable feedback, and we are making one adjustment around project time commitment and scope. For the 2024 program, there will be three options for project scope: medium (~175 hours), large (~350 hours), and a new small size (~90 hours). The idea is to remove the barrier of available time that many potential contributors face and to open the program to people who want to learn about open source development but can’t dedicate all, or even half, of their summer to the program.

As a reminder, GSoC 2024 is open to students and to beginners in open source software development who are over the age of 18 at the time of registration.


Interested in applying to the Google Summer of Code Program?


Open Source Organizations

Check out our website to learn what it means to be a participating mentor organization. Watch the GSoC Org Highlight videos and get inspired about projects that contributors have worked on in the past.

Take a look through our mentor guide to learn what it means to be part of Google Summer of Code, how to prepare your community, gather excited mentors, and create achievable project ideas, as well as tips for applying. We welcome all types of open source organizations and encourage you to apply; it is especially exciting for us to welcome new orgs into the program, and we hope you are inspired to get involved with our growing community. In 2024, we look forward to accepting more artificial intelligence/machine learning open source organizations.


Want to be a GSoC Contributor?

New to open source development or a student? Eager to gain experience on real-world software development projects used by thousands of people? It is never too early to start thinking about what kind of open source organization you’d like to learn more about and how the application process works!

Watch our ‘Introduction to GSoC’ video to see a quick overview of the program. Read through our contributor guide for important tips from past participants on preparing your proposal, what to think about if you wish to apply for the program, and explore our website for other resources. Continue to check for more information about the 2024 program once the 2023 program ends later this month.

Please share information about the 2024 GSoC program with your friends, family, colleagues, and anyone you think may be interested in joining our community. We are excited to welcome new contributors and mentoring organizations to celebrate the 20th year of Google Summer of Code!

By Stephanie Taylor – Program Manager, Google Open Source Programs Office

Overcoming leakage on error-corrected quantum processors

The qubits that make up Google quantum devices are delicate and noisy, so it’s necessary to incorporate error correction procedures that identify and account for qubit errors on the way to building a useful quantum computer. Two of the most prevalent error mechanisms are bit-flip errors (where the energy state of the qubit changes) and phase-flip errors (where the phase of the encoded quantum information changes). Quantum error correction (QEC) promises to address and mitigate these two prominent errors. However, there is an assortment of other error mechanisms that challenges the effectiveness of QEC.

While we want qubits to behave as ideal two-level systems with no loss mechanisms, this is not the case in reality. We use the lowest two energy levels of our qubit (which form the computational basis) to carry out computations. These two levels correspond to the absence (computational ground state) or presence (computational excited state) of an excitation in the qubit, and are labeled |0⟩ (“ket zero”) and |1⟩ (“ket one”), respectively. However, our qubits also host many higher levels called leakage states, which can become occupied. Following the convention of labeling the level by indicating how many excitations are in the qubit, we specify them as |2⟩, |3⟩, |4⟩, and so on.
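
As a minimal numerical illustration of this labeling, assuming an idealized three-level model of a single transmon, the sketch below represents a state with a small |2⟩ component and reports its leakage population, i.e., the probability weight outside the computational basis.

```python
# Idealized three-level (qutrit) model of a transmon: |0> and |1> form the
# computational basis and |2> is the lowest leakage state.
import numpy as np

ket0 = np.array([1, 0, 0], dtype=complex)
ket1 = np.array([0, 1, 0], dtype=complex)
ket2 = np.array([0, 0, 1], dtype=complex)  # leakage state

# A state that is mostly |1> but has picked up a small |2> component.
psi = np.sqrt(0.99) * ket1 + np.sqrt(0.01) * ket2

def leakage_population(state):
    """Probability of finding the qutrit outside the {|0>, |1>} subspace."""
    return float(np.abs(state[2]) ** 2)

print(f"leakage population: {leakage_population(psi):.3f}")  # ~0.010
```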

In “Overcoming leakage in quantum error correction”, published in Nature Physics, we identify when and how our qubits leak energy to higher states, and show that the leaked states can corrupt nearby qubits through our two-qubit gates. We then identify and implement a strategy that can remove leakage and convert it to an error that QEC can efficiently fix. Finally, we show that these operations lead to notably improved performance and stability of the QEC process. This last result is particularly critical, since additional operations take time, usually leading to more errors.


Working with imperfect qubits

Our quantum processors are built from superconducting qubits called transmons. Unlike an ideal qubit, which only has two computational levels — a computational ground state and a computational excited state — transmon qubits have many additional states with higher energy than the computational excited state. These higher leakage states are useful for particular operations that generate entanglement, a necessary resource in quantum algorithms, and also keep transmons from becoming too non-linear and difficult to operate. However, the transmon can also be inadvertently excited into these leakage states through a variety of processes, including imperfections in the control pulses we apply to perform operations or from the small amount of stray heat leftover in our cryogenic refrigerator. These processes are collectively referred to as leakage, which describes the transition of the qubit from computational states to leakage states.

Consider a particular two-qubit operation that is used extensively in our QEC experiments: the CZ gate. This gate operates on two qubits, and when both qubits are in their |1⟩ level, an interaction causes the two individual excitations to briefly “bunch” together in one of the qubits to form |2⟩, while the other qubit becomes |0⟩, before returning to the original configuration where each qubit is in |1⟩. This bunching underlies the entangling power of the CZ gate. However, with a small probability, the gate can encounter an error and the excitations do not return to their original configuration, causing the operation to leave a qubit in |2⟩, a leakage state. When we execute hundreds or more of these CZ gates, this small leakage error probability accumulates.
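
The sketch below is a toy model of that bunching, not a calibration of the actual gate: on resonance, |11⟩ and |20⟩ form an effective two-level system, and evolving for exactly one full cycle returns all population to |11⟩ with an extra π phase, which is the conditional phase of the CZ gate. An imperfect cycle leaves residual amplitude in |20⟩, i.e., leakage. The coupling strength used here is an arbitrary illustrative value.

```python
# Toy model of the |11>-|20> "bunching" behind the CZ gate.
import numpy as np
from scipy.linalg import expm

g = 2 * np.pi * 20e6           # coupling strength (rad/s), illustrative value
H = g * np.array([[0, 1],       # effective Hamiltonian in the {|11>, |20>} basis
                  [1, 0]], dtype=complex)

t_gate = np.pi / g              # one full |11> -> |20> -> |11> cycle
U = expm(-1j * H * t_gate)

psi_11 = np.array([1, 0], dtype=complex)
psi_out = U @ psi_11
print(np.round(psi_out, 3))     # ~[-1, 0]: back in |11> with a pi phase (the CZ)
# If the cycle is imperfect, some amplitude remains in |20>: that is leakage.
```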

Transmon qubits support many leakage states (|2⟩, |3⟩, |4⟩, …) beyond the computational basis (|0⟩ and |1⟩). While we typically only use the computational basis to represent quantum information, sometimes the qubit enters these leakage states, which disrupt the normal operation of our qubits.

A single leakage event is especially damaging to normal qubit operation because it induces many individual errors. When one qubit starts in a leaked state, the CZ gate no longer correctly entangles the qubits, preventing the algorithm from executing correctly. Not only that, but CZ gates applied to one qubit in leaked states can cause the other qubit to leak as well, spreading leakage through the device. Our work includes extensive characterization of how leakage is caused and how it interacts with the various operations we use in our quantum processor.

Once the qubit enters a leakage state, it can remain in that state for many operations before relaxing back to the computational states. This means that a single leakage event interferes with many operations on that qubit, creating operational errors that are bunched together in time (time-correlated errors). The ability for leakage to spread between the different qubits in our device through the CZ gates means we also concurrently see bunches of errors on neighboring qubits (space-correlated errors). The fact that leakage induces patterns of space- and time-correlated errors makes it especially hard to diagnose and correct from the perspective of QEC algorithms.


The effect of leakage in QEC

We aim to mitigate qubit errors by implementing surface code QEC, a set of operations applied to a collection of imperfect physical qubits to form a logical qubit, which has properties much closer to an ideal qubit. In a nutshell, we use a set of qubits called data qubits to hold the quantum information, while another set of measure qubits check up on the data qubits, reporting on whether they have suffered any errors, without destroying the delicate quantum state of the data qubits. One of the key underlying assumptions of QEC is that errors occur independently for each operation, but leakage can persist over many operations and cause a correlated pattern of multiple errors. The performance of our QEC strategies is significantly limited when leakage causes this assumption to be violated.
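
For intuition, here is a deliberately simplified, classical sketch of a single parity check: a measure qubit reports whether an even or odd number of its neighboring data qubits have flipped, without revealing their individual states. Real stabilizer measurements are quantum circuits; this toy version only illustrates the parity idea.

```python
# Classical caricature of one surface-code parity check. Four bits stand in
# for data qubits; the "measure qubit" reports only their parity.
import numpy as np

data = np.array([0, 0, 0, 0])      # error frame of four neighboring data qubits

def check(frame):
    return int(frame.sum() % 2)    # parity the measure qubit would report

print(check(data))                 # 0: no error detected
data[2] ^= 1                       # a single bit-flip error on one data qubit
print(check(data))                 # 1: the parity check fires
```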

Once leakage manifests in our surface code transmon grid, it persists for a long time relative to a single surface code QEC cycle. To make matters worse, leakage on one qubit can cause its neighbors to leak as well.

Our previous work has shown that we can remove leakage from measure qubits using an operation called multi-level reset (MLR). This is possible because once we perform a measurement on measure qubits, they no longer hold any important quantum information. At this point, we can interact the qubit with a very lossy frequency band, causing whichever state the qubit was in (including leakage states) to decay to the computational ground state |0⟩. If we picture a Jenga tower representing the excitations in the qubit, we tumble the entire stack over. Removing just one brick, however, is much more challenging. Likewise, MLR doesn’t work with data qubits because they always hold important quantum information, so we need a new leakage removal approach that minimally disturbs the computational basis states.


Gently removing leakage

We introduce a new quantum operation called data qubit leakage removal (DQLR), which targets leakage states in a data qubit and converts them into computational states in the data qubit and a neighboring measure qubit. DQLR consists of a two-qubit gate (dubbed Leakage iSWAP — an iSWAP operation with leakage states) inspired by and similar to our CZ gate, followed by a rapid reset of the measure qubit to further remove errors. The Leakage iSWAP gate is very efficient and greatly benefits from our extensive characterization and calibration of CZ gates within the surface code experiment.

Recall that a CZ gate takes two single excitations on two different qubits and briefly brings them to one qubit, before returning them to their respective qubits. A Leakage iSWAP gate operates similarly, but almost in reverse, so that it takes a single qubit with two excitations (otherwise known as |2⟩) and splits them into |1⟩ on two qubits. The Leakage iSWAP gate (and for that matter, the CZ gate) is particularly effective because it does not operate on the qubits if there are fewer than two excitations present. We are precisely removing the |2⟩ Jenga brick without toppling the entire tower.
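
The sketch below is an idealized, assumed form of the Leakage iSWAP action described above, written as a permutation on a data-qutrit / measure-qutrit pair: it exchanges |2,0⟩ with |1,1⟩ and leaves everything else alone. It is not the calibrated pulse used on hardware.

```python
# Idealized Leakage iSWAP: move a leaked data qubit's extra excitation onto
# the (freshly reset) measure qubit, where it can be discarded.
import numpy as np

dim = 3  # qutrit levels |0>, |1>, |2>

def idx(d, m):
    return d * dim + m  # index of |data, measure> in the 9-dimensional space

U = np.eye(dim * dim, dtype=complex)
a, b = idx(2, 0), idx(1, 1)
U[a, a] = U[b, b] = 0
U[a, b] = U[b, a] = 1   # exchange |2,0> <-> |1,1>; all other states untouched

psi = np.zeros(dim * dim, dtype=complex)
psi[idx(2, 0)] = 1                           # leaked data qubit, reset measure qubit
out = U @ psi
print(int(np.argmax(np.abs(out))) == idx(1, 1))  # True: leakage became |1,1>
# A subsequent reset of the measure qubit then removes the extra excitation.
```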

By carefully measuring the population of leakage states on our transmon grid, we find that DQLR can reduce average leakage state populations over all qubits to about 0.1%, compared to nearly 1% without it. Importantly, we no longer observe a gradual rise in the amount of leakage on the data qubits, which was always present to some extent prior to using DQLR.

This outcome, however, is only half of the puzzle. As mentioned earlier, an operation such as MLR could be used to effectively remove leakage on the data qubits, but it would also completely erase the stored quantum state. We also need to demonstrate that DQLR is compatible with the preservation of a logical quantum state.

The second half of the puzzle comes from executing the QEC experiment with this operation interleaved at the end of each QEC cycle, and observing the logical performance. Here, we use a metric called detection probability to gauge how well we are executing QEC. In the presence of leakage, time- and space-correlated errors will cause a gradual rise in detection probabilities as more and more qubits enter and stay in leakage states. This is most evident when we perform no reset at all, which rapidly leads to a transmon grid plagued by leakage, and it becomes inoperable for the purposes of QEC.
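
To make the metric concrete, the following sketch computes detection probabilities from a fake syndrome record, assuming the usual convention that a detection event fires whenever a stabilizer outcome disagrees with its value in the previous QEC cycle. The data here is random noise purely for illustration.

```python
# Detection probability from a (fake) syndrome record.
import numpy as np

rng = np.random.default_rng(1)
# Rows are QEC cycles, columns are measure qubits (0/1 measurement outcomes).
syndromes = rng.integers(0, 2, size=(100, 8))

# A detection event fires when an outcome differs from the previous cycle.
detections = syndromes[1:] ^ syndromes[:-1]
detection_prob = detections.mean(axis=1)   # per-cycle average over measure qubits

print(detection_prob[:5])
# With leakage present, this curve drifts upward over cycles; with DQLR it stays flat.
```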

The prior state-of-the-art in our QEC experiments was to use MLR on the measure qubits to remove leakage. While this kept leakage population on the measure qubits (green circles) sufficiently low, data qubit leakage population (green squares) would grow and saturate to a few percent. With DQLR, leakage population on both the measure (blue circles) and data qubits (blue squares) remain acceptably low and stable.

With MLR, the large reduction in leakage population on the measure qubits drastically decreases detection probabilities and mitigates a considerable degree of the gradual rise. This reduction in detection probability happens even though we spend more time dedicated to the MLR gate, when other errors can potentially occur. Put another way, the correlated errors that leakage causes on the grid can be much more damaging than the uncorrelated errors from the qubits waiting idle, and it is well worth it for us to trade the former for the latter.

When only using MLR, we observed a small but persistent residual rise in detection probabilities. We ascribed this residual increase in detection probability to leakage accumulating on the data qubits, and found that it disappeared when we implemented DQLR. And again, the observation that the detection probabilities end up lower compared to only using MLR indicates that our added operation has removed a damaging error mechanism while minimally introducing uncorrelated errors.

Leakage manifests during surface code operation as increased errors (shown as error detection probabilities) over the number of cycles. With DQLR, we no longer see a notable rise in detection probability over more surface code cycles.


Prospects for QEC scale-up

Given these promising results, we are eager to implement DQLR in future QEC experiments, where we expect error mechanisms outside of leakage to be greatly improved, and sensitivity to leakage to be enhanced as we work with larger and larger transmon grids. In particular, our simulations indicate that scale-up of our surface code will almost certainly require a large reduction in leakage generation rates, or an active leakage removal technique over all qubits, such as DQLR.

Having laid the groundwork by understanding where leakage is generated, capturing the dynamics of leakage after it presents itself in a transmon grid, and showing that we have an effective mitigation strategy in DQLR, we believe that leakage and its associated errors no longer pose an existential threat to the prospects of executing a surface code QEC protocol on a large grid of transmon qubits. With one fewer challenge standing in the way of demonstrating working QEC, the pathway to a useful quantum computer has never been more promising.


Acknowledgements

This work would not have been possible without the contributions of the entire Google Quantum AI Team.

Source: Google AI Blog


San Antonio’s Young Women’s Leadership Academy (YWLA) empowering young women with the support of Google Fiber

Google Fiber is committed to supporting women in Science, Technology, Engineering, and Mathematics (STEM). As part of that commitment, we have supported the San Antonio Young Women's Leadership Academy Robotics program for the past several years with a grant that enables more young women at the academy to participate. With the program thriving and the team winning its first competition this year, Ignacia Negrete Kilgore, YWLA CTE Departmental Chair, shares the impact the program is making and her gratitude for the sponsorships and partnerships behind its success so far.




Young Women’s Leadership Academy (YWLA) is a nonprofit STEM-focused organization in San Antonio, Texas. The entirety of what we do is dedicated to supporting young women by providing them with the necessary academic skills in STEM to achieve success in college — and it’s an organization I’m proud to be a part of. Right now, we serve over 500 female students, the majority of whom are Latina, live in low-income households, and will soon become first-generation college students.


But before we get into the heart of the work we do at YWLA, I want to tell you a bit about how I fit into all of this. 

I started my engineering journey at Texas Tech University. I didn't know much about engineering at all — but I knew I wanted to learn. At first, it was very challenging, especially being one of the few women in the mechanical engineering program, and a Latina who was still learning English. I knew during my time there that I wanted to help change this feeling of “outsiderness” for future female engineers.

Soon after I graduated from Texas Tech, I was approached by a representative of YWLA with the opportunity to teach engineering to young women in their program. Just as I’d hoped, my doorway to make a difference in my field opened up. That was 8 years ago — and I’m still here, doing the work that I love by serving the future of women in STEM. 

Shortly after I started teaching, I began envisioning where I wanted my engineering program to go. That’s when the transformation of San Antonio’s only all-girl FRC Robotics program came into play.

I got to work with my team and quickly added courses that would give our students certifications like AutoCAD and Inventor, and college credits for Computer Science, which led to the Robotics team qualifying for competitions. It was a big moment for our program and our students. But we had another massive hurdle to overcome: a stark lack of funding.

Simply put, though we began to qualify for state competitions, we couldn’t afford to attend them. This felt incredibly unfair to our hardworking students, so once again… we got to work.

I put together a sponsorship package and was grateful to connect with a local member of the GFiber team, who understood the importance of the program and knew it was the type of digital inclusion and equity initiative that GFiber works to support.



That connection changed everything for us. Thanks to GFiber, we've been able to grow the program, provide better equipment, and not only qualify for competitions, but attend them (with multiple students)!  

Last year, the program was able to pay for additional hotel rooms and food, and to rent an additional vehicle to carry a total of 14 students (almost the entire team) to the competition. This was a game changer for our kids. Now, they could do the work and see their work in a competitive environment. This excitement from our students has radiated outwards. Our program is growing like never before and our graduates are ascending to new heights — attending prestigious engineering schools.

We've had students go to MIT, Brown University, Rice University, Texas A&M and the University of Texas at Austin, among others. In addition to getting into these competitive programs, the girls who participate in the robotics program are persisting through their rigorous engineering and STEM degree programs to earn their degrees. The demands of the competitive robotics environment help them build foundational skills for navigating these male-dominated spaces at the college level and beyond. This is the kind of impact that the YWLA Robotics program is making with the support of GFiber.



The achievements of our Robotics program would not be possible without a supportive team of people contributing to our success. We are truly blessed to have mentors and volunteers with great skills and experience. I am also thankful for my Head of School, Delia Montelongo, and my mentor and friend, Ashley Cash; these two inspiring women have taught me the skills needed to advocate for female STEM education and have always trusted and given me the freedom to go out there and find sponsors and partners like GFiber. It is amazing to see the growth and success that has come about due to all the support. I look forward to the continuous success of the Robotics program to educate and impact many more young girls, making great strides towards digital equity and inclusion.

Posted By Ignacia Kilgore, CTE Departmental Chair - San Antonio YWLA



Filter by people or groups in Google Drive

What’s changing

We’re adding a new filter to Google Drive that lets you see which files or folders have been shared with specific people or groups. 

Whether you’re trying to share a file with a wider audience or reduce oversharing, this feature will give you greater visibility into who has access to files within and outside of your organization.

Getting started 

  • Admins: There is no admin control for this feature. The groups that appear in the People filter are based on the target audiences set up at the domain level by admins.  
  • End users: 
    • To find files shared with a person or group in drive.google.com: 
      • Navigate to My Drive, Shared drives, Shared with me, or Recent
      • Click on “People” filter
      • Search for a person or group who the files are shared with in the filter
      • Select “Shared with”
      • View the filtered list of files
    • To find files shared outside of your organization: 
      • Navigate to My Drive, Shared drives, Shared with me, or Recent
      • Click on “People” filter
      • Select “External users”
      • View the filtered list of files
    • To find files you own that are shared with a specific person or group: 
      • Navigate to My Drive, Shared drives, Shared with me, or Recent
      • Click on “People” filter
      • Search for yourself and select “Owner”
      • Click on “People” filter
      • Search for a person or group who the files are shared with in the filter
      • View the filtered list of files

Rollout pace 


Availability 

  • Available to all Google Workspace customers and users with personal Google Accounts

Streamlining the user experience in Google Chat to help you find what you need much faster

What’s changing 

We recently introduced numerous enhancements across Google Chat, and today we’re excited to announce the general availability of three features highlighting a new integrated experience. 

The redesigned navigation panel brings direct messages and spaces together and introduces shortcuts, a new framework to help you stay on top of your messages. 

To reduce friction while navigating between messages, home helps you quickly catch up on any new activity across all conversations in a single location. Additionally, you can narrow down your view by filtering for unread messages. 



Mentions give you visibility into the messages addressed specifically to you. Within this single destination for important, actionable messages, you can see and navigate to the messages that @-mention you. 

In addition to these improvements to the user experience, we’re updating the icon for Google Chat over the coming weeks. The new icon has a cohesive look with other popular Workspace products and reflects the central role of business messaging and collaboration in Workspace. 


Who’s impacted 

End users 


Why you’d use it 

These features are designed to help you stay on top of the busy flow of communication and make it easier to prioritize and find the conversations that are most important to you. 


Additional details 

The direct message and spaces sections will be listed separately, but are scrollable and collapsible in one unified list. 

The list of messages where you have been @-mentioned is sorted by recency. 
  • Each row represents the message within the thread that mentions you. 
  • If you are mentioned multiple times in a conversation, each mention will show up as its own row. 
  • Unread messages are highlighted in light blue and include a blue dot. 
Clicking on a message in the Home or Mentions view will bring you directly to the conversation or thread where you can respond. 


Getting started 

Rollout pace 


Availability 

  • Available to all Google Workspace customers and users with personal Google Accounts 

Resources 

Ensuring high-quality apps on Google Play

Posted by Kobi Gluck, Director of Product Management, Google Play

Every day, Google Play helps billions of people around the world discover engaging, helpful, and enriching experiences on their devices. Maintaining consistently high app quality across these experiences is our top priority, which is why we continuously invest in new tools, features, and programs to help developers deliver the best apps and games.

Previously, we’ve highlighted our efforts to amplify the highest-quality apps on Google Play, as well as steer users away from lower-quality ones. Today, we’re sharing an update on this work and introducing some new policies and programs to boost app quality across the platform and connect people with experiences they’ll love, wherever they are, on whatever device they’re using.

Helping existing developers comply with our updated verification requirements

Earlier this year, we announced that all developers must meet an expanded set of verification requirements before publishing apps on Google Play, to help users make informed choices, prevent the spread of malware, and reduce fraud. We’ve already rolled out the requirements for developers creating new Play Console developer accounts.

Today, we're sharing how developers with existing accounts can complete these verifications to comply with the updated Play Console requirements policy. We know that developers of different types and sizes have different priorities, and that it might take some developers longer to verify than others. Because of this, we're allowing you to choose your own deadline by which to complete account verification.

Starting today, you can choose your preferred deadline in Play Console. Deadlines are available on a first-come, first-served basis, so choose your deadline early to guarantee a timeframe that works for you. If you don't choose a deadline before February 29, 2024, we'll assign one for you automatically.

Screenshot of 'choose your preferred account verification deadline now' prompt in Google Play Console

Required app testing for all new personal developer accounts

Developers who regularly use Play’s app testing tools before publishing release higher-quality apps and games, which can lead to higher ratings and more success on Google Play. In fact, apps that use our testing tools have, on average, three times the app installs and user engagement of those that don't.

To help developers reap these benefits, developers with newly created personal Play Console accounts will soon be required to test their apps with at least 20 people for a minimum of two weeks before applying for access to production. This will allow developers to test their app, identify issues, get feedback, and ensure that everything is ready before they launch. Developers who create new personal developer accounts will start seeing this requirement in Play Console in the coming days.

Increased investment in app review

As more developers use new technologies in their mobile apps, apps on Play are becoming more sophisticated — but so are abuse methodologies. To ensure we continue to provide a safe and trusted experience, our global review teams now spend more time assessing new apps to make sure they provide a valuable user experience that does not deceive or defraud users, either via the app or off-Play activity, and complies with our policies.

While we do not anticipate significant changes to our overall app review timelines, it may take us longer to review a small portion of apps, such as apps designed for children or that request certain device permissions. These deeper reviews help ensure that users are engaging in safe and trusted experiences through Google Play.

Connecting users to great, trustworthy apps

Given the global reach and regional diversity of the Android ecosystem, it’s important that every Play user can easily discover the best content for their needs and not be limited to a single recommendation. To continue providing great content for users that rewards developer investment in quality, we’ve already begun:

    • Providing users with information on whether an app may not perform as well on their device, including phones, large screens, and wearables and
    • Surfacing more high-quality local and regional content

Next year, we’ll add new signifiers to app listings to help users find what they’re looking for, starting with a badge identifying official government apps.

All of these changes are designed to help connect people with safe, high-quality, useful, and relevant experiences they’ll love, wherever they are and on whatever device they’re using. Thank you for your ongoing investment in the quality of your user experiences. We look forward to continuing to provide you with the platform that helps you supercharge your growth and success.