Tag Archives: Diversity

SCIN: A new resource for representative dermatology images

Health datasets play a crucial role in research and medical education, but it can be challenging to create a dataset that represents the real world. For example, dermatology conditions are diverse in their appearance and severity and manifest differently across skin tones. Yet, existing dermatology image datasets often lack representation of everyday conditions (like rashes, allergies and infections) and skew towards lighter skin tones. Furthermore, race and ethnicity information is frequently missing, hindering our ability to assess disparities or create solutions.

To address these limitations, we are releasing the Skin Condition Image Network (SCIN) dataset in collaboration with physicians at Stanford Medicine. We designed SCIN to reflect the broad range of concerns that people search for online, supplementing the types of conditions typically found in clinical datasets. It contains images across various skin tones and body parts, helping to ensure that future AI tools work effectively for all. We've made the SCIN dataset freely available as an open-access resource for researchers, educators, and developers, and have taken careful steps to protect contributor privacy.

Example set of images and metadata from the SCIN dataset.

Dataset composition

The SCIN dataset currently contains over 10,000 images of skin, nail, or hair conditions, directly contributed by individuals experiencing them. All contributions were made voluntarily with informed consent by individuals in the US, under an institutional review board (IRB)-approved study. To provide context for retrospective dermatologist labeling, contributors were asked to take images both close-up and from slightly further away. They were given the option to self-report demographic information and tanning propensity (self-reported Fitzpatrick Skin Type, i.e., sFST), and to describe the texture, duration, and symptoms related to their concern.

One to three dermatologists labeled each contribution with up to five dermatology conditions, along with a confidence score for each label. The SCIN dataset contains these individual labels, as well as an aggregated and weighted differential diagnosis derived from them that could be useful for model testing or training. These labels were assigned retrospectively and are not equivalent to a clinical diagnosis, but they allow us to compare the distribution of dermatology conditions in the SCIN dataset with existing datasets.
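To make the aggregation concrete, here is a minimal sketch of how per-label confidence scores might be combined into a normalized, weighted differential. The exact aggregation used for SCIN is described in the accompanying paper; the condition names and confidence values below are illustrative only.

```python
from collections import defaultdict

def weighted_differential(labels):
    """Combine per-dermatologist condition labels into a normalized,
    confidence-weighted differential diagnosis.

    `labels` is a list of (condition, confidence) pairs pooled across the
    one to three dermatologists who reviewed a case; confidence is on a
    1-5 scale.
    """
    totals = defaultdict(float)
    for condition, confidence in labels:
        totals[condition] += confidence
    grand_total = sum(totals.values())
    # Normalize so the weights across conditions sum to 1.
    return {c: w / grand_total for c, w in totals.items()}

# Example: two raters favor eczema, one suggests contact dermatitis.
case_labels = [("Eczema", 4), ("Eczema", 5), ("Contact dermatitis", 3)]
diff = weighted_differential(case_labels)
# "Eczema" carries 9/12 = 0.75 of the total weight.
```

A representation like this lets a model be tested against the full differential rather than a single forced label.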

The SCIN dataset contains largely allergic, inflammatory, and infectious conditions, while datasets from clinical sources focus on benign and malignant neoplasms.

While many existing dermatology datasets focus on malignant and benign tumors and are intended to assist with skin cancer diagnosis, the SCIN dataset consists largely of common allergic, inflammatory, and infectious conditions. The majority of images in the SCIN dataset show early-stage concerns — more than half arose less than a week before the photo, and 30% arose less than a day before the image was taken. Conditions within this time window are seldom seen within the health system and therefore are underrepresented in existing dermatology datasets.

We also obtained dermatologist estimates of Fitzpatrick Skin Type (estimated FST or eFST) and layperson labeler estimates of Monk Skin Tone (eMST) for the images. This allowed comparison of the skin condition and skin type distributions to those in existing dermatology datasets. Although we did not selectively target any skin types or skin tones, the SCIN dataset has a balanced Fitzpatrick skin type distribution (with more of Types 3, 4, 5, and 6) compared to similar datasets from clinical sources.

Self-reported and dermatologist-estimated Fitzpatrick Skin Type distribution in the SCIN dataset compared with existing un-enriched dermatology datasets (Fitzpatrick17k, PH², SKINL2, and PAD-UFES-20).

The Fitzpatrick Skin Type scale was originally developed as a photo-typing scale to measure the response of skin types to UV radiation, and it is widely used in dermatology research. The Monk Skin Tone scale is a newer 10-shade scale that measures skin tone rather than skin phototype, capturing more nuanced differences between the darker skin tones. While neither scale was intended for retrospective estimation using images, the inclusion of these labels is intended to enable future research into skin type and tone representation in dermatology. For example, the SCIN dataset provides an initial benchmark for the distribution of these skin types and tones in the US population.

The SCIN dataset has a high representation of women and younger individuals, likely reflecting a combination of factors. These could include differences in skin condition incidence, propensity to seek health information online, and variations in willingness to contribute to research across demographics.


Crowdsourcing method

To create the SCIN dataset, we used a novel crowdsourcing method, which we describe in the accompanying research paper co-authored with investigators at Stanford Medicine. This approach empowers individuals to play an active role in healthcare research. It allows us to reach people at earlier stages of their health concerns, potentially before they seek formal care. Crucially, this method uses advertisements on web search result pages — the starting point for many people’s health journey — to connect with participants.

Our results demonstrate that crowdsourcing can yield a high-quality dataset with a low spam rate. Over 97.5% of contributions were genuine images of skin conditions. After performing further filtering steps to exclude images that were out of scope for the SCIN dataset and to remove duplicates, we were able to release nearly 90% of the contributions received over the 8-month study period. Most images were sharp and well-exposed. Approximately half of the contributions include self-reported demographics, and 80% contain self-reported information relating to the skin condition, such as texture, duration, or other symptoms. We found that dermatologists’ ability to retrospectively assign a differential diagnosis depended more on the availability of self-reported information than on image quality.
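The filtering described above (spam removal, scope checks, de-duplication) can be sketched as a simple pipeline. The predicates and field names below are hypothetical, and in practice these steps involved manual review rather than automated checks.

```python
def filter_contributions(contributions, is_spam, in_scope, image_hash):
    """Keep genuine, in-scope, non-duplicate contributions.

    `is_spam` and `in_scope` are caller-supplied predicates; `image_hash`
    maps a contribution to a de-duplication key, e.g. a perceptual hash
    of its images.
    """
    seen, kept = set(), []
    for c in contributions:
        if is_spam(c) or not in_scope(c):
            continue
        key = image_hash(c)
        if key in seen:  # duplicate of an earlier submission
            continue
        seen.add(key)
        kept.append(c)
    return kept

# Toy usage with dict records and hypothetical fields.
batch = [
    {"id": 1, "spam": False, "scope": True, "hash": "a"},
    {"id": 2, "spam": True, "scope": True, "hash": "b"},
    {"id": 3, "spam": False, "scope": True, "hash": "a"},  # duplicate of 1
]
kept = filter_contributions(
    batch,
    is_spam=lambda c: c["spam"],
    in_scope=lambda c: c["scope"],
    image_hash=lambda c: c["hash"],
)
```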

Dermatologist confidence in their labels (scale from 1-5) depended on the availability of self-reported demographic and symptom information.

While perfect image de-identification can never be guaranteed, protecting the privacy of individuals who contributed their images was a top priority when creating the SCIN dataset. Through informed consent, contributors were made aware of potential re-identification risks and advised to avoid uploading images with identifying features. Post-submission privacy protection measures included manual redaction or cropping to exclude potentially identifying areas, reverse image searches to exclude publicly available copies, and metadata removal or aggregation. The SCIN Data Use License prohibits attempts to re-identify contributors.

We hope the SCIN dataset will be a helpful resource for those working to advance inclusive dermatology research, education, and AI tool development. By demonstrating an alternative to traditional dataset creation methods, SCIN paves the way for more representative datasets in areas where self-reported data or retrospective labeling is feasible.


Acknowledgements

We are grateful to all our co-authors Abbi Ward, Jimmy Li, Julie Wang, Sriram Lakshminarasimhan, Ashley Carrick, Bilson Campana, Jay Hartford, Pradeep Kumar S, Tiya Tiyasirisokchai, Sunny Virmani, Renee Wong, Yossi Matias, Greg S. Corrado, Dale R. Webster, Dawn Siegel (Stanford Medicine), Steven Lin (Stanford Medicine), Justin Ko (Stanford Medicine), Alan Karthikesalingam and Christopher Semturs. We also thank Yetunde Ibitoye, Sami Lachgar, Lisa Lehmann, Javier Perez, Margaret Ann Smith (Stanford Medicine), Rachelle Sico, Amit Talreja, Annisah Um’rani and Wayne Westerlind for their essential contributions to this work. Finally, we are grateful to Heather Cole-Lewis, Naama Hammel, Ivor Horn, Michael Howell, Yun Liu, and Eric Teasley for their insightful comments on the study design and manuscript.

Source: Google AI Blog


More voices = More Bazel

Takeaways from the BazelCon DEI lunch panel

In front of a standing-room-only lunch panel, Google’s head of Developer X strategy Minu Puranik asks us, “If there is one thing you want to change [about Bazel’s DEI culture], what would it be and why?”

We’d spent the last hour on three main themes: community culture, fostering trust, and growing our next generation of leaders. Moderated by Minu, our panel brought together a slate of brilliant people from underrepresented groups to give a platform to our experiences and ideas. Together with representatives and allies in the community, we explored methods for building inclusivity and sought a better understanding of the institutional and systemic barriers to increasing diversity.

Culture defines how we act, which informs who feels welcome to contribute. Studies show that diverse contributor backgrounds yield more and better results, so how do we create a culture where everyone feels safe to share, ask questions, and contribute? Helen Altshuler, co-founder and CEO of EngFlow, relayed her experience regarding some best practices:

“Having people that can have your back is important to get past the initial push to submit something and feeling like it’s ok. You don’t need to respond to everything in one go. Last year, Cynthia Coah and I gave a talk on how to make contributions to the Bazel community. Best practices: better beginners’ documentation, classifying GitHub issues as ‘good first issue,’ and having Slack channels where code owners can play a more active role.”

                    Helen Altshuler, co-founder and CEO of EngFlow

Diving further, we discussed the need to make sure new contributors get positive, actionable feedback to reward them with context and resources, and encourage them to take the risk of contributing to the codebase. This encouragement of new contributors feeds directly into the next generation of technical influencers and leaders. Eva Howe, co-founder and Legal Counsel for Aspect, addressed the current lack of diversity in the community pipeline.

“I’d like to see more trainings like the Bazel Community Day. Trainings serve two purposes:

1. You can blend in, start talking to someone in the background, and form connections.
2. We can give a good first educational experience. It needs to be a welcoming space.”

                     Eva Howe, Legal Counsel – Aspect Dev

    In addition to industry trainings, the audience and panel brought up bootcamps and university classes as rich sources to find and promote diversity, though they cautioned that it takes active, ongoing effort to maintain an environment that diverse candidates are willing to stay in. There are fewer opportunities to take risks as part of a historically excluded group, and the feeling that you have to succeed for everyone who looks like you creates a high-pressure environment that is worse for learning outcomes.

    To bypass this pipeline problem, we can recruit promising candidates and sponsor them through getting the necessary experience on the job. Lyra Levin, Bazel’s internal technical writer at Google, spoke to this process of incentivizing and recognizing contributions outside the codebase, as a way to both encourage necessary glue work, and pull people into tech from parallel careers more hospitable to underrepresented candidates. And Sophia Vargas, Program Manager in Google’s OSPO (Open Source Programs Office), also offered insight regarding contributions.

    “If someone gives you an introduction to another person, recognize that. Knowing a system of people is work. Knowing where to find answers is work. Saying I’m going to be available and responding to emails is work. If you see a conversation where someone is getting unhelpful pushback, jump in and moderate it. Reward those who contribute by creating a space that can be collaborative and supportive.”

                         Lyra Levin, Technical Writer

    “Create ways to recognize non-code contributions. One example is a markdown file describing other forms of contribution, especially in cases that do not generate activity attached to a name on GitHub.”

    An audience member agreed that for a newcomer's first few PRs, a positive experience is critical for building community trust. And indeed, open source is all about building trust. So how do we go about building trust? What should we do differently? Radhika Advani, Bazel’s product manager at Google, suggests that the key is to:

    “Make some amazing allies. Be kind and engage with empathy. Take your chances—there are lots of good people out there. You have to come from a place of vulnerability.”

                        - Radhika Advani, Bazel Product Manager

    Vargas also added some ideas for how to be an “amazing ally” and sponsor the careers of those around you. One is creating safe spaces for these conversations, because not everyone is bold enough to speak up or ask for support, and raising issues in a public forum can be intimidating. Making yourself accessible and providing anonymous forms for suggestions or feedback can also serve as opportunities to educate yourself and to increase awareness of diverging opinions.

    An audience member stated that recognizing an action that is alienating to a member of your group—even just acknowledging their experience or saying something to the room—can be very powerful in creating a sense of safety and belonging. And another said that when those in leadership positions are forthright about the limits of their knowledge, it gives people the freedom to not know everything.

    So to Minu’s question, what should we do to improve Bazel’s culture?

    Helen: Create a governance group on Slack to ensure posts are complying with the community code of conduct guidelines. Review how this is managed for other OSS communities.

    Sophia: Institutionalize mentorship; have someone else review what you’ve done and give you the confidence to push a change. Nurture people. We need to connect new and established members of the community.

    Lyra: Recruit people in parallel career paths with higher representation. Give them sponsorship to transition to tech.

    Radhika: Be more inclusive. All the jargon can get overwhelming, so let’s consider how we can make things simpler, including with non-technical metaphors.

    Eva: Consider what each of us can do to make the experience for people onboarding better.

    There are more ways to be a Bazel contributor than raising PRs. Being courageous, vulnerable and open contributes to the culture that creates the code. Maintainers: practice empathy and remember the human on the other side of the screen. Be a coach and a mentor, knowing that you are opening the door for more people to build the product you love, with you. Developers: be brave and see the opportunities to accept sponsorship into the space. Bazel is for everyone.

    By Lyra Levin, Minu Puranik, Keerthana Kumar, Radhika Advani, and Sophia Vargas – Bazel Panel

    Mentoring future women Experts

    Posted by Justyna Politanska-Pyszko

    Google Developers Experts is a global community of developers, engineers and thought leaders who passionately share their technical knowledge with others.

    Becoming a Google Developers Expert is no easy task. First, you need strong skills in one of the technical areas: Android, Kotlin, Google Cloud, Machine Learning, Web Technologies, Angular, Firebase, Google Workspace, Flutter, or others. You also need a track record of sharing your knowledge, be it via conference talks, your personal blog, YouTube videos, or some other form. Finally, you need one more thing: the courage to approach an existing Expert or a Google employee and ask them to support your application.

    It’s not easy, but it’s worth it. Joining the Experts community comes with many opportunities: direct access to product teams at Google, invitations to events and projects, and entry into a network of technology enthusiasts from around the world.

    On a quest to make these opportunities available to a diverse group of talented people globally, we launched “Road to GDE”: a mentoring program to support women in their journey to become Google Developers Experts.

    Mentors and Mentees meeting online


    For three months, 17 mentors from the Experts community worked with their mentees on topics like public speaking, building a professional portfolio, and boosting confidence. What did the mentees learn during the program?

    Glafira Zhur: No time for fear! With my Mentor’s help, I got invited to speak at several events, two of which are already scheduled for the summer. I created my speaker portfolio and made new friends in the community. It was a great experience.

    Julia Miocene: I learned that I shouldn't be afraid to do what someone else has already done. Even if there are talks or articles on some topic already, I will do them differently anyway. And for people, it’s important to see things from different perspectives. Just do what you like and don’t be afraid.

    Bhavna Thacker: I got motivated to continue my community contributions, and I learnt how to promote my work and reach more developers so that they can benefit from my efforts. Overall, it was an excellent program. Thanks to all the organisers and my mentor, Garima Jain. I am definitely looking forward to applying to the Experts program soon!

    Road to GDE mentee - Glafira Zhur and her mentor - Natalia Venditto.


    Congratulations to all 17 mentees who completed the Program: Maris Botero, Clarissa Loures, Layale Matta, Bhavika Panara, Stefanie Urchs, Alisa Tsvetkova, Glafira Zhur, Wafa Waheeda Syed, Helen Kapatsa, Karin-Aleksandra Monoid, Sveta Krivosheeva, Ines Akrap, Julia Miocene, Vandana Srivastava, Anna Zharkova, Bhavana Thacker, Debasmita Sarkar

    And to their mentors - all members of the Google Developers Experts community: Lesly Zerna, Bianca Ximenes, Kristina Simakova, Sayak Paul, Karthik Muthuswamy, Jeroen Meijer, Natalia Venditto, Martina Kraus, Merve Noyan, Annyce Davis, Majid Hajian, James Milner, Debbie O'Brien, Niharika Arora, Nicola Corti, Garima Jain, Kamal Shree Soundirapandian

    To learn more about the Experts program, follow us on Twitter, LinkedIn, or Medium.

    Announcing the 2021 Research Scholar Program Recipients

    In March 2020 we introduced the Research Scholar Program, an effort focused on developing collaborations with new professors and encouraging the formation of long-term relationships with the academic community. In November we opened the inaugural call for proposals for this program, which was received with enthusiastic interest from faculty who are working on cutting edge research across many research areas in computer science, including machine learning, human-computer interaction, health research, systems and more.

    Today we are pleased to announce that in this first year of the program we have granted 77 awards, which included 86 principal investigators representing 15+ countries and over 50 universities. Of the 86 award recipients, 43% identify as part of a historically marginalized group within technology. Please see the full list of 2021 recipients on our web page, as well as in the list below.

    We offer our congratulations to this year’s recipients, and look forward to seeing what they achieve!

    Algorithms and Optimization
    Alexandros Psomas, Purdue University
       Auction Theory Beyond Independent, Quasi-Linear Bidders
    Julian Shun, Massachusetts Institute of Technology
       Scalable Parallel Subgraph Finding and Peeling Algorithms
    Mary Wootters, Stanford University
       The Role of Redundancy in Algorithm Design
    Pravesh K. Kothari, Carnegie Mellon University
       Efficient Algorithms for Robust Machine Learning
    Sepehr Assadi, Rutgers University
       Graph Clustering at Scale via Improved Massively Parallel Algorithms

    Augmented Reality and Virtual Reality
    Srinath Sridhar, Brown University
       Perception and Generation of Interactive Objects

    Geo
    Miriam E. Marlier, University of California, Los Angeles
       Mapping California’s Compound Climate Hazards in Google Earth Engine
    Suining He, University of Connecticut
       Fairness-Aware and Cross-Modality Traffic Learning and Predictive Modeling for Urban Smart Mobility Systems

    Human Computer Interaction
    Arvind Satyanarayan, Massachusetts Institute of Technology
       Generating Semantically Rich Natural Language Captions for Data Visualizations to Promote Accessibility
    Dina El-Zanfaly, Carnegie Mellon University
       In-the-making: An intelligence mediated collaboration system for creative practices
    Katharina Reinecke, University of Washington
       Providing Science-Backed Answers to Health-related Questions in Google Search
    Misha Sra, University of California, Santa Barbara
       Hands-free Game Controller for Quadriplegic Individuals
    Mohsen Mosleh, University of Exeter Business School
       Effective Strategies to Debunk False Claims on Social Media: A large-scale digital field experiments approach
    Tanushree Mitra, University of Washington
       Supporting Scalable Value-Sensitive Fact-Checking through Human-AI Intelligence

    Health Research
    Catarina Barata, Instituto Superior Técnico, Universidade de Lisboa
       DeepMutation – A CNN Model To Predict Genetic Mutations In Melanoma Patients
    Emma Pierson, Cornell Tech, the Jacobs Institute, Technion-Israel Institute of Technology, and Cornell University
       Using cell phone mobility data to reduce inequality and improve public health
    Jasmine Jones, Berea College
       Reachout: Co-Designing Social Connection Technologies for Isolated Young Adults
    Mojtaba Golzan, University of Technology Sydney, Jack Phu, University of New South Wales
       Autonomous Grading of Dynamic Blood Vessel Markers in the Eye using Deep Learning
    Serena Yeung, Stanford University
       Artificial Intelligence Analysis of Surgical Technique in the Operating Room

    Machine Learning and Data Mining
    Aravindan Vijayaraghavan, Northwestern University, Sivaraman Balakrishnan, Carnegie Mellon University
       Principled Approaches for Learning with Test-time Robustness
    Cho-Jui Hsieh, University of California, Los Angeles
       Scalability and Tunability for Neural Network Optimizers
    Golnoosh Farnadi, University of Montreal, HEC Montreal/MILA
       Addressing Algorithmic Fairness in Decision-focused Deep Learning
    Harrie Oosterhuis, Radboud University
       Search and Recommendation Systems that Learn from Diverse User Preferences
    Jimmy Ba, University of Toronto
       Model-based Reinforcement Learning with Causal World Models
    Nadav Cohen, Tel-Aviv University
       A Dynamical Theory of Deep Learning
    Nihar Shah, Carnegie Mellon University
       Addressing Unfairness in Distributed Human Decisions
    Nima Fazeli, University of Michigan
       Semi-Implicit Methods for Deformable Object Manipulation
    Qingyao Ai, University of Utah
       Metric-agnostic Ranking Optimization
    Stefanie Jegelka, Massachusetts Institute of Technology
       Generalization of Graph Neural Networks under Distribution Shifts
    Virginia Smith, Carnegie Mellon University
       A Multi-Task Approach for Trustworthy Federated Learning

    Mobile
    Aruna Balasubramanian, State University of New York – Stony Brook
       AccessWear: Ubiquitous Accessibility using Wearables
    Tingjun Chen, Duke University
       Machine Learning- and Optical-enabled Mobile Millimeter-Wave Networks

    Machine Perception
    Amir Patel, University of Cape Town
       WildPose: 3D Animal Biomechanics in the Field using Multi-Sensor Data Fusion
    Angjoo Kanazawa, University of California, Berkeley
       Practical Volumetric Capture of People and Scenes
    Emanuele Rodolà, Sapienza University of Rome
       Fair Geometry: Toward Algorithmic Debiasing in Geometric Deep Learning
    Minchen Wei, Hong Kong Polytechnic University
       Accurate Capture of Perceived Object Colors for Smart Phone Cameras
    Mohsen Ali and Izza Aftab, Information Technology University of the Punjab, Pakistan
       Is Economics From Afar Domain Generalizable?
    Vineeth N Balasubramanian, Indian Institute of Technology Hyderabad
       Bridging Perspectives of Explainability and Adversarial Robustness
    Xin Yu and Linchao Zhu, University of Technology Sydney
       Sign Language Translation in the Wild

    Networking
    Aurojit Panda, New York University
       Bertha: Network APIs for the Programmable Network Era
    Cristina Klippel Dominicini, Instituto Federal do Espirito Santo
       Polynomial Key-based Architecture for Source Routing in Network Fabrics
    Noa Zilberman, University of Oxford
       Exposing Vulnerabilities in Programmable Network Devices
    Rachit Agarwal, Cornell University
       Designing Datacenter Transport for Terabit Ethernet

    Natural Language Processing
    Danqi Chen, Princeton University
       Improving Training and Inference Efficiency of NLP Models
    Derry Tanti Wijaya, Boston University, Anietie Andy, University of Pennsylvania
       Exploring the evolution of racial biases over time through framing analysis
    Eunsol Choi, University of Texas at Austin
       Answering Information Seeking Questions In The Wild
    Kai-Wei Chang, University of California, Los Angeles
       Certified Robustness against language differences in Cross-Lingual Transfer
    Mohohlo Samuel Tsoeu, University of Cape Town
       Corpora collection and complete natural language processing of isiXhosa, Sesotho and South African Sign languages
    Natalia Diaz Rodriguez, University of Granada (Spain) + ENSTA, Institut Polytechnique Paris, Inria. Lorenzo Baraldi, University of Modena and Reggio Emilia
       SignNet: Towards democratizing content accessibility for the deaf by aligning multi-modal sign representations

    Other Research Areas
    John Dickerson, University of Maryland – College Park, Nicholas Mattei, Tulane University
       Fairness and Diversity in Graduate Admissions
    Mor Nitzan, Hebrew University
       Learning representations of tissue design principles from single-cell data
    Nikolai Matni, University of Pennsylvania
       Robust Learning for Safe Control

    Privacy
    Foteini Baldimtsi, George Mason University
       Improved Single-Use Anonymous Credentials with Private Metadata Bit
    Yu-Xiang Wang, University of California, Santa Barbara
       Stronger, Better and More Accessible Differential Privacy with autodp

    Quantum Computing
    Ashok Ajoy, University of California, Berkeley
       Accelerating NMR spectroscopy with a Quantum Computer
    John Nichol, University of Rochester
       Coherent spin-photon coupling
    Jordi Tura i Brugués, Leiden University
       RAGECLIQ - Randomness Generation with Certification via Limited Quantum Devices
    Nathan Wiebe, University of Toronto
       New Frameworks for Quantum Simulation and Machine Learning
    Philipp Hauke, University of Trento
       ProGauge: Protecting Gauge Symmetry in Quantum Hardware
    Shruti Puri, Yale University
       Surface Code Co-Design for Practical Fault-Tolerant Quantum Computing

    Structured Data, Extraction, Semantic Graph, and Database Management
    Abolfazl Asudeh, University Of Illinois, Chicago
       An end-to-end system for detecting cherry-picked trendlines
    Eugene Wu, Columbia University
       Interactive training data debugging for ML analytics
    Jingbo Shang, University of California, San Diego
       Structuring Massive Text Corpora via Extremely Weak Supervision

    Security
    Chitchanok Chuengsatiansup and Markus Wagner, University of Adelaide
       Automatic Post-Quantum Cryptographic Code Generation and Optimization
    Elette Boyle, IDC Herzliya, Israel
       Cheaper Private Set Intersection via Advances in "Silent OT"
    Joseph Bonneau, New York University
       Zeroizing keys in secure messaging implementations
    Yu Feng , University of California, Santa Barbara, Yuan Tian, University of Virginia
       Exploit Generation Using Reinforcement Learning

    Software engineering and Programming Languages
    Kelly Blincoe, University of Auckland
       Towards more inclusive software engineering practices to retain women in software engineering
    Fredrik Kjolstad, Stanford University
       Sparse Tensor Algebra Compilation to Domain-Specific Architectures
    Milos Gligoric, University of Texas at Austin
       Adaptive Regression Test Selection
    Sarah E. Chasins, University of California, Berkeley
       If you break it, you fix it: Synthesizing program transformations so that library maintainers can make breaking changes

    Systems
    Adwait Jog, College of William & Mary
       Enabling Efficient Sharing of Emerging GPUs
    Heiner Litz, University of California, Santa Cruz
       Software Prefetching Irregular Memory Access Patterns
    Malte Schwarzkopf, Brown University
       Privacy-Compliant Web Services by Construction
    Mehdi Saligane, University of Michigan
       Autonomous generation of Open Source Analog & Mixed Signal IC
    Nathan Beckmann, Carnegie Mellon University
       Making Data Access Faster and Cheaper with Smarter Flash Caches
    Yanjing Li, University of Chicago
       Resilient Accelerators for Deep Learning Training Tasks

    Source: Google AI Blog


    #ShareTheMicInCyber: Brooke Pearson


    In an effort to showcase the breadth and depth of Black+ contributions to security and privacy fields, we’ve launched a profile series that aims to elevate and celebrate the Black+ voices in security and privacy we have here at Google.



    Brooke Pearson manages the Privacy Sandbox program at Google, and her team's mission is to “Create a thriving web ecosystem that is respectful of users and private by default.” Brooke lives this mission, and it is what makes her an invaluable asset to the Chrome team and Google.

    In addition to her work advancing the fields of security and privacy, she is a fierce advocate for women in the workplace and for elevating the voices of her fellow Black+ practitioners in security and privacy. She has participated and supported the #ShareTheMicInCyber campaign since its inception.

    Brooke is passionate about delivering privacy solutions that work and making browsing the web an inherently more private experience for users around the world.

    Why do you work in security or privacy?

    I work in security and privacy to protect people and their personal information. It’s that simple. Security and privacy are two issues that are core to shaping the future of technology and how we interact with each other over the Internet. The challenges are immense, and yet the ability to impact positive change is what drew me to the field.

    Tell us a little bit about your career journey to Google

    My career journey into privacy does not involve traditional educational training in the field. In fact, my background is in public policy and communications, but when I transitioned to the technology industry, I realized that the most pressing policy issues for companies like Google surround the nascent field of privacy and the growing field of security.

    After I graduated from college at Azusa Pacific University, I was the recipient of a Fulbright scholarship to Macau, where I spent one year studying Chinese and teaching English. I then moved to Washington D.C. where I initially worked for the State Department while finishing my graduate degree in International Public Policy at George Washington University. I had an amazing experience in that role and it afforded me some incredible networking opportunities and the chance to travel the world, as I worked in Afghanistan and Central Asia.

    After about five years in the public sector, I joined Facebook as a Program Manager for the Global Public Policy team, initially focused on social good programs like Safety Check and Charitable Giving. Over time, I could see that the security team at Facebook was focused on fighting the proliferation of misinformation, and this called to me as an area where I could put my expertise in communication and geopolitical policy to work. So I switched teams and I've been in the security and privacy field ever since, eventually for Uber and now with Google's Chrome team.

    At Google, privacy and security are at the heart of everything we do. Chrome is tackling some of the world's biggest security and privacy problems, and everyday my work impacts billions of people around the world. Most days, that's pretty daunting, but every day it's humbling and inspiring.

    What is your security or privacy "soapbox"?

If we want to encourage people to engage in more secure behavior, we have to make it easy to understand and easy to act on. Every day we strive to make users safer with Google by implementing security and privacy controls that are effective and easy to use and understand.

    As a program manager, I’ve learned that it is almost always more effective to offer a carrot than a stick, when it comes to security and privacy hygiene. I encourage all of our users to visit our Safety Center to learn all the ways Google helps you stay safe online, every day.

    If you are interested in following Brooke’s work here at Google and beyond, please follow her on Twitter @brookelenet. We will be bringing you more profiles over the coming weeks and we hope you will engage with and share these with your network.

    If you are interested in participating or learning more about #ShareTheMicInCyber, click here.

    #ShareTheMicInCyber: Rob Duhart

    Posted by Matt Levine, Director, Risk Management

    In an effort to showcase the breadth and depth of Black+ contributions to security and privacy fields, we’ve launched a series in support of #ShareTheMicInCyber that aims to elevate and celebrate the Black+ voices in security and privacy we have here at Google.

Today, we will hear from Rob Duhart, who leads a cross-functional team at Google that aims to enable and empower all of our products, like Chrome, Android and Maps, to mature their security posture.

Rob’s commitment to making the internet a safer place extends far beyond his work at Google. He is a member of the Cyber Security Executive Education Advisory Board of Directors at Washington University in St. Louis, where he helps craft the future of cyber security executive education globally. Rob also sits on the board of the EC-Council and has founded chapters of the International Consortium of Minority Cybersecurity Professionals (ICMCP) across the country.

    Rob is passionate about securing the digital world and supporting Black+, women, and underrepresented minorities across the technology landscape.


    Why do you work in security or privacy?

    I have been in the cyber world long enough to know how important it is for security and privacy to be top of mind and focus for organizations of all shapes and sizes. My passion lies in keeping users and Googlers safe. One of the main reasons I joined Google is its commitment to security and privacy.


    Tell us a little bit about your career journey to Google...

    I was fortunate to begin my cybersecurity career in the United States Government working at the Department of Energy, FBI, and the Intelligence Community. I transitioned to the private sector in 2017 and have been fortunate to lead talented security teams at Cardinal Health and Ford Motor Company.

    My journey into cybersecurity was not traditional. I studied Political Science at Washington University in St. Louis, completed graduate education at George Mason University and Carnegie Mellon University. I honed my skills and expertise in this space through hands on experience and with the support of many amazing mentors. It has been the ride of a lifetime and I look forward to what is next.

    To those thinking about making a career change or are just starting to get into security, my advice is don’t be afraid to ask for help.


    What is your security or privacy "soapbox"?

    At Google, we implement a model known as Federated Security, where our security teams partner across our Product Areas to enable security program maturity Google wide. Our Federated Security team believes in harnessing the power of relationship, engagement, and community to drive maturity into every product. Security and privacy are team sports – it takes business leaders and security leaders working together to secure and protect our digital and physical worlds.

    If you are interested in following Rob’s work here at Google and beyond, please follow him on Twitter @RobDuhart. We will be bringing you more profiles over the coming weeks and we hope you will engage with and share these with your network.

    If you are interested in participating or learning more about #ShareTheMicInCyber, click here.

    Google’s initiative for more inclusive language in open source projects

    Certain terms in open source projects reinforce negative associations and unconscious biases. At Google, we want our language to be inclusive. The Google Open Source Programs Office (OSPO) created and posted a policy for new Google-run projects to remove the terms “slave,” “whitelist,” and “blacklist,” and replace them with more inclusive alternatives, such as “replica,” “allowlist,” and “blocklist.” OSPO required that new projects follow this policy beginning October 2020, and has plans to enforce these changes on more complex, established projects beginning in 2021. 


    To ensure this policy was implemented in a timely manner, a small team within OSPO and Developer Relations orchestrated tool and policy updates and an open-source specific fix-it, a virtual event where Google engineers dedicate time to fixing a project. The fix-it focused on existing projects and non-breaking changes, but also served as a reminder that inclusivity is an important part of our daily work. Now that the original fix-it is over, the policy remains and the projects continue.

    For more information on why inclusive language matters to us, you can check out the Google Developer Documentation Style Guide, which contains a section on word choice with useful, clearer alternatives. Regardless of the phrases used, it is necessary to understand that certain terms reinforce biases and that replacing them is a positive step, both in creating a more welcoming atmosphere for everyone and in being more technically accurate. In short, words matter.
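A policy like the one described above is straightforward to audit mechanically. The following is a minimal sketch (a hypothetical helper, not Google's actual tooling) that scans text for the three terms named in the policy and suggests the inclusive replacements it prescribes:

```python
import re

# Replacement map taken directly from the OSPO policy described above.
REPLACEMENTS = {
    "slave": "replica",
    "whitelist": "allowlist",
    "blacklist": "blocklist",
}

def find_noninclusive_terms(text):
    """Return (line_number, term, suggestion) tuples for each match."""
    pattern = re.compile(r"\b(" + "|".join(REPLACEMENTS) + r")\b", re.IGNORECASE)
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for match in pattern.finditer(line):
            term = match.group(1).lower()
            findings.append((lineno, term, REPLACEMENTS[term]))
    return findings

sample = "Add the host to the whitelist.\nPromote the slave to primary."
for lineno, term, suggestion in find_noninclusive_terms(sample):
    print(f"line {lineno}: replace '{term}' with '{suggestion}'")
```

In practice a fix-it would pair a scan like this with manual review, since some occurrences (external API names, protocol constants) cannot be changed without breaking compatibility.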


    By Erin Balabanian, Open Source Compliance.

    Announcing the Recipients of the 2020 Award for Inclusion Research

    At Google, it is our ongoing goal to support faculty who are conducting innovative research that will have positive societal impact. As part of that goal, earlier this year we launched the Award for Inclusion Research program, a global program that supports academic research in computing and technology addressing the needs of underrepresented populations. The Award for Inclusion Research program allows faculty and Google researchers an opportunity to partner on their research initiatives and build new and constructive long-term relationships.

    We received 100+ applications from over 100 universities, globally, and today we are excited to announce the 16 proposals chosen for funding, focused on an array of topics around diversity and inclusion, algorithmic bias, education innovation, health tools, accessibility, gender bias, AI for social good, security, and social justice. The proposals include 25 principal investigators who focus on making the community stronger through their research efforts.

    Congratulations to this year’s recipients:

    "Human Centred Technology Design for Social Justice in Africa"
    Anicia Peters (University of Namibia) and Shaimaa Lazem (City for Scientific Research and Technological Applications, Egypt)

    "Modern NLP for Regional and Dialectal Language Variants"
    Antonios Anastasopoulos (George Mason University)

    "Culturally Relevant Collaborative Health Tracking Tools for Motivating Heart-Healthy Behaviors Among African Americans"
    Aqueasha Martin-Hammond (Indiana University - Purdue University Indianapolis) and Tanjala S. Purnell (Johns Hopkins University)

    "Characterizing Energy Equity in the United States"
    Destenie Nock and Constantine Samaras (Carnegie Mellon University)

    "Developing a Dialogue System for a Culturally-Responsive Social Programmable Robot"
    Erin Walker (University of Pittsburgh) and Leshell Hatley (Coppin State University)

    "Eliminating Gender Bias in NLP Beyond English"
    Hinrich Schuetze (LMU Munich)

    "The Ability-Based Design Mobile Toolkit: Enabling Accessible Mobile Interactions through Advanced Sensing and Modeling"
    Jacob O. Wobbrock (University of Washington)

    "Mutual aid and community engagement: Community-based mechanisms against algorithmic bias"
    Jasmine McNealy (University of Florida)

    "Empowering Syrian Girls through Culturally Sensitive Mobile Technology and Media Literacy"
    Karen Elizabeth Fisher (University of Washington) and Yacine Ghamri-Doudane (University of La Rochelle)

    "Broadening participation in data science through examining the health, social, and economic impacts of gentrification"
    Latifa Jackson (Howard University) and Hasan Jackson (Howard University)

    "Understanding How Peer and Near Peer Mentors co-Facilitating the Active Learning Process of Introductory Data Structures Within an Immersive Summer Experience Effected Rising Sophomore Computer Science Student Persistence and Preparedness for Careers in Silicon Valley"
    Legand Burge (Howard University) and Marlon Mejias (University of North Carolina at Charlotte)

    "Who is Most Likely to Advocate for this Case? A Machine Learning Approach"
    Maria De-Arteaga (University of Texas at Austin)

    "Contextual Rendering of Equations for Visually Impaired Persons"
    Meenakshi Balakrishnan (Indian Institute of Technology Delhi, India) and Volker Sorge (University of Birmingham)

    "Measuring the Cultural Competence of Computing Students and Faculty Nationwide to Improve Diversity, Equity, and Inclusion"
    Nicki Washington (Duke University)

    "Designing and Building Collaborative Tools for Mixed-Ability Programming Teams"
    Steve Oney (University of Michigan)

    "Iterative Design of a Black Studies Research Computing Initiative through `Flipped Research’"
    Timothy Sherwood and Sharon Tettegah (University of California, Santa Barbara)

    Source: Google AI Blog


    Exploring New Ways to Support Faculty Research



    For the past 15 years, the Google Faculty Research Award Program has helped support world-class technical research in computer science, engineering, and related fields, funding over 2,000 academics at ~400 universities in 50+ countries since its inception. As Google Research continues to evolve, we continually explore new ways to improve our support of the broader research community, specifically how to support new faculty while also strengthening our existing collaborations.

    To achieve this goal, we are introducing two new programs aimed at diversifying our support across a larger community. Moving forward, these programs will replace the Faculty Research Award program, allowing us to better engage with, and support, up-and-coming researchers:

    The Research Scholar Program supports early-career faculty (those who have received their doctorate within the past 7 years) who are doing impactful research in fields relevant to Google, and is intended to help to develop new collaborations and encourage long term relationships. This program will be open for applications in Fall 2020, and we encourage submissions from faculty at universities around the world.

    We will also be piloting the Award for Inclusion Research Program, which will recognize and support research that addresses the needs of historically underrepresented populations. This summer we will invite faculty—both directly and via their institutions—to submit their research proposals for consideration later this year, and we will notify award recipients by year's end.

    These programs will complement our existing support of academic research around the world, including the Latin America Research Awards, the PhD Fellowship Program, the Visiting Researcher Program and research grant funding. To explore other ways we are supporting the research community, please visit this page. As always, we encourage faculty to review our publication database for overlapping research interests for collaboration opportunities, and apply to the above programs. We look forward to working with you!

    Source: Google AI Blog