
This Googler’s team is making shopping more inclusive

There’s a lot to love about online shopping: It’s fast, it’s easy and there are a ton of options to choose from. But there’s one obvious challenge — you can’t try anything on. This is something Google product manager Debbie Biswas, a tech industry veteran and startup founder herself, noticed. “Historically, the fashion industry only celebrates people of a certain size and skin color,” she says. “This was something I wanted to change.”

Debbie grew up in India and moved to the U.S. after she graduated college. “I started a company in the women's apparel space, where I learned to solve user pain points around shopping for clothes, sizing and styling.” While working on her startup, Debbie realized how hard shopping was for women, including herself — the models in the images didn’t show her how something would look on her. 

“When I got an opportunity to work at Google Shopping, I realized I could solve so many of these problems at scale using the best AI/ML tech in the industry,” she says. “As a woman of color, and someone who doesn't conform to the ‘traditional beautiful size,’ I feel very motivated to solve apparel shopping problems for people like me.”

A look at Style AI in action.

This is what Debbie and her team set out to accomplish with Style AI, a Shopping feature that helps people see how a product looks on different body types and offers styling advice. Style AI works by using a machine learning algorithm to look at a specific product and visually understand it. “So if someone searches ‘gingham long sleeve shirt,’ Style AI will look at images of long-sleeved gingham shirts, apply our vision recognition technology, understand things like the pattern and the sleeve length, and show users fashions that might interest them.” To make sure Style AI was inclusive of different shapes, sizes and skin tones, Debbie consulted with Google’s Product Fairness, or ProFair, team. ProFair helps teams at Google apply the AI Principles by investigating fairness issues. Together, they find ways to build inclusive services, strengthen equity in data labels, promote fairness and combat bias in AI.
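As a rough sketch of the idea (all names and the toy catalog below are hypothetical, not Google’s actual system), the matching step can be thought of as checking whether the attributes recognized in a product’s image cover the attributes in the shopper’s query:

```python
# Illustrative sketch only: hypothetical names and a toy catalog, not Google's system.
# In a real pipeline, the "tags" would come from a trained vision model run on
# product photos; here they are hard-coded to show the matching step.

QUERY_ATTRIBUTES = {"gingham", "long sleeve"}

CATALOG = [
    {"id": "shirt-001", "tags": {"gingham", "long sleeve", "cotton"}},
    {"id": "shirt-002", "tags": {"striped", "short sleeve"}},
    {"id": "shirt-003", "tags": {"gingham", "long sleeve", "relaxed fit"}},
]

def match_products(query_attributes, catalog):
    """Return products whose recognized visual attributes cover the query."""
    return [item for item in catalog if query_attributes <= item["tags"]]

for product in match_products(QUERY_ATTRIBUTES, CATALOG):
    print(product["id"])  # prints shirt-001 and shirt-003
```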

ProFair held sessions where everyone involved in the project could look for “fairness issues,” which helped Debbie’s team adjust how they designed Style AI. And there was much to consider. “First, we need to be careful of what data we train a model on. If you tell a machine that a certain size and skin color is what it needs to look for, it will,” Debbie explains. “So as responsible product owners, we need to make sure we train it the right way. Even after this, a machine can make many mistakes unknowingly — for example, not realizing that a certain style can be very offensive in one culture and be totally cool in another.” 

For instance, before launching in countries like India and Brazil, ProFair held local focus groups in collaboration with Google’s Product Inclusion team. Debbie says this helped her team find diverse images and clothing for these specific demographics. Debbie’s — and the entire team’s — ultimate goal is that shoppers will feel like they’re seeing themselves when they look for clothing. “Looking at stock product images does not help you decide on your purchase,” she says. “We just always think about what people told us while we were building Style AI: ‘I want to see the product on someone like me!’”

These researchers are bringing AI to farmers

“Farmers feed the entire world — so how might we support them to be resilient and build sustainable systems that also support global food security?” It’s a question that Diana Akrong found herself asking last year. Diana is a UX researcher based in Accra, Ghana, and the founding member of Google’s Accra UX team.

Across the world, her manager, Dr. Courtney Heldreth, was equally interested in answering this question. Courtney is a social psychologist and a staff UX researcher based in Seattle, and both women work as part of Google’s People + Artificial Intelligence Research (PAIR) group. “Looking back on history, we can see how the industrial revolution played a significant role in creating global inequality,” she says. “It set most of Western Europe onto a path of economic dominance that was then followed by both military and political dominance.” Courtney and Diana teamed up on an exploratory effort focused on how AI can help better the lives of small, local farming communities in the Global South. They and their team want to understand what farmers need, their practices, value systems, what their social lives are like — and make sure that Google products reflect these dynamics.

One result of their work is a recently published research paper. The paper — written alongside their colleagues Dr. Jess Holbrook at Google and Dr. Norman Makoto Su of Indiana University and published in the ACM Interactions trade journal — dives into why we need farmer-centered AI research, and what it could mean not just for farmers, but for everyone they feed. I recently took some time to learn more about their work.


How would you explain your job to someone who isn't in tech?

Courtney: I would say I’m a researcher trying to understand underserved and historically marginalized users’ lives and needs so we can create products that work better for them. 

Diana: I’m a researcher who looks at how people interact with technology. My superpower is my curiosity and it’s my mission to understand and advocate for user needs, explore business opportunities and share knowledge.


What’s something on your mind right now? 

Diana: Because of COVID-19, there’s the threat of a major food crisis in India and elsewhere. We’re wondering how we can work with small farms as well as local consumers, policymakers, agricultural workers, agribusiness owners and NGOs to solve this problem.

Agriculture is very close to my heart, personally. Prior to joining Google, I spent a lot of time learning from smallholder farmers across my country and helping design concepts to address their needs. 

“Farmers feed the entire world — so how might we support them to be resilient and build sustainable systems that also support global food security?” Diana Akrong
UX researcher, Google


Courtney: I’ve been thinking about how AI can be seen as this magical, heroic thing, but there are also many risks to using it in places where there aren’t laws to protect people. When I think about Google’s AI Principles — be socially beneficial, be accountable to people, avoid reinforcing bias, prioritize safety — those things define what projects I want to work on. It’s also why my colleague Tabitha Yong and I developed a set of best practices for designing more equitable AI products.


Can you tell me more about your paper, “What Does AI Mean for Smallholder Farmers? A Proposal for Farmer-Centered AI Research,” recently published in ACM Interactions?

Courtney: The impact and failures of AI are often very western and U.S.-centric. We’re trying to think about how to make this more fair and inclusive for communities with different needs around the globe. For example, in our farmer-centered AI research, we know that most existing AI solutions are designed for large farms in the developed world. However, many farmers in the Global South live and work in rural areas, which trail behind urban areas in terms of connectivity and digital adoption. By focusing on the daily realities of these farmers, we can better understand different perspectives, especially those of people who don’t live in the U.S. and Europe, so that Google’s products work for everyone, everywhere.

Why did you want to work at Google?

Diana: I see Google as home to teams with diverse experiences and skills who work collaboratively to tackle complex, important issues that change real people’s lives. I’ve thrived here because I get to work on projects I care about and play a critical role in growing the UX community here in Ghana.

Courtney: I chose Google because we work on the world's hardest problems. Googlers are fearless and the reach of Google’s products and services is unprecedented. As someone who comes from an underrepresented group, I never thought I would work here. To be here at this moment is so important to me, my community and my family. When I look at issues I care about the most — marginalized and underrepresented communities — the work we do plays a critical role in preventing algorithmic bias, bridging the digital divide and lessening these inequalities.


How have you seen your research help real people? 

Courtney: In 2018, we worked with Titi Akinsanmi, Google’s Policy and Government Relations Lead for West and Francophone Africa, and PAIR Co-lead and Principal Research Scientist Fernanda Viégas on a report about AI in Nigeria. Since then, Nigeria’s Ministry of Science and Technology has reached out to Google to help form a strategy around AI. We’ve seen government bodies in sub-Saharan Africa use this paper as a roadmap to develop their own responsible AI policies.


How should aspiring AI thinkers and future technologists prepare for a career in this field?

Diana: My main advice? Start with people and their needs. A digital solution or AI may not be necessary to solve every problem. The PAIR Guidebook is a great reference for best practices and examples for designing with AI.

Maysam Moussalem teaches Googlers human-centered AI

Originally, Maysam Moussalem dreamed of being an architect. “When I was 10, I looked up to see the Art Nouveau dome over the Galeries Lafayette in Paris, and I knew I wanted to make things like that,” she says. “Growing up between Austin, Paris, Beirut and Istanbul just fed my love of architecture.” But she found herself often talking to her father, a computer science (CS) professor, about what she wanted in a career. “I always loved art and science and I wanted to explore the intersections between fields. CS felt broader to me, and so I ended up there.”

While in grad school for CS, her advisor encouraged her to apply for a National Science Foundation Graduate Research Fellowship. “Given my lack of publications at the time, I wasn’t sure I should apply,” Maysam remembers. “But my advisor gave me some of the best advice I’ve ever received: ‘If you try, you may not get it. But if you don’t try, you definitely won’t get it.’” Maysam received the scholarship, which supported her throughout grad school. “I’ll always be grateful for that advice.” 

Today, Maysam works in AI, in Google’s Machine Learning Education division, and is also the co-author and editor-in-chief of the People + AI Research (PAIR) Guidebook. She’s also hosting a Google I/O session on “Building trusted AI products,” which you can watch live at 9 a.m. PT on Thursday, May 20, as part of Google Design’s I/O Agenda. We recently took some time to talk to Maysam about what landed her at Google, and her path toward responsible innovation.

How would you explain your job to someone who isn't in tech?

I create different types of training, like workshops and labs for Googlers who work in machine learning and data science. I also help create guidebooks and courses that people who don’t work at Google use.

What’s something you didn’t realize would help you in your career one day?

I didn’t think that knowing seven languages would come in handy for my work here, but it did! When I was working on the externalization of the Machine Learning Crash Course, I was so happy to be able to review modules and glossary entries for the French translation!

How do you apply Google’s AI Principles in your work? 

I’m applying the AI Principles whenever I’m helping teams learn best practices for building user-centered products with AI. It’s so gratifying when someone who’s taken one of my classes tells me they had a great experience going through the training, they enjoyed learning something new and they feel ready to apply it in their work. Just like when I was an engineer, anytime someone told me the tool I’d worked on helped them do their job better and addressed their needs, it drove home the fourth AI principle: Being accountable to people. It’s so important to put people first in our work. 

This idea was really important when I was working on Google’s People + AI Research (PAIR) Guidebook. I love PAIR’s approach of putting humans at the center of product development. It’s really helpful when people in different roles come together and pool their skills to make better products. 

How did you go from being an engineer to doing what you’re doing now? 

At Google, it feels like I don't have to choose between learning and working. There are tech talks every week, plus workshops and codelabs constantly. I’ve loved continuing to learn while working here.

Being raised by two professors also gave me a love of teaching. I wanted to share what I'd learned with others. My current role enables me to do this and use a wider range of my skills.

My background as an engineer gives me a strong understanding of how we build software at Google's scale. This inspires me to think more about how to bring education into the engineering workflow, rather than forcing people to learn from a disconnected experience.

How can aspiring AI thinkers and future technologists prepare for a career in responsible innovation? 

Pick up and exercise a variety of skills! I’m a technical educator, but I’m always happy to pick up new skills that aren’t traditionally specific to my job. For example, I was thinking of a new platform to deliver internal data science training, and I learned how to create a prototype using UX tools so that I could illustrate my ideas really clearly in my proposal. I write, code, teach, design and I’m always interested in learning new techniques from my colleagues in other roles.

And spend time with your audience, the people who will be using your product or the coursework you’re creating or whatever it is you’re working on. When I was an engineer, I’d always look for opportunities to sit with, observe, and talk with the people who were using my team’s products. And I learned so much from this process.

What drives Nithya Sambasivan’s fight for fairness

When Nithya Sambasivan was finishing her undergraduate degree in engineering, she felt slightly unsatisfied. “I wanted to know, ‘how will the technology I build impact people?’” she says. Luckily, she would soon discover the field of Human Computer Interaction (HCI) and pursue her graduate degrees. 

She completed her master’s and PhD in HCI focusing on technology design for low-income communities in India. “I worked with sex workers, slum communities, microentrepreneurs, fruit and vegetable sellers on the streetside...” she says. “I wanted to understand what their values, aspirations and struggles are, and how we can build with them in mind.”

Today, Nithya is the founder of the HCI group at the Google Research India lab and an HCI researcher at PAIR, a multidisciplinary team at Google that explores the human side of AI by doing fundamental research, building tools, creating design frameworks, and working with diverse communities. She recently sat down to answer some of our questions about her journey to researching responsible AI, fairness and championing historically underrepresented technology users.

How would you explain your job to someone who isn't in tech?

I’m a human-computer interaction (HCI) researcher, which means I study people to better understand how to build technology that works for them. There’s been a lot of focus in the research community on building AI systems and the possibility of positively impacting the lives of billions of people. I focus on human-centered, responsible AI; specifically looking for ways it can empower communities in the Global South, where over 80% of the world’s population lives. Today, my research outlines a road map for fairness research in India, calling for re-contextualizing datasets and models while empowering communities and enabling an entire fairness ecosystem.

What originally inspired your interest in technology? 

I grew up in a middle class family, the younger of two daughters from the South of India. My parents have very progressive views about gender roles and independence, especially in a conservative society — this definitely influenced what and how I research; things like gender, caste and poverty. In school, I started off studying engineering, which is a conventional path in India. Then, I went on to focus on HCI and designing with my own and other under-represented communities around the world.

Nithya smiling at a small child while working in the field.

How do Google’s AI Principles inform your research? And how do you approach your research in general?

Context matters. A general theory of algorithmic fairness cannot be based on “Western” populations alone. My general approach is to research an important long-term, foundational problem. For example, our research on algorithmic fairness reframes the conversation on ethical AI away from focusing mainly on Western, meaning largely European or North American, perspectives. Another project revealed that AI developers have historically focused more on the model — or algorithm — than on the data. Both deeply affect the eventual AI performance, so being focused on only one aspect creates downstream problems. For example, a dataset may entirely miss certain sub-populations, so when a model trained on it is deployed, it may have much higher error rates for those groups or be unusable. Or it could make outcomes worse for certain groups, by misidentifying them as suspects for crimes or erroneously denying them bank loans they should receive.
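For illustration, a minimal Python sketch of the kind of disaggregated evaluation that surfaces these gaps might look like the following; the data is made up and this is not Google’s tooling. Instead of reporting one overall accuracy number, it breaks error rates out by subgroup:

```python
from collections import defaultdict

# (subgroup, model_was_correct) pairs from a hypothetical evaluation set.
results = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", False), ("group_b", False), ("group_b", True),
]

totals, errors = defaultdict(int), defaultdict(int)
for group, correct in results:
    totals[group] += 1
    if not correct:
        errors[group] += 1

# A single overall number would hide the gap between the two groups.
for group in totals:
    rate = errors[group] / totals[group]
    print(f"{group}: error rate {rate:.0%} over {totals[group]} examples")
```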

These insights not only enable AI systems to be better designed for under-represented communities; they also generate new considerations in the field of computing for humane and inclusive data collection, gender and social status representation, and the privacy and safety needs of the most vulnerable. They are then incorporated into Google products that millions of people use, such as Safe Folder on Files Go, Google Go’s incognito mode, Neighbourly’s privacy features, Safer by Google Maps and Women in STEM videos.

What are some of the questions you’re seeking to answer with your work?

How do we challenge inherent “West”-centric assumptions about algorithmic fairness and tech norms, and make AI work better for people around the world?

For example, there’s an assumption that algorithmic biases can be fixed by adding more data from different groups. But in India, we've found that data can't always represent individuals or events for many different reasons like economics and access to devices. The data could come mostly from middle class Indian men, since they’re more likely to have internet access. This means algorithms will work well for them. Yet, over half the population — primarily women, rural and tribal communities — lack access to the internet and they’re left out. Caste, religion and other factors can also contribute to new biases for AI models. 
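To make that representation gap concrete, here is a small, hypothetical sketch (every number below is invented) that compares each group’s share of a training set with its share of the population; the groups with far less data than population are the ones a model is most likely to fail:

```python
# All numbers are invented, purely to illustrate a representation audit:
# compare each group's share of a training set with its share of the population.

population_share = {"urban men": 0.20, "urban women": 0.18,
                    "rural men": 0.31, "rural women": 0.31}
dataset_share = {"urban men": 0.55, "urban women": 0.25,
                 "rural men": 0.15, "rural women": 0.05}

for group, pop in population_share.items():
    data = dataset_share.get(group, 0.0)
    flag = "under-represented" if data < pop else "ok"
    print(f"{group:12s} population {pop:.0%} vs. training data {data:.0%} ({flag})")
```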

How should aspiring AI thinkers and future technologists prepare for a career in this field? 

It’s really important that Brown and Black people enter this field. We not only bring technical skills but also lived experiences and values that are so critical to the field of computing. Our communities are the most vulnerable to AI interventions, so it’s important we shape and build these systems. To members of this community: Never play small or let someone make you feel small. Involve yourself in the political, social and ecological aspects of the invisible, not in tech innovation alone. We can’t afford not to.

Meet the researcher creating more access with language

When your hands are full and you use your voice to ask your phone to play your favorite song, it can feel like magic. In reality, it’s a more complicated combination of engineering, design and natural language processing at work, making it easier for many of us to use our smartphones. But what happens when this voice technology isn’t available in our own language?

This is something Google India researcher Shachi Dave considers as part of her day-to-day work. While English is the most widely spoken language globally, it ranks third as the most widely spoken native language (behind Mandarin and Spanish), just ahead of Hindi, Bengali and a number of other languages that are official in India. Home to more than one billion people and an impressive 22 official languages, India is at the cutting edge of Google’s language localization, or L10n, efforts (the 10 stands for the ten letters between the ‘l’ and the ‘n’ in ‘localization’).

Shachi, who is a founding member of the Google India Research team, works on natural language understanding, a field of artificial intelligence (AI) that builds computer algorithms to understand our everyday speech and language. Working with Google’s AI Principles, she aims to ensure teams build our products to be socially beneficial and inclusive. Born and raised in India, Shachi graduated with a master’s degree in computer science from the University of Southern California. After working at a few U.S. startups, she joined Google over 12 years ago and returned to India to take on more research and leadership responsibilities. Since she joined the company, she has worked closely with teams in Mountain View, New York, Zurich and Tel Aviv. She also actively contributes to improving diversity and inclusion at Google by mentoring fellow female software engineers.

How would you explain your job to someone who isn't in tech?

My job is to make sure computers can understand and interact with humans naturally, a field of computer science we call natural language processing (NLP). Our research has found that many Indian users tend to mix English with their native language when interacting with our technology. That’s why understanding natural language is so important: it’s key to localization, our effort to provide our services in every language and culture, while making sure our technology is fun to use and natural-sounding along the way.
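As a toy illustration of why code-mixing matters (this is only a heuristic with tiny, made-up word lists, not how Google’s systems actually work), a query mixing English and romanized Hindi can be flagged like this:

```python
# A toy heuristic, not how Google's systems work: flag queries that mix English
# with romanized Hindi. Both word lists are tiny, made-up samples.

ENGLISH_WORDS = {"play", "song", "weather", "today", "show", "me"}
HINDI_WORDS = {"gaana", "chalao", "mausam", "aaj", "kaisa", "dikhao"}

def is_code_mixed(query: str) -> bool:
    tokens = query.lower().split()
    has_english = any(t in ENGLISH_WORDS for t in tokens)
    has_hindi = any(t in HINDI_WORDS for t in tokens)
    return has_english and has_hindi

print(is_code_mixed("play koi gaana"))        # True: English verb, Hindi noun
print(is_code_mixed("aaj mausam kaisa hai"))  # False: Hindi only
```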

What are some of the biggest challenges you’re tackling in your work now?


The biggest challenge is that India is a multilingual country, with 22 official languages. I have seen friends, family and even strangers struggle with technology that doesn’t work for them in their language, even though it can work so well in other languages. 

Let’s say one of our users is a shop owner and lives in a small village in the southern Indian state of Telangana. She goes online for the first time with her phone. But since she has never used a computer or smartphone before, using her voice is the most natural way for her to interact with her phone. While she knows some English, she is also more comfortable speaking in her native language, Telugu. Our job is to make sure that she has a positive experience and does not have to struggle to get the information she needs. Perhaps she’s able to order more goods for her shop through the web, or maybe she decides to have her services listed online to grow her business. 

So that’s part of my motivation to do my research, and that’s one of Google’s AI Principles, too—to make sure our technology is socially beneficial. 

Speaking of the AI Principles, what other principles help inform your research?

Another one of Google’s AI Principles is avoiding creating or reinforcing unfair bias. AI systems are good at recognizing patterns within data. Given that most data that we feed into training an AI system is generated by humans, it tends to have human biases and prejudices. I look for systematic ways to remove these biases. This requires constant awareness: being aware of how people have different languages, backgrounds and financial statuses. Our society has people from the entire financial spectrum, from super rich to low-income, so what works on the most expensive phones might not work on lower-cost devices. Also, some of our users might not be able to read or write, so we need to provide some audio and visual tools for them to have a better internet experience.

What led you to this career and inspired you to join Google?  

I took an Introduction to Artificial Intelligence course as an undergraduate, and it piqued my interest and curiosity. That ultimately led to research on machine translation at the Indian Institute of Technology Bombay and then an advanced degree at the University of Southern California. After that, I spent some time working at U.S. startups that were using NLP and machine learning. 

But I wanted more. I wanted to be intellectually challenged, solving hard problems. Since Google had the computing power and reputation for solving problems at scale, it became one of my top choices for places to work. 

Now you’ve been at Google for over 12 years. What are some of the most rewarding moments of your career?

Definitely when I saw the quality improvements I worked on go live on Google Search and Assistant, positively impacting millions of people. I remember I was able to help launch local features like getting the Assistant to play the songs people wanted to hear. Playing music upon request makes people happy, and it’s a feature that still works today. 

Over the years, I have gone through difficult situations as someone from an underrepresented group. I was fortunate to have a great support network—women peers as well as allies—who helped me. I try to pay it forward by being a mentor for underrepresented groups both within and outside Google.

How should aspiring AI researchers prepare for a career in this field? 

First, be a lifelong learner: The industry is moving at a fast pace. It’s important to carve out time to keep yourself well-read about the latest research in your field as well as related fields.

Second, know your motivation: When a problem is super challenging and super hard, you need to have that focus and belief that what you’re doing is going to contribute positively to our society.

Fernanda Viégas puts people at the heart of AI

When Fernanda Viégas was in college, it took three years with three different majors before she decided she wanted to study graphic design and art history. And even then, she couldn’t have imagined the job she has today: building artificial intelligence and machine learning with fairness and transparency in mind to help people in their daily lives.  

Today Fernanda, who grew up in Rio de Janeiro, Brazil, is a senior researcher at Google. She’s based in London, where she co-leads the global People + AI Research (PAIR) Initiative, which she co-founded with fellow senior research scientist Martin M. Wattenberg and Senior UX Researcher Jess Holbrook, and the Big Picture team. She and her colleagues make sure people at Google think about fairness and values, and about putting Google’s AI Principles into practice, when they work on artificial intelligence. Her team recently launched a series of “AI Explorables,” a collection of interactive articles to better explain machine learning to everyone.

When she’s not looking into the big questions around emerging technology, she’s an artist, known for her collaborations with Wattenberg. Their data visualization art is part of the permanent collection of the Museum of Modern Art in New York.

I recently sat down with Fernanda via Google Meet to talk about her role and the importance of putting people first when it comes to AI. 

How would you explain your job to someone who isn't in tech?

As a research scientist, I try to make sure that machine learning (ML) systems can be better understood by people, to help people have the right level of trust in these systems. One of the main ways in which our work makes its way to the public is through the People + AI Guidebook, a set of principles and guidelines for user experience (UX) designers, product managers and engineering teams to create products that are easier to understand from a user’s perspective.

What is a key challenge that you’re focusing on in your research? 

My team builds data visualization tools that help people building AI systems consider issues like fairness proactively, so that their products can work better for more people. Here’s a generic example: Let’s imagine it's time for your coffee break and you use an app that uses machine learning to recommend coffee places near you. Your coffee app provides 10 recommendations for cafes in your area, and they’re all well-rated. From an accuracy perspective, the app performed its job: It offered information on a certain number of cafes near you. But it didn’t account for unintended unfair bias. For example: Did you get recommendations only for large businesses? Did the recommendations include only chain coffee shops? Or did they also include small, locally owned shops? How about places with international styles of coffee that might be nearby?

The tools our team makes help ensure that the recommendations people get aren’t unfairly biased. By making these biases easy to spot with engaging visualizations of the data, we can help identify what might be improved. 
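To make the coffee-shop example concrete, here is a small, purely illustrative sketch with fictional data (it is not one of PAIR’s tools) that compares how often chains appear among the recommendations versus among all nearby candidates; a large gap is exactly the kind of signal a visualization makes easy to spot:

```python
# Fictional data, purely illustrative; not one of PAIR's tools.
def chain_share(cafes):
    """Fraction of a list of cafes that belong to chains."""
    return sum(1 for cafe in cafes if cafe["is_chain"]) / len(cafes)

nearby = [
    {"name": "Cafe A", "is_chain": True},  {"name": "Cafe B", "is_chain": False},
    {"name": "Cafe C", "is_chain": False}, {"name": "Cafe D", "is_chain": True},
    {"name": "Cafe E", "is_chain": False}, {"name": "Cafe F", "is_chain": False},
]
recommended = [nearby[0], nearby[3]]  # suppose only the chains made the list

print(f"chains among nearby candidates: {chain_share(nearby):.0%}")      # 33%
print(f"chains among recommendations: {chain_share(recommended):.0%}")   # 100%
```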

What inspired you to join Google? 

It’s so interesting to consider this because my story comes out of repeated failures, actually! When I was a student in Brazil, where I was born and grew up, I failed repeatedly in figuring out what I wanted to do. After spending three years studying different things—chemical engineering, linguistics, education—someone said to me, “You should try to get a scholarship to go to the U.S.” I asked them why I should leave my country to study somewhere when I wasn’t even sure of my major. “That's the thing,” they said. “In the U.S. you can be undecided and change majors.” I loved it!

So I went to the U.S. and by the time I was graduating, I decided I loved design but I didn't want to be a traditional graphic designer for the rest of my life. That’s when I heard about the Media Lab at MIT and ended up doing a master's degree and PhD in data visualization there. That’s what led me to IBM, where I met Martin M. Wattenberg. Martin has been my working partner for 15 years now; we created a startup after IBM and then Google hired us. In joining, I knew it was our chance to work on products that have the possibility of affecting the world and regular people at scale. 

Two years ago, we shared our seven AI Principles to guide our work. How do you apply them to your everyday research?

One recent example is from our work with the Google Flights team. They offered users alerts about the “right time to buy tickets,” but users were asking themselves, “Hmm, how do I trust this alert?” So the designers used our PAIR Guidebook to underscore the importance of AI explainability in their discussions with the engineering team. Together, they redesigned the feature to show users how the price for a flight has changed over the past few months and to notify them when prices may go up or won’t get any lower. When it launched, people saw our price history graph and responded very well to it. By using our PAIR Guidebook, the team learned that how you explain your technology can significantly shape the user’s trust in your system.
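As a much-simplified illustration of the idea (this is not the actual Flights model, and the prices below are invented), the feature boils down to showing the history and turning the recent trend into a plain-language note:

```python
# Much-simplified sketch, not the actual Flights model; prices are invented.
prices = [420, 410, 405, 398, 399, 401, 415]  # weekly prices, oldest first

lowest = min(prices)
latest = prices[-1]
recent_change = latest - prices[-3]  # change over the last few observations

print("price history:", prices)
if latest <= lowest:
    print(f"Current price ${latest} is the lowest observed for this route.")
elif recent_change > 0:
    print(f"Prices have started rising; the lowest observed was ${lowest}.")
else:
    print(f"Current price is ${latest}; the lowest observed was ${lowest}.")
```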

Historically, ML has been evaluated along the lines of mathematical metrics for accuracy—but that’s not enough. Once systems touch real lives, there’s so much more you have to think about, such as fairness, transparency, bias and explainability—making sure people understand why an algorithm does what it does. These are the challenges that inspire me to stay at Google after more than 10 years. 

What’s been one of the most rewarding moments of your career?

Whenever we talk to students and there are women and minorities who are excited about working in tech, that’s incredibly inspiring to me. I want them to know they belong in tech, they have a place here. 

Also, working with my team on a Google Doodle about the composer Johann Sebastian Bach last year was so rewarding. It was the very first time Google used AI for a Doodle and it was thrilling to tell my family in Brazil, look, there’s an AI Doodle that uses our tech! 

How should aspiring AI thinkers and future technologists prepare for a career in this field? 

Try to be deep in your field of interest. If it’s AI, there are so many different aspects to this technology, so try to make sure you learn about them. AI isn’t just about technology. It’s always useful to be looking at the applications of the technology, how it impacts real people in real situations.