
What drives Nithya Sambasivan’s fight for fairness

When Nithya Sambasivan was finishing her undergraduate degree in engineering, she felt slightly unsatisfied. “I wanted to know, ‘how will the technology I build impact people?’” she says. Luckily, she would soon discover the field of Human-Computer Interaction (HCI) and pursue her graduate degrees.

She completed her master’s and PhD in HCI, focusing on technology design for low-income communities in India. “I worked with sex workers, slum communities, microentrepreneurs, fruit and vegetable sellers on the streetside...” she says. “I wanted to understand what their values, aspirations and struggles are, and how we can build with them in mind.”

Today, Nithya is the founder of the HCI group at the Google Research India lab and an HCI researcher at PAIR, a multidisciplinary team at Google that explores the human side of AI by doing fundamental research, building tools, creating design frameworks, and working with diverse communities. She recently sat down to answer some of our questions about her journey into responsible AI and fairness research, and her work championing historically underrepresented technology users.

How would you explain your job to someone who isn't in tech?

I’m a human-computer interaction (HCI) researcher, which means I study people to better understand how to build technology that works for them. There’s been a lot of focus in the research community on building AI systems and the possibility of positively impacting the lives of billions of people. I focus on human-centered, responsible AI; specifically looking for ways it can empower communities in the Global South, where over 80% of the world’s population lives. Today, my research outlines a road map for fairness research in India, calling for re-contextualizing datasets and models while empowering communities and enabling an entire fairness ecosystem.

What originally inspired your interest in technology? 

I grew up in a middle-class family, the younger of two daughters from the South of India. My parents have very progressive views about gender roles and independence, especially in a conservative society — this definitely influenced what and how I research; things like gender, caste and poverty. In school, I started off studying engineering, which is a conventional path in India. Then, I went on to focus on HCI and designing with my own and other under-represented communities around the world.

Nithya smiling at a small child while working in the field.

How do Google’s AI Principles inform your research? And how do you approach your research in general?

Context matters. A general theory of algorithmic fairness cannot be based on “Western” populations alone. My general approach is to research an important long-term, foundational problem. For example, our research on algorithmic fairness reframes the conversation on ethical AI away from focusing mainly on Western, meaning largely European or North American, perspectives. Another project revealed that AI developers have historically focused more on the model — or algorithm — than on the data. Both deeply affect the eventual AI performance, so being focused on only one aspect creates downstream problems. For example, a dataset may entirely miss certain sub-populations, so a model trained on it may have much higher error rates for those groups or be unusable. Or it could make outcomes worse for certain groups, misidentifying them as suspects for crimes or erroneously denying them bank loans they should receive.

These insights not only enable AI systems to be better designed for under-represented communities; they also generate new considerations in the field of computing for humane and inclusive data collection, gender and social status representation, and the privacy and safety needs of the most vulnerable. They are then incorporated into Google products that millions of people use, such as Safe Folder on Files Go, Google Go’s incognito mode, Neighbourly’s privacy features, Stay Safer on Google Maps and Women in STEM videos.

What are some of the questions you’re seeking to answer with your work?

How do we challenge inherently “West”-centric assumptions about algorithmic fairness and tech norms, and make AI work better for people around the world?

For example, there’s an assumption that algorithmic biases can be fixed by adding more data from different groups. But in India, we've found that data can't always represent individuals or events for many different reasons like economics and access to devices. The data could come mostly from middle class Indian men, since they’re more likely to have internet access. This means algorithms will work well for them. Yet, over half the population — primarily women, rural and tribal communities — lack access to the internet and they’re left out. Caste, religion and other factors can also contribute to new biases for AI models. 
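To make the data-representation point concrete, here is a minimal, hypothetical sketch (not from Nithya’s research; the groups and numbers are invented) showing how an aggregate accuracy figure can look fine while a disaggregated view reveals a large gap for an under-represented group.

# Hypothetical illustration: aggregate accuracy can hide how a model fails
# under-represented groups. All groups and outcomes below are made up.
from collections import defaultdict

# (group, was the prediction correct?) pairs from an imaginary evaluation
predictions = [
    ("urban_men", True), ("urban_men", True), ("urban_men", True),
    ("urban_men", True), ("urban_men", True), ("urban_men", False),
    ("rural_women", True), ("rural_women", False),
    ("rural_women", False), ("rural_women", False),
]

totals, correct = defaultdict(int), defaultdict(int)
for group, is_correct in predictions:
    totals[group] += 1
    correct[group] += is_correct  # True counts as 1, False as 0

overall = sum(correct.values()) / len(predictions)
print(f"overall accuracy: {overall:.0%}")   # 60% -- a single, reassuring number
for group in totals:
    print(f"{group}: {correct[group] / totals[group]:.0%}")  # 83% vs. 25%

Even this toy breakdown shows why a model trained and evaluated mostly on the people who are online today can report respectable overall numbers while failing everyone else.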

How should aspiring AI thinkers and future technologists prepare for a career in this field? 

It’s really important that Brown and Black people enter this field. We not only bring technical skills but also lived experiences and values that are so critical to the field of computing. Our communities are the most vulnerable to AI interventions, so it’s important we shape and build these systems. To members of this community: Never play small or let someone make you feel small. Involve yourself in the political, social and ecological aspects of the invisible, not just in tech innovation. We can’t afford not to.

Meet the researcher creating more access with language

When your hands are full and you use your voice to ask your phone to play your favorite song, it can feel like magic. In reality, it’s a more complicated combination of engineering, design and natural language processing at work, making it easier for many of us to use our smartphones. But what happens when this voice technology isn’t available in our own language?

This is something Google India researcher Shachi Dave considers as part of her day-to-day work. While English is the most widely spoken language globally, it ranks third as the most widely spoken native language (behind Mandarin and Spanish)—just ahead of Hindi, Bengali and a number of other languages that are official in India. Home to more than one billion people and an impressive number of official languages—22, to be exact—India is at the cutting edge of Google’s language localization, or L10n, efforts (the 10 stands for the number of letters between the ‘l’ and the ‘n’ in “localization”).

Shachi, who is a founding member of the Google India Research team, works on natural language understanding, a field of artificial intelligence (AI) which builds computer algorithms to understand our everyday speech and language. Working with Google’s AI principles, she aims to ensure teams build our products to be socially beneficial and inclusive. Born and raised in India, Shachi graduated with a master’s degree in computer science from the University of Southern California. After working at a few U.S. startups, she joined Google over 12 years ago and returned to India to take on more research and leadership responsibilities. Since she joined the company, she has worked closely with teams in Mountain View, New York, Zurich and Tel Aviv. She also actively contributes towards improving diversity and inclusion at Google through mentoring fellow female software engineers.

How would you explain your job to someone who isn't in tech?

My job is to make sure computers can understand and interact with humans naturally, a field of computer science we call natural language processing (NLP). Our research has found that many Indian users tend to mix English with their native language when interacting with our technology, which is why understanding natural language is so important—it’s key to localization, our effort to provide our services in every language and culture—while making sure our technology is fun to use and natural-sounding along the way.

What are some of the biggest challenges you’re tackling in your work now?


The biggest challenge is that India is a multilingual country, with 22 official languages. I have seen friends, family and even strangers struggle with technology that doesn’t work for them in their language, even though it can work so well in other languages. 

Let’s say one of our users is a shop owner and lives in a small village in the southern Indian state of Telangana. She goes online for the first time with her phone. But since she has never used a computer or smartphone before, using her voice is the most natural way for her to interact with her phone. While she knows some English, she is also more comfortable speaking in her native language, Telugu. Our job is to make sure that she has a positive experience and does not have to struggle to get the information she needs. Perhaps she’s able to order more goods for her shop through the web, or maybe she decides to have her services listed online to grow her business. 

So that’s part of my motivation to do my research, and that’s one of Google’s AI Principles, too—to make sure our technology is socially beneficial. 

Speaking of the AI Principles, what other principles help inform your research?

Another one of Google’s AI Principles is avoiding creating or reinforcing unfair bias. AI systems are good at recognizing patterns within data. Given that most data that we feed into training an AI system is generated by humans, it tends to have human biases and prejudices. I look for systematic ways to remove these biases. This requires constant awareness: being aware of how people have different languages, backgrounds and financial statuses. Our society has people from the entire financial spectrum, from super rich to low-income, so what works on the most expensive phones might not work on lower-cost devices. Also, some of our users might not be able to read or write, so we need to provide some audio and visual tools for them to have a better internet experience.

What led you to this career and inspired you to join Google?  

I took an Introduction to Artificial Intelligence course as an undergraduate, and it piqued my interest and curiosity. That ultimately led to research on machine translation at the Indian Institute of Technology Bombay and then an advanced degree at the University of Southern California. After that, I spent some time working at U.S. startups that were using NLP and machine learning. 

But I wanted more. I wanted to be intellectually challenged, solving hard problems. Since Google had the computing power and reputation for solving problems at scale, it became one of my top choices for places to work. 

Now you’ve been at Google for over 12 years. What are some of the most rewarding moments of your career?

Definitely when I saw the quality improvements I worked on go live on Google Search and Assistant, positively impacting millions of people. I remember I was able to help launch local features like getting the Assistant to play the songs people wanted to hear. Playing music upon request makes people happy, and it’s a feature that still works today. 

Over the years, I have gone through difficult situations as someone from an underrepresented group. I was fortunate to have a great support network—women peers as well as allies—who helped me. I try to pay it forward by being a mentor for underrepresented groups both within and outside Google.

How should aspiring AI researchers prepare for a career in this field? 

First, be a lifelong learner: The industry is moving at a fast pace. It’s important to carve out time to keep yourself well-read about the latest research in your field as well as related fields.

Second, know your motivation: When a problem is super challenging and super hard, you need to have that focus and belief that what you’re doing is going to contribute positively to our society.

Fernanda Viégas puts people at the heart of AI

When Fernanda Viégas was in college, it took three years with three different majors before she decided she wanted to study graphic design and art history. And even then, she couldn’t have imagined the job she has today: building artificial intelligence and machine learning with fairness and transparency in mind to help people in their daily lives.  

Today Fernanda, who grew up in Rio de Janeiro, Brazil, is a senior researcher at Google. She’s based in London, where she co-leads the Big Picture team and the global People + AI Research (PAIR) initiative, which she co-founded with fellow senior research scientist Martin M. Wattenberg and senior UX researcher Jess Holbrook. She and her colleagues make sure people at Google think about fairness and values, and about putting Google’s AI Principles into practice, when they work on artificial intelligence. Her team recently launched a series of “AI Explorables,” a collection of interactive articles that explain machine learning to everyone.

When she’s not looking into the big questions around emerging technology, she’s also an artist, known for her artistic collaborations with Wattenberg. Their data visualization art is a part of the permanent collection of the Museum of Modern Art in New York.  

I recently sat down with Fernanda via Google Meet to talk about her role and the importance of putting people first when it comes to AI. 

How would you explain your job to someone who isn't in tech?

As a research scientist, I try to make sure that machine learning (ML) systems can be better understood by people, to help people have the right level of trust in these systems. One of the main ways in which our work makes its way to the public is through the People + AI Guidebook, a set of principles and guidelines for user experience (UX) designers, product managers and engineering teams to create products that are easier to understand from a user’s perspective.

What is a key challenge that you’re focusing on in your research? 

My team builds data visualization tools that help the people building AI systems consider issues like fairness proactively, so that their products can work better for more people. Here’s a generic example: Let’s imagine it’s time for your coffee break and you use an app that relies on machine learning to recommend coffee places near you at that moment. Your coffee app provides 10 recommendations for cafes in your area, and they’re all well-rated. From an accuracy perspective, the app performed its job: It offered information on a certain number of cafes near you. But it didn’t account for unintended unfair bias. For example: Did you get recommendations only for large businesses? Did the recommendations include only chain coffee shops? Or did they also include small, locally owned shops? How about places with international styles of coffee that might be nearby?

The tools our team makes help ensure that the recommendations people get aren’t unfairly biased. By making these biases easy to spot with engaging visualizations of the data, we can help identify what might be improved. 
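As a rough illustration of the idea (this is not PAIR’s actual tooling, and the cafes and categories are invented), a few lines of Python can tally a recommendation slate by business type so that skew is visible at a glance:

# Hypothetical illustration: summarize a recommendation slate by business
# type to surface skew. Names and categories are made up for this example.
from collections import Counter

recommendations = [
    {"name": "Bean Chain #12", "type": "chain"},
    {"name": "Bean Chain #47", "type": "chain"},
    {"name": "Bean Chain #3", "type": "chain"},
    {"name": "Corner Cafe", "type": "local"},
    {"name": "Filter Kaapi House", "type": "local"},
]

share = Counter(r["type"] for r in recommendations)
for business_type, count in share.most_common():
    print(f"{business_type}: {count} of {len(recommendations)}")

A slate dominated by one category is a prompt to ask why, not proof of unfairness on its own; the visualization tools described above aim to make exactly that kind of question easy to ask.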

What inspired you to join Google? 

It’s so interesting to consider this because my story comes out of repeated failures, actually! When I was a student in Brazil, where I was born and grew up, I failed repeatedly in figuring out what I wanted to do. After I spent three years studying different things—chemical engineering, linguistics, education—someone said to me, “You should try to get a scholarship to go to the U.S.” I asked them why I should leave my country to study somewhere when I wasn’t even sure of my major. “That’s the thing,” they said. “In the U.S. you can be undecided and change majors.” I loved it!

So I went to the U.S. and by the time I was graduating, I decided I loved design but I didn't want to be a traditional graphic designer for the rest of my life. That’s when I heard about the Media Lab at MIT and ended up doing a master's degree and PhD in data visualization there. That’s what led me to IBM, where I met Martin M. Wattenberg. Martin has been my working partner for 15 years now; we created a startup after IBM and then Google hired us. In joining, I knew it was our chance to work on products that have the possibility of affecting the world and regular people at scale. 

Two years ago, we shared our seven AI Principles to guide our work. How do you apply them to your everyday research?

One recent example is from our work with the Google Flights team. They offered users alerts about the “right time to buy tickets,” but users were asking themselves, Hmm, how do I trust this alert?  So the designers used our PAIR Guidebook to underscore the importance of AI explainability in their discussions with the engineering team. Together, they redesigned the feature to show users how the price for a flight has changed over the past few months and notify them when prices may go up or won’t get any lower. When it launched, people saw our price history graph and responded very well to it. By using our PAIR Guidebook, the team learned that how you explain your technology can significantly shape the user’s trust in your system. 

Historically, ML has been evaluated along the lines of mathematical metrics for accuracy—but that’s not enough. Once systems touch real lives, there’s so much more you have to think about, such as fairness, transparency, bias and explainability—making sure people understand why an algorithm does what it does. These are the challenges that inspire me to stay at Google after more than 10 years. 

What’s been one of the most rewarding moments of your career?

Whenever we talk to students and there are women and minorities who are excited about working in tech, that’s incredibly inspiring to me. I want them to know they belong in tech, they have a place here. 

Also, working with my team on a Google Doodle about the composer Johann Sebastian Bach last year was so rewarding. It was the very first time Google used AI for a Doodle and it was thrilling to tell my family in Brazil, look, there’s an AI Doodle that uses our tech! 

How should aspiring AI thinkers and future technologists prepare for a career in this field? 

Try to be deep in your field of interest. If it’s AI, there are so many different aspects to this technology, so try to make sure you learn about them. AI isn’t just about technology. It’s always useful to be looking at the applications of the technology, how it impacts real people in real situations.