Author Archives: Reena Jana

“Lift as you lead”: Meet 2 women defining responsible AI

At Google, Marian Croak’s technical research team, The Center for Responsible AI and Human-Centered Technology, and Jen Gennai’s operations and governance team, Responsible Innovation, collaborate often on creating a fairer future for AI systems.

The teams complement each other to support computer scientists, UX researchers and designers, product managers and subject matter experts in the social sciences, human rights and civil rights. Collectively, their teams include more than 200 people around the globe focused on putting our AI Principles – Google’s ethical charter – into practice.

“The intersection of AI systems and society is a critical area of my team’s technical research,” Marian says. “Our approach includes working directly with people who use and are impacted by AI systems. Working together with Jen’s central operations team, the idea is to make AI more useful and reduce potential harm before products launch.”

For Women’s History Month, we wanted to talk to them both about this incredibly meaningful work and how they bring their lived experiences to it.

How do you define “responsible AI”?

Marian: It’s the technical realization of our AI Principles. We need to understand how AI systems are performing with respect to fairness, transparency, interpretability, robustness and privacy. When gaps occur, we fix them. We benchmark and evaluate how product teams are adopting what Jen and I call smart practices. These are trusted practices based on patterns we see across Google as we’re developing new AI applications, and the data-driven results of applying these practices over time.

Jen: There are enormous opportunities to use AI for positive impact — and the potential for harm, too. The key is ethical deployment. “Responsible AI” for me means taking deliberate steps to ensure technology works the way it’s intended to and doesn’t lead to malicious or unintended negative consequences. This involves applying the smart practices Marian mentioned through repeatable processes and a governance structure for accountability.

How do your teams work together?

Marian: They work hand in hand. My team conducts scientific research and creates open source tools like Fairness Indicators and Know Your Data. A large portion of our technical research and product work is centered in societal context and human and civil rights, so Jen’s team is integral to understanding the problems we seek to help solve.
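
To make that concrete, here is a minimal sketch, in plain Python rather than the actual Fairness Indicators or Know Your Data APIs, of the kind of per-group evaluation these tools support: computing a metric such as the false positive rate for each demographic slice of a dataset. The group names and data are invented for illustration.

```python
# Illustrative sketch only: compute one fairness metric (false positive rate)
# per demographic slice, the core idea behind tooling like Fairness Indicators.
# The groups, labels and predictions below are made up for this example.
from collections import defaultdict

examples = [
    # (group, true_label, predicted_label)
    ("group_a", 0, 0), ("group_a", 0, 1), ("group_a", 1, 1),
    ("group_b", 0, 1), ("group_b", 0, 1), ("group_b", 1, 0),
]

false_positives = defaultdict(int)
negatives = defaultdict(int)

for group, truth, prediction in examples:
    if truth == 0:  # only negative examples can produce false positives
        negatives[group] += 1
        if prediction == 1:
            false_positives[group] += 1

for group in sorted(negatives):
    fpr = false_positives[group] / negatives[group]
    # A large gap between groups is the kind of signal worth investigating.
    print(f"{group}: false positive rate = {fpr:.2f}")
```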

Jen: The team I lead defines Google policies, handles day-to-day operations and central governance structure, and conducts ethical assessments. We’re made up of user researchers, social scientists, ethicists, human rights specialists, policy and privacy advisors and legal experts.

One team can’t work without the other! This complementary relationship allows many different perspectives and lived experiences to inform product design decisions. Here’s an example, which was led by women from a variety of global backgrounds: Marian’s team designed a streamlined, open source format for documenting technical details of datasets, called data cards. When researchers on the Translate team, led by product manager Romina Stella, recently developed a new dataset for studying and preventing gender bias in machine learning, members of my team, Anne P., N’Mah Y. and Reena Jana, reviewed the dataset for alignment with the AI Principles. They recommended that the Translate researchers publish a data card for details on how the dataset was created and tested. The Translate team then worked with UX designer Mahima Pushkarna on Marian’s team to create and launch the card alongside the dataset.
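
As a rough illustration of what such documentation covers, the hypothetical sketch below lists the kind of fields a data card might record for a dataset like the one described above. The field names and values are invented and do not follow the published Data Cards format.

```python
# Hypothetical sketch of the kind of details a data card documents.
# Field names and values are illustrative, not an official Data Cards schema.
example_data_card = {
    "name": "example-gendered-translation-set",  # made-up dataset name
    "summary": "Sentence pairs for studying gender bias in machine translation.",
    "collection_process": "How and where the sentences were sourced and reviewed.",
    "intended_uses": ["ML fairness research", "model evaluation"],
    "out_of_scope_uses": ["Training production systems without further review"],
    "known_limitations": "Covers a limited set of languages and contexts.",
}

for field_name, value in example_data_card.items():
    print(f"{field_name}: {value}")
```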


How did you end up working in this very new field?

Marian: I’ve always been drawn to hard problems. This is a very challenging area! It’s so multifaceted and constantly evolving. That excites me. It’s an honor to work with so many passionate people who care so deeply about our world and understanding how to use technology for social good.

I’ll always continue to seek out solutions to these problems because I understand the profound impact this work will have on our society and our world, especially communities underrepresented in the tech industry.

Jen: I spent many years leading User Research and User Advocacy on Google’s Trust and Safety team. An area I focused on was ML Fairness. I never thought I’d get to work on it full time. But in 2016 my leadership team wanted to have a company-wide group concentrating on worldwide positive social benefits of AI. In 2017, I joined the team that was writing and publishing the AI Principles. Today, I apply my operational knowledge to make sure that as a company, we meet the obligations we laid out in the Principles.

What advice do you have for girls and women interested in pursuing careers in responsible tech?

Marian: I’m inspired most when someone tells me I can’t do something. No matter what obstacles you face, believe you have the skills, the knowledge and the passion to make your dreams come true. Find motivation in the small moments, find motivation in those who doubt you, but most importantly, never forget to believe in the greatness of you.

Jen: Don’t limit yourself even if you don’t have a computer science degree. I don’t. I was convinced I’d work in sustainability and environmental non-profits, and now I lead a team working to make advanced technologies work better for everyone. This space requires so many different skills, whether in program management, policy, engineering, UX or business and strategy.

My mantra is “lift as you lead.” Don’t just build a network for yourself; build a supportive network to empower everyone who works with you — and those who come after you, especially those who are currently underrepresented in the tech sector. Your collective presence in this space makes a positive impact! And it’s even stronger when you build a better future together.

An intro to AI, made for students

Adorable, operatic blobs. A global, online guessing game. Scribbles that transform into works of art. These may not sound like they’re part of a curriculum, but learning the basics of how artificial intelligence (AI) works doesn’t have to be complicated, super-technical or boring.

To celebrate Digital Learning Day, we’re releasing a new lesson from Applied Digital Skills, Google’s free, online, video-based curriculum (and part of the larger Grow with Google initiative). “Discover AI in Daily Life” was designed with middle and high school students in mind, and dives into how AI is built, and how it helps people every day.

AI for anyone — and everyone

“Twenty or 30 years ago, students might have learned basic typing skills in school,” says Dr. Patrick Gage Kelley, a Google Trust and Safety user experience researcher who co-created (and narrates) the “Discover AI in Daily Life” lesson. “Today, ‘AI literacy’ is a key skill. It's important that students everywhere, from all backgrounds, are given the opportunity to learn about AI.”

“Discover AI in Daily Life” begins with the basics. You’ll find simple, non-technical explanations of how a machine can “learn” from patterns in data, and why it’s important to train AI responsibly and avoid unfair bias.

First-hand experiences with AI

“By encouraging students to engage directly with everyday tools and experiment with them, they get a first-hand experience of the potential uses and limitations of AI,” says Dr. Annica Voneche, the lesson’s learning designer. “Those experiences can then be tied to a more theoretical explanation of the technology behind it, in a way that makes the often abstract concepts behind AI tangible.”

Guided by Google’s AI Principles, the lesson also explores why it’s important to develop AI systems responsibly. Developed with feedback from a student advisor and several middle- and high-school teachers, the lesson is intended for use in a wide range of courses, not just in computer science (CS) or technology classes.

“It's crucial for students, regardless of whether they are CS students or not, to understand why the responsible development of AI is important,” says Tammi Ramsey, a high school teacher who contributed feedback. “AI is becoming a widespread phenomenon. It’s part of our everyday lives.”

Whether taught in-person or remotely, teachers can use the lesson’s three- to six-minute videos as tools to introduce a variety of students to essential AI concepts. “We want students to learn how emerging technologies, like AI, work,” says Sue Tranchina, a teacher who contributed to the lesson. “So students become curious and inspired to not just use AI, but create it.”

From Boggle to Google: Meg Mitchell’s mission to make AI for everyone

Long before Meg Mitchell founded the Ethical AI team at Google in 2017, she loved Boggle, the classic game where players come up with words from random letters in three minutes or less. Looking back at her childhood Boggle-playing days, Meg sees the game as her early inspiration to pursue the study of computational linguistics. “I always loved identifying patterns, solving puzzles, language games, and creating new things,” Meg says. “And Boggle had it all. It was a puzzle, and it was creative.”

The creative puzzles she tackles today as a Senior Research Scientist at Google involve developing tools and techniques to help artificial intelligence (AI) evolve ethically over time, reflecting Google’s AI Principles. We caught up with Meg to talk about what took her from playing Boggle to working at Google.

How do you describe your job at a dinner party to people who don’t work in tech?

When I used to work in language generation, my partner would say, “she makes robots talk.” Now that I work on AI Ethics as well, he says “she makes robots talk and helps them avoid inheriting human biases.” Everyone gets it when he says that! But I say, “I work in AI Ethics.” I’ve found that gets people curious, and they generally want to know what that means. I say: “When people create an AI system, it might not work well for everyone, meaning it might limit what some people can do in the world. What I do is develop frameworks for measuring how well an AI system offers equitable experiences to different people, so that the AI doesn’t affect different people disproportionately. This helps us avoid creating products that consistently work well for some people and poorly for others.”

What’s an example that illustrates your work?

My team has developed what we call Model Cards, a way to help anyone, including non-technical people like journalists, designers and everyday users, understand how specific machine learning (ML) models work. The technical definition of an ML model: a mathematical model that makes predictions by using algorithms that learn statistical relationships among examples. And the technical definition of a Model Card: a framework for documenting a model’s performance and intended usage.

Here’s a less technical explanation of Model Cards: You know the nutritional labels on food packaging that talk about calories, vitamin content, serving size, and ingredients? Model Cards are like these, but for ML models. They show, in a structured and easy-to-read way, what the ML model does, how well it works, its limitations, and more.
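
Extending the nutrition-label analogy, here is a minimal, invented sketch of what a model card might record, including performance broken out by group. The class, field names and numbers are hypothetical and do not follow the official Model Cards schema or toolkit.

```python
# Minimal, invented sketch of a model card: what a model does, how well it
# works overall and per group, and its limitations. Not an official schema.
from dataclasses import dataclass, field


@dataclass
class ModelCardSketch:
    model_name: str
    intended_use: str
    limitations: str
    overall_accuracy: float
    accuracy_by_group: dict = field(default_factory=dict)

    def summary(self) -> str:
        lines = [
            f"Model: {self.model_name}",
            f"Intended use: {self.intended_use}",
            f"Limitations: {self.limitations}",
            f"Overall accuracy: {self.overall_accuracy:.2f}",
        ]
        for group, accuracy in self.accuracy_by_group.items():
            lines.append(f"  Accuracy ({group}): {accuracy:.2f}")
        return "\n".join(lines)


card = ModelCardSketch(
    model_name="toy-image-classifier",  # hypothetical model
    intended_use="Demonstration only; not a production system.",
    limitations="Not evaluated on low-light images.",
    overall_accuracy=0.91,
    accuracy_by_group={"group_a": 0.94, "group_b": 0.86},
)
print(card.summary())
```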

Recently, two cross-industry organizations, Partnership on AI and OpenAI, decided to apply our work on Model Cards to their frameworks and systems, respectively.

You started out studying linguistics. How did you know this field was for you?

Growing up, I was equally good at math and reading and writing, but I generally thought of myself as being good with language. Of course, this was a gender norm at the time. But I also taught myself to code and started programming for fun when I was 13. When I was a junior in high school, I liked doing creative things, and I really wanted to take a ceramics class in my free period. At the same time, I was in a calculus class, and my teacher literally got on her knee to encourage me to take advanced math instead. By the time I got to college, I was balancing both language and math, and my senior thesis at Reed College was on computational linguistics, and more specifically, on the generation of referring expressions. In non-technical terms, it’s simply about making appropriate references to people, places or things. My Ph.D. is in language generation, too—specifically vision-to-language generation, which is about translating visual things, like photos, into language, like captions or stories.

Eventually, I had an “aha moment” when I knew I wanted to pursue this field, and it’s thanks to my dog, Wendell. Wendell was a Great Dane. When I walked Wendell, tons of people would stop and say, “That’s not a dog, that’s a horse!” Once in a while, they’d say, “You should put a saddle on him!” They said the exact same phrases. After six years of hearing people say the same thing when they saw Wendell, I thought the consistency was so fascinating from a psycholinguistics point of view. I literally saw every day that people have stored prototypes in their minds. I realized through Wendell that although language is creative and expressive, we say predictable things—and there are clear patterns. And sometimes, these predictable things we say are inaccurate and perpetuate stereotypes.


Wendell

Looking back, I see I was very naturally interested in ethics in AI, in terms of fairness and inclusion, before it was “a thing.”

What’s your favorite part of your job?

Programming! I’m happiest when I’m coding. It’s how I de-stress. My colleagues ask me “how long has it been since you coded?” the way some people ask each other “how long has it been since you’ve had coffee?” or “how long has it been since you had a vacation?” If I haven’t coded in more than two weeks, I’m not my happiest self.

What’s the most challenging part of your job? 

When we’re thinking of the end-to-end development of AI systems, there are challenges to making them more ethical, even if it seems like that’s obviously the right thing to do. Unintended bias creeps in. Unintentional outcomes occur. One way to avoid these is to represent many points of view and experiences, to catch gaps in terms of where and when an AI system isn’t performing as well for some people as for others. Who is at the table making decisions influences how a system is designed. This is why issues of diversity, equity and inclusion are a core part of my AI research, and why I encourage hiring AI talent that represents many dimensions of diversity.

What’s one habit that makes you and your team successful?

I message with the people I work with often. Everyone is remote, but it doesn’t feel like it. We share a lot of crazy, celebratory GIFs and happy emoji, which makes sense given my appreciation for fairness and language: GIFs and emoji are something that everyone can understand quickly and easily!