Author Archives: Grace Wu

A look at the Responsible Innovation Fellowship

Responsible AI is an emerging field. While AI enables us to solve problems in exciting new ways, the scale of its impact raises new challenges. As a result, it’s important to develop AI responsibly so that it empowers and helps everyone.

As the next generation of technologists prepares to enter the field, the demands of education are changing accordingly. Computer science education has historically been synonymous with learning to code. But today, as advanced technology and AI begin to impact almost every facet of our lives — from college admissions and medical diagnosis to social media and credit limits — technologists must be educated and prepared not only to understand the societal impacts of technology, but to do so from a diverse set of perspectives, cultures and communities.

The Responsible Innovation Fellowship (RIF) program was designed by Google’s Responsible Innovation team members Cherish Molezion and Kendra Schultz to contribute to this progress. The fellowship supports career exploration and equips students with the knowledge and skills they need to enter the field of responsible innovation. RIF encourages students from currently underrepresented backgrounds — including students from Minority-Serving Institutions, such as Historically Black Colleges & Universities (HBCUs), Hispanic-Serving Institutions (HSIs), and Historically Women’s Colleges (HWCs) — to apply. Additionally, the program fosters collaboration between technical and non-technical fields, demonstrating the need for both perspectives in technology. RIF welcomes applications from humanities and social sciences students as well as computer science students.

As Cherish explains, RIF aims to “provide equitable opportunity and community for budding technologists to exercise their ethical imaginations. Through diversifying the makeup of the responsible technology landscape by expanding across all regions, we can truly uphold our mission to build for everyone.”

This spring, the RIF team welcomed the inaugural cohort, engaging 20 students during the 5-week fellowship. The program covered topics ranging from human and social ethics in AI to socio-technical harms and synthetic media, and ended in a final capstone presentation where students applied the methods they learned to assess ethical considerations for a hypothetical AI application of their choosing. “My team and I are inspired by the students’ ingenuity,” says Jen Gennai, Google’s Director of Responsible Innovation. “We learned from their distinct lived experiences and perspectives.”

The students are now bringing their learnings back to their schools and communities. April LaGrone, a public health student at Western Michigan University, plans to apply her responsible AI learnings in the health field. “I’m passionate about equitable research, ensuring research study participants have control, power and a voice,” she says. “In the same way that AI literacy is important, we need to focus on health literacy so people understand what their rights are, become educated on their health care options, and are able to advocate for themselves. Explainability and health literacy go hand-in-hand.”

José C. Sánchez Curet, a student at the University of Puerto Rico, Río Piedras, hopes to share his learnings with other students. José is working with leaders in his university’s computer science department. “I hope to teach seminars about AI ethics and foster conversations where we can collaborate on possible improvements to responsible technology development.”

Christina Carpenter, an elementary education student at Bay Path University – and a mom – hopes to apply RIF insights to her career and also everyday life. “We’re creating the next generation of innovators, and I hope to teach little minds how to be responsible with technology and the negative impacts of certain types of technology use.” With her son heading off to middle school soon, Christina is mindful that “he’s grown up with electronics his whole life, and this is when he needs to learn how to be responsible with technology.”

To celebrate the lessons learned and accomplishments of the entire cohort, we’ve put together a Responsible Innovation Fellowship Yearbook, where you can read about all of the Fellows and how they plan on applying what they learned to their careers.

The Responsible Innovation Fellowship program will soon be accepting applications for future cohorts — join the mailing list here.

Helping people understand AI

If you’re like me, you may have noticed that AI has become a part of daily life. I wake up each morning and ask my smart assistant about the weather. I recently applied for a new credit card and the credit limit was likely determined by a machine learning model. And while typing the previous sentence, I got a word choice suggestion that “probably” might flow better than “likely,” a suggestion powered by AI.

As a member of Google’s Responsible Innovation team, I think a lot about how AI works and how to develop it responsibly. Recently, I spoke with Patrick Gage Kelley, Head of Research Strategy on Google’s Trust & Safety team, to learn more about developing products that help people recognize and understand AI in their daily lives.

How do you help people navigate a world with so much AI?

My goal is to ensure that people, at a basic level, know how AI works and how it impacts their lives. AI systems can be really complicated, but the goal of explaining AI isn’t to get everyone to become programmers and understand all of the technical details — it’s to make sure people understand the parts that matter to them.

When AI makes a decision that affects people (whether it’s recommending a video or qualifying for a loan), we want to explain how that decision was made. And we don’t want to just provide a complicated technical explanation, but rather, information that is meaningful, helpful, and equips people to act if needed.

We also want to find the best times to explain AI. Our goal is to help people develop AI literacy early, including in primary and secondary education. And when people use products that rely on AI (everything from online services to medical devices), we want to include a lot of chances for people to learn about the role AI plays, as well as its benefits and limitations. For example, if people are told early on what kinds of mistakes AI-powered products are likely to make, then they are better prepared to understand and remedy situations that might arise.

Do I need to be a mathematician or programmer to have a meaningful understanding of AI?

No! A good metaphor here is financial literacy. We may not need to know every detail of what goes into interest rate hikes or the intricacies of financial markets, but it’s important to know how they affect us — from paying off credit cards to buying a home or paying for student loans. In the same way, AI explainability isn’t about understanding every technical aspect of a machine learning algorithm — it’s about knowing how to interact with it and how it impacts our daily lives.

How should AI practitioners — developers, designers, researchers, students, and others — think about AI explainability?

Lots of practitioners are doing important work on explainability. Some focus on interpretability, making it easier to identify specific factors that influence a decision. Others focus on providing “in-the-moment explanations” right when AI makes a decision. These can be helpful, especially when carefully designed. However, AI systems are often so complex that we can’t rely on in-the-moment explanations entirely. It’s just too much information to pack into a single moment. Instead, AI education and literacy should be incorporated into the entire user journey and built continuously throughout a person’s life.

More generally, AI practitioners should think about AI explainability as fundamental to the design and development of the entire product experience. At Google, we use our AI Principles to guide responsible technology development. In accordance with AI Principle #4: “Be accountable to people,” we encourage AI practitioners to think about all the moments and ways they can help people understand how AI operates and makes decisions.

How are you and your collaborators working to improve explanations of AI?

We develop resources that help AI practitioners learn creative ways to incorporate AI explainability in product design. For example, in the PAIR Guidebook we launched a series of ethical case studies to help AI practitioners think through tricky issues and hone their skills for explaining AI. We also do fundamental research like this paper to learn more about how people perceive AI as a decision-maker, and what values they would like AI-powered products to embody.

We’ve learned that many AI practitioners want concrete examples of good explanations of AI that they can build on, so we’re currently developing a story-driven visual design toolkit for explanations of a fictional AI app. The toolkit will be publicly available, so teams in startups and tech companies everywhere can prioritize explainability in their work.

An illustration of a sailboat navigating the coast of Maine. The visual design toolkit provides story-driven examples of good explanations of AI.

I want to learn more about AI explainability. Where should I start?

This February, we released an Applied Digital Skills lesson, “Discover AI in Daily Life.” It’s a great place to start for anyone who wants to learn more about how we interact with AI every day.

We also hope to speak about AI explainability at the upcoming South by Southwest Conference. Our proposed session would dive deeper into these topics, including our visual design toolkit for product designers. If you’re interested in learning more about AI explainability and our work, you can vote for our proposal through the SXSW PanelPicker® here.

How Unni’s passion for social impact led him to Google

Welcome to the latest edition of “My Path to Google,” where we talk to Googlers, interns, apprentices and alumni about how they got to Google, what they do in their roles and how they prepared for their interviews.

In celebration of Asian Pacific American Heritage Month, today’s post features Unni Nair, a senior research strategist on Google’s Responsible Innovation team. As a second-generation Indian American, Unni’s background has helped shape his passion for sustainability and responsible artificial intelligence (AI).

What’s your role at Google?

I’m a senior research strategist on the Responsible Innovation team. In this role, I use Google’s AI Principles to help our teams build products that are both helpful and socially responsible. More specifically, I’m passionate about how we can proactively incorporate responsible AI into emerging technologies to drive sustainable development priorities. For example, I’ve been working with the Google Earth Engine team to align their work with our AI Principles, which we spoke about in a workshop at Google I/O. I helped the team develop a data set — used by governments, companies and researchers — to efficiently display information related to conservation, biodiversity, agriculture and forest management efforts.

Can you tell us a bit about yourself?

I was born in Scranton, Pennsylvania, but I lived in many different parts of the U.S., and often traveled internationally, throughout my childhood. Looking back, I realize how fortunate I was to live in and learn from so many different communities at such a young age. As a child of Indian immigrants, I was exposed to diverse ways of life and various forms of inequity. These experiences gave me a unique perspective on the world, helping me see the potential in every human being and nurturing a sense of duty to uplift others. It took dabbling in fields from social work to philosophy, and making lots of mistakes along the way, to figure out how to turn this passion into impact.

In honor of Asian Pacific American Heritage Month, how else has your background influenced your work?

I’m grateful for having roots in the 5,000+ year-old Indian civilization and am constantly reminded of its value working in Silicon Valley. One notable example that’s influenced my professional life is the concept of Ahimsa — the ethical principle of not causing harm to other living things. While historically its meaning has been more spiritual, in modern-day practice I’ve found it’s nurtured a respect for nature and a passion for sustainability and human rights in business. This contemporary interpretation of Ahimsa also encourages me to consider the far-reaching impacts — for better or for worse — that technology can have on people, the environment or society at large.

How did you ultimately end up at Google?

I was itching to work on more technology-driven solutions to global sustainability issues. I started to see that many of the world’s challenges are in part driven by macro forces like rapid globalization and technology growth. However, the sustainability field and development sector were slow to move beyond analog problem-solving. I wanted to explore unconventional solutions like artificial intelligence, so I taught myself the Python programming language and learned more about AI. I started hearing about Google’s AI-first approach to help users and society, with an emphasis on the need to develop that technology responsibly. So I applied to the Responsible Innovation team for the chance to create helpful technology with social benefit in mind.

Any advice for aspiring Googlers?

Google is one of those rare places where the impact you’re making isn’t just on a narrow band of users — it’s on society at large. So, take the time to reflect on what sort of impact you want to make in the world. Knowing your answer to that question will allow you to weave your past experiences into a cohesive narrative during the interview process. And more importantly, it will also serve as your personal guide when making important decisions throughout your career.