
Hear what Google’s first Responsible Innovation intern learned

In 2018, we launched Google’s AI Principles to ensure we’re building AI that not only solves important problems and helps people in their daily lives, but is also ethical, fair and safe. At the same time, we launched a central Responsible Innovation team to hold the rest of Google accountable to these AI Principles. As the team grows, we continue to incorporate the perspectives and ideas of people from around the world — and this spring we welcomed our first intern, Lieke Dom. Lieke is based in Amsterdam, recently got her Master’s in Digital Business & Innovation, and is completing her Master’s in Applied Ethics.

I sat down with Lieke to learn more about her experience so far, including how her educational career led her here and what she’s learned from the internship.

Can you tell me a bit about your background?

In undergrad, I studied Communication Science and had some exposure to subjects like ethics and philosophy of technology. Studying at a technical university triggered my interest in this field, so I started a Master’s in Philosophy of Science, Technology & Society. While I felt the tools and methodologies that you learn in philosophy are important to technology and business, I realized I didn’t want to go into pure philosophy as my main profession.

Why is that?

I think of ethical decision making as a skill that’s essential to most — if not all — professions. In order for a company, or a society, to truly build ethical technology, everyone involved in the research and product development process has to be equipped with ethical and responsible problem solving skills.

How did this thinking shape your educational focus?

I wanted to think about ethical problems with an emphasis on how we can apply methodologies from ethics and philosophy to contemporary issues. So, I pivoted to a Digital Business & Innovation degree followed by a Master’s in Applied Ethics, both of which I’m completing during my internship. By combining these programs, I learned a lot about the opportunities technology provides businesses and the challenges that arise as a result of technological innovation.

Both of those degrees seem really well suited for the field of Responsible Innovation — did you know this was the field you wanted to go into when you chose those degrees?

While I knew I wanted to go into a field that combined ethics and technology, for most of my academic career I didn’t know that a team like the Responsible Innovation team existed. I chose studies based on my interests, but I wasn’t sure what they could bring me in my later career. Then, during my first Master’s, a friend of mine gave me a book by Barbara Sher called Refuse to Choose!, which highlights the power of combining seemingly distinct fields. Reading about other people who didn’t pick a single track and instead studied what interested them made me realize that the most important thing is that your journey makes sense to you. Although my degrees felt pretty haphazard to others, it made sense to me how these areas complement each other. Still, I was unsure how they would come together in a professional career. So I was excited to find out about Google’s Responsible Innovation initiatives and AI Principles, and eventually to find a role on this team.

Did your understanding of tech ethics change during your internship?

During my internship I got to sit in on some AI Principles Reviews, a process that assesses proposals for new AI research and applications for alignment with our Principles. I’m also working on expanding our body of external case studies so that we can share our learnings with AI practitioners everywhere — my colleague Dr. Molly FitzMorris recently published our team’s first business school case study in partnership with the Berkeley Haas School of Business. I’ve enjoyed working on these case studies because they show how our Principles are operationalized across the whole company.

These experiences deepened my belief that ethical decision making is an important skill for everyone to have — developers, designers and researchers, not just teams like Responsible Innovation. Being on this team has also reinforced that it’s essential to have people tasked with taking deep dives into what the ethical development of technologies like AI should look like, and with ensuring that others put those ideas into practice. Ethics isn’t fixed or static, so it’s important to have people who devote themselves to it completely.

Can you share any key learnings and takeaways from your internship?

Stay eager to learn, and always ask a lot of questions. Find what genuinely interests you, and don’t be afraid if that strays from traditional or linear career paths; even if those areas don’t seem directly related, interdisciplinary skills and thinking are incredibly valuable.

And if you’re interested in going into tech, don’t limit yourself to purely technical fields. These days, technology is interwoven into almost all aspects of our everyday lives. Understanding the human and cultural components of new technology is essential to understanding its broader impact — and ensuring that it is really serving everyone.

A look at the Responsible Innovation Fellowship

Responsible AI is an emerging field. While AI enables us to solve problems in exciting new ways, the scale of its impact raises new challenges. As a result, it’s important to develop AI responsibly so that it can empower and be helpful to everyone.

As the next generation of technologists prepares to enter the field, the demands of education are changing accordingly. Computer science education has historically been synonymous with learning to code. But today, as advanced technology and AI begin to impact almost every facet of our lives — from college admissions and medical diagnoses to social media and credit limits — technologists need to be educated and prepared not only to understand the societal impacts of technology, but also to do so from a diverse set of perspectives, cultures and communities.

The Responsible Innovation Fellowship (RIF) program was designed by Google’s Responsible Innovation team members Cherish Molezion and Kendra Schultz to help meet this need. The fellowship supports career exploration and equips students with the knowledge and skills they need to enter the field of responsible innovation. RIF encourages students from currently underrepresented backgrounds — including students from Minority Serving Institutions, such as Historically Black Colleges & Universities (HBCUs), Hispanic-Serving Institutions (HSIs), and Historically Women’s Colleges (HWCs) — to apply. Additionally, the program fosters collaboration between technical and non-technical fields, demonstrating the need for both perspectives in technology. RIF welcomes applications from humanities and social sciences students as well as computer science students.

As Cherish explains, RIF aims to “provide equitable opportunity and community for budding technologists to exercise their ethical imaginations. Through diversifying the makeup of the responsible technology landscape by expanding across all regions, we can truly uphold our mission to build for everyone.”

This spring, the RIF team welcomed the inaugural cohort, engaging 20 students during the 5-week fellowship. The program covered topics ranging from human and social ethics in AI to socio-technical harms and synthetic media, and ended with a final capstone presentation where students applied the methods they learned to assess ethical considerations for a hypothetical AI application of their choosing. “My team and I are inspired by the students’ ingenuity,” says Jen Gennai, Google’s Director of Responsible Innovation. “We learned from their distinct lived experiences and perspectives.”

The students are now bringing their learnings back to their schools and communities. April LaGrone, a public health student at Western Michigan University, plans to apply her responsible AI learnings in the health field. “I’m passionate about equitable research, ensuring research study participants have control, power and a voice,” she says. “In the same way that AI literacy is important, we need to focus on health literacy so people understand what their rights are, become educated on their health care options, and are able to advocate for themselves. Explainability and health literacy go hand-in-hand.”

José C. Sánchez Curet, a student at the University of Puerto Rico, Río Piedras, hopes to share his learnings with other students. José is working with leaders in his university’s computer science department. “I hope to teach seminars about AI ethics and foster conversations where we can collaborate on possible improvements to responsible technology development.”

Christina Carpenter, an elementary education student at Bay Path University and a mom, hopes to apply RIF insights to her career and to everyday life. “We’re creating the next generation of innovators, and I hope to teach little minds how to be responsible with technology and the negative impacts of certain types of technology use.” With her son heading off to middle school soon, Christina is mindful that “he’s grown up with electronics his whole life, and this is when he needs to learn how to be responsible with technology.”

To celebrate the lessons learned and accomplishments of the entire cohort, we’ve put together a Responsible Innovation Fellowship Yearbook, where you can read about all of the Fellows and how they plan on applying what they learned to their careers.

The Responsible Innovation Fellowship program will soon be accepting applications for future cohorts — join the mailing list here.

Helping people understand AI

If you’re like me, you may have noticed that AI has become a part of daily life. I wake up each morning and ask my smart assistant about the weather. I recently applied for a new credit card and the credit limit was likely determined by a machine learning model. And while typing the previous sentence, I got a word choice suggestion that “probably” might flow better than “likely,” a suggestion powered by AI.

As a member of Google’s Responsible Innovation team, I think a lot about how AI works and how to develop it responsibly. Recently, I spoke with Patrick Gage Kelley, Head of Research Strategy on Google’s Trust & Safety team, to learn more about developing products that help people recognize and understand AI in their daily lives.

How do you help people navigate a world with so much AI?

My goal is to ensure that people, at a basic level, know how AI works and how it impacts their lives. AI systems can be really complicated, but the goal of explaining AI isn’t to get everyone to become programmers and understand all of the technical details — it’s to make sure people understand the parts that matter to them.

When AI makes a decision that affects people (whether it’s recommending a video or determining eligibility for a loan), we want to explain how that decision was made. And we don’t want to just provide a complicated technical explanation — rather, we want to offer information that is meaningful, helpful, and equips people to act if needed.

We also want to find the best times to explain AI. Our goal is to help people develop AI literacy early, including in primary and secondary education. And when people use products that rely on AI (everything from online services to medical devices), we want to include a lot of chances for people to learn about the role AI plays, as well as its benefits and limitations. For example, if people are told early on what kinds of mistakes AI-powered products are likely to make, then they are better prepared to understand and remedy situations that might arise.

Do I need to be a mathematician or programmer to have a meaningful understanding of AI?

No! A good metaphor here is financial literacy. While we may not need to know every detail of what goes into interest rate hikes or the intricacies of financial markets, it’s important to know how they impact us — from paying off credit cards to buying a home or paying for student loans. In the same way, AI explainability isn’t about understanding every technical aspect of a machine learning algorithm — it’s about knowing how to interact with it and how it impacts our daily lives.

How should AI practitioners — developers, designers, researchers, students, and others — think about AI explainability?

Lots of practitioners are doing important work on explainability. Some focus on interpretability, making it easier to identify the specific factors that influence a decision. Others focus on providing “in-the-moment explanations” right when AI makes a decision. These can be helpful, especially when carefully designed. However, AI systems are often so complex that we can’t rely entirely on in-the-moment explanations — it’s just too much information to pack into a single moment. Instead, AI education and literacy should be incorporated into the entire user journey and built continuously throughout a person’s life.

More generally, AI practitioners should think about AI explainability as fundamental to the design and development of the entire product experience. At Google, we use our AI Principles to guide responsible technology development. In accordance with AI Principle #4: “Be accountable to people,” we encourage AI practitioners to think about all the moments and ways they can help people understand how AI operates and makes decisions.

How are you and your collaborators working to improve explanations of AI?

We develop resources that help AI practitioners learn creative ways to incorporate AI explainability in product design. For example, in the PAIR Guidebook we launched a series of ethical case studies to help AI practitioners think through tricky issues and hone their skills for explaining AI. We also do fundamental research like this paper to learn more about how people perceive AI as a decision-maker, and what values they would like AI-powered products to embody.

We’ve learned that many AI practitioners want concrete examples of good explanations of AI that they can build on, so we’re currently developing a story-driven visual design toolkit for explanations of a fictional AI app. The toolkit will be publicly available, so teams in startups and tech companies everywhere can prioritize explainability in their work.

Illustration of a sailboat navigating the coast of Maine, from the visual design toolkit, which provides story-driven examples of good explanations of AI.

I want to learn more about AI explainability. Where should I start?

This February, we released an Applied Digital Skills lesson, “Discover AI in Daily Life.” It’s a great place to start for anyone who wants to learn more about how we interact with AI every day.

We also hope to speak about AI explainability at the upcoming South by Southwest Conference. Our proposed session would dive deeper into these topics, including our visual design toolkit for product designers. If you’re interested in learning more about AI explainability and our work, you can vote for our proposal through the SXSW PanelPicker® here.

How Unni’s passion for social impact led him to Google

Welcome to the latest edition of “My Path to Google,” where we talk to Googlers, interns, apprentices and alumni about how they got to Google, what they do in their roles and how they prepared for their interviews.

In celebration of Asian Pacific American Heritage Month, today’s post features Unni Nair, a senior research strategist on Google’s Responsible Innovation team. As a second-generation Indian American, Unni’s background has helped shape his passion for sustainability and responsible artificial intelligence (AI).

What’s your role at Google?

I’m a senior research strategist on the Responsible Innovation team. In this role, I use Google’s AI Principles to help our teams build products that are both helpful and socially responsible. More specifically, I’m passionate about how we can proactively incorporate responsible AI into emerging technologies to drive sustainable development priorities. For example, I’ve been working with the Google Earth Engine team to align their work with our AI Principles, which we spoke about in a workshop at Google I/O. I helped the team develop a data set — used by governments, companies and researchers — to efficiently display information related to conservation, biodiversity, agriculture and forest management efforts.

Can you tell us a bit about yourself?

I was born in Scranton, Pennsylvania, but I lived in many different parts of the U.S., and often traveled internationally, throughout my childhood. Looking back, I realize how fortunate I was to live in and learn from so many different communities at such a young age. As a child of Indian immigrants, I was exposed to diverse ways of life and various forms of inequity. These experiences gave me a unique perspective on the world, helping me see the potential in every human being and nurturing a sense of duty to uplift others. It took dabbling in fields from social work to philosophy, and making lots of mistakes along the way, to figure out how to turn this passion into impact.

In honor of Asian Pacific American Heritage Month, how else has your background influenced your work?

I’m grateful for having roots in the 5,000+ year-old Indian civilization and am constantly reminded of its value working in Silicon Valley. One notable example that’s influenced my professional life is the concept of Ahimsa — the ethical principle of not causing harm to other living things. While its historical meaning is more spiritual, in modern-day practice I’ve found it’s nurtured a respect for nature and a passion for sustainability and human rights in business. This contemporary interpretation of Ahimsa also encourages me to consider the far-reaching impacts — for better or for worse — that technology can have on people, the environment or society at large.

How did you ultimately end up at Google?

I was itching to work on more technology-driven solutions to global sustainability issues. I started to see that many of the world’s challenges are in part driven by macro forces like rapid globalization and technology growth. However, the sustainability field and development sector were slow to move beyond analog problem solving. I wanted to explore unconventional solutions like artificial intelligence, which is why I taught myself the Python programming language and learned more about AI. I started hearing about Google’s AI-first approach to helping users and society, with an emphasis on the need to develop that technology responsibly. So I applied to the Responsible Innovation team for the chance to create helpful technology with social benefit in mind.

Any advice for aspiring Googlers?

Google is one of those rare places where the impact you’re making isn’t just on a narrow band of users — it’s on society at large. So, take the time to reflect on what sort of impact you want to make in the world. Knowing your answer to that question will allow you to weave your past experiences into a cohesive narrative during the interview process. And more importantly, it will also serve as your personal guide when making important decisions throughout your career.