
Bringing new life to Swedish endangered animals using AR

According to the UN, more plants and animals are threatened with extinction now than in any other period of human history: approximately 1 million species globally. The accelerating pace of extinction is an urgent matter, and at this week’s UN biodiversity conference, representatives from countries all over the world are coming together virtually to set out a plan for better protecting our endangered ecosystems.

Sweden is home to much of the northern hemisphere’s iconic wildlife, from moose and bears to reindeer and wolverines, and currently has 2,249 threatened species, according to the IUCN Red List. Each of these animals plays a vital role in the ecosystem we are all a part of, yet according to a recent study by Kantar Sifo, 30% of Swedes either don’t believe there are, or don’t know whether there are, animals currently at risk of becoming extinct in Sweden.

Meet five endangered species in 3D

Today, in collaboration with the Swedish Society for Nature Conservation, and in an effort to raise awareness of endangered animals, we are bringing five Swedish endangered species to Search in augmented reality. By simply searching for the lynx, arctic fox, white-backed woodpecker, harbour porpoise or moss carder bee in the Google App and tapping “View in 3D”, people from all over the world can now meet these animals up close, at life-size scale, with movement and sound.

An image of augmented reality animals: a porpoise, a woodpecker, a lynx, an arctic fox and a flower with a bee

Experts from the Swedish Society for Nature Conservation selected these animals, based on the IUCN Red List, because they are endangered for different reasons in Sweden and represent different kinds of habitats. The white-backed woodpecker and the harbour porpoise (in the Baltic Sea) are “critically endangered”, with only a few individuals of each species left. The arctic fox has an “endangered” threat status due to its decreasing population, and the lynx and the moss carder bee are considered “vulnerable”, meaning their natural habitats need to be protected for these species to continue reproducing in the wild. These animals also exist in other regions outside the Nordics, where their threat levels vary from none to urgent.

Reasons for endangerment

  • The white-backed woodpecker is affected by logging
  • The harbour porpoise is affected by toxins and noise pollution
  • The arctic fox’s habitat is at risk due to climate change
  • The lynx is affected by traffic and illegal hunting
  • The moss carder bee is contending with a decreasing number of flowers

Preserving endangered animals is a complex effort that requires collective action. Everyone can do something, and by launching this new Search experience we hope that we can help people in and outside of Sweden learn more about the issues at hand and experience some of nature's beloved creatures up close.

Whether you want to take a photo with the arctic fox or teach your family about the moss carder bee, the #Google3D animals are available for anyone to try out starting today through Google Search.

Music, memories and mental health: An homage to Avicii

Today’s Google Doodle celebrates the life and legacy of Swedish DJ, record producer and songwriter Tim Bergling — also known by his stage name, Avicii — on what would have been his 32nd birthday. From producing hit songs that topped international charts to headlining festivals around the world, Tim will forever be remembered as one of the pioneers and most influential visionaries of electronic dance music.

In 2018, Tim passed away by suicide at 28 years old after struggling with mental health issues. In his memory, Tim’s father Klas and his mother Anki started the Tim Bergling Foundation to raise awareness and address the stigma around mental health among young people.

To remember Tim on this day as well as learn more about mental health, we talked to Tim’s father Klas Bergling.


Tell us about Tim in your own words — how do you remember him? 

Klas: Tim was a kind and open person, full of energy, stubbornness and integrity. He had a special set of attributes, and if you watched the documentary about his life, I think you can also tell he wasn’t really built for fame in the way he was exposed to it. 

Despite his success and fame, he remained humble and treated people with kindness and equal respect. 

Was there a moment when you understood how musically talented he was? 

Klas: When Tim was about 10 years old, he sang the Swedish national anthem at full volume. He really lived in the moment when doing that, and it was at times like this that I first understood there was something special there.

Being part of a generation that didn’t grow up with house music, I used to view it as a monotonous, repetitive beat. When I started taking power walks back in the early days of Tim’s career, listening to his music, I realized what beautiful melodies were captured within the songs. It was an “aha moment” —  this is really music — and I started needing it to get me going. Tim produced more melodic songs over the years, with “Bromance” being one of the big eye-openers to his talent for me personally.

Were there any moments you were especially proud of Tim during his career? 

Klas: Tim was such a special person, I was always proud of him for just being the person he was. In terms of his musical accomplishments, I will never forget when he played in a park called Strömparterren in Stockholm in the early days of his career. He’d told me explicitly to not come — maybe because it wasn’t very cool to have your father around at that age — but I went anyway and hid behind a tree. It was a great evening and I remember feeling surprised, amazed and very proud. When I came to find him backstage afterwards, he was so glad I came. 

Another especially proud moment was when Tim played Globen Arena, today named Avicii Arena in an effort to bring more attention to mental health, and I decided to sit completely by myself to take in the experience; and when the whole family went to watch him play at the “Summerburst” festival at Stockholm Olympic Stadium. He performed brilliantly at both shows; they were such great evenings.

A photograph showing Klas holding Tim as a young child.



After Tim passed away, you and your wife Anki started the Tim Bergling Foundation. Can you tell us about this work? 

Klas: After Tim’s suicide, a lot of people reached out to us. Some were in similar situations, but many were fans who’d been following him throughout the years. A lot of people told us that Tim and his songs meant a lot to them and that they felt like they knew him, which I think they did, in a sense.

The scale of mental health issues among young people is staggering. Tim was always interested in psychology and spirituality, and we wanted to honor him by doing what we could to help others. That’s how we brought the Tim Bergling Foundation to life, with the goal of contributing to young people’s mental health, lowering the rate of suicide among young people as well as removing the stigma around it. It’s not something you can do on your own, you need to cooperate broadly, and that’s what we try to do. We’re interested in bringing music into the picture as well, and have started working with organizations to spark young people's creativity by giving them better access to creating and remixing music of their own. 

What advice would you give to someone who has a friend or family member experiencing anxiety, depression or mental illness? 

Klas: It’s not always easy, not least due to the stigma around these topics; it can be hard to talk about. But that’s what we need to do — talk about it. Simple things like asking questions can go a long way in helping someone heal. And if you see someone moving in the wrong direction, you should encourage or help them seek support. 

I also think it’s very important for companies to get more engaged in these conversations and enable their employees to talk more openly about mental health. 

People everywhere grieved Tim’s passing and celebrated his legacy — what has that been like for your family? 

Klas: It’s given us great support in our sorrow and grief, a privilege we understand few in the same situation experience. You’ll always feel alone in a sense, but the love we’ve received from all around the world has meant a lot. I truly believe the small things — a smile, a short note — mean so much to people who are grieving. It can be hard to know what to do, and you often feel like whatever you do it’s not enough, but a few words often go a long way. 

Is there a song of Tim’s that has a special meaning to you?

Klas: I always come back to the song “Bromance.” It stands for so much of what Tim was, and it sends a message of friendship, which was always important to Tim.

Ask a Techspert: How do machine learning models explain themselves?

Editor’s Note: Do you ever feel like a fish out of water? Try being a tech novice and talking to an engineer at a place like Google. Ask a Techspert is a series on the Keyword asking Googler experts to explain complicated technology for the rest of us. This isn’t meant to be comprehensive, but just enough to make you sound smart at a dinner party. 

A few years ago, I learned that a translation from Finnish to English using Google Translate led to an unexpected outcome. The sentence “hän on lentäjä” became “he is a pilot” in English, even though “hän” is a gender-neutral word in Finnish. Why did Translate assume it was “he” as the default? 

As I started looking into it, I became aware that just like humans, machines are affected by society’s biases. The machine learning model for Translate relied on training data, which consisted of the input from hundreds of millions of already-translated examples from the web. “He” was more associated with some professions than “she” was, and vice versa. 
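
To make that mechanism concrete, here is a toy sketch (our illustration, not how Translate actually works) of how a gendered default can fall out of nothing more than co-occurrence counts in training text:

```python
# Toy illustration only: count which pronoun most often appears with a profession
# in a tiny "training corpus", showing how skewed association statistics produce a default.
from collections import Counter

corpus = [
    "he is a pilot", "he is a pilot", "he is a pilot", "she is a pilot",
    "she is a nurse", "she is a nurse", "he is a nurse",
]

def pronoun_counts(profession: str) -> Counter:
    """Count the leading pronoun of every sentence mentioning the profession."""
    counts = Counter()
    for sentence in corpus:
        if profession in sentence:
            counts[sentence.split()[0]] += 1
    return counts

# A system that always picks the most frequent pronoun would turn the
# gender-neutral Finnish "hän on lentäjä" into "he is a pilot".
print(pronoun_counts("pilot"))  # Counter({'he': 3, 'she': 1})
print(pronoun_counts("nurse"))  # Counter({'she': 2, 'he': 1})
```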

Now, Google provides options for both feminine and masculine translations when adapting gender-neutral words in several languages, and there’s a continued effort to roll it out more broadly. But it’s still a good example of how machine learning can reflect the biases we see all around us. Thankfully, there are teams at Google dedicated to finding human-centered solutions to making technology inclusive for everyone. I sat down with Been Kim, a Google researcher working on the People + AI Research (PAIR) team, who devotes her time to making sure artificial intelligence puts people, not machines, at its center, and helping others understand the full spectrum of human interaction with machine intelligence. We talked about how you make machine learning models easy to interpret and understand, and why it’s important for everybody to have a basic idea of how the technology works.

Been Kim

Why is this field of work so important?

Machine learning is such a powerful tool, and because of that, you want to make sure you’re using it responsibly. Let’s take an electric saw as an example. It’s a super powerful tool, but you need to learn how to use it so that you don’t cut your fingers. Once you learn, it’s so useful and efficient that you’ll never want to go back to using a hand saw. The same goes for machine learning. We want to help you understand and use machine learning correctly, fairly and safely.

Since machine learning is used in our everyday lives, it’s also important for everyone to understand how it impacts us. Whether you’re a coffee shop owner using machine learning to optimize the purchase of your beans based on seasonal trends, or a patient whose doctor diagnoses a disease with the help of this technology, it’s often crucial to understand why a machine learning model produced the outcome it did. Developers and decision-makers also need to be able to explain or present a machine learning model to people. This is what we call “interpretability.”

How do you make machine learning models easier to understand and interpret? 

There are many different ways to make an ML model easier to understand. One way is to make the model reflect how humans think from the start, and have the model "trained" to provide explanations along with predictions, meaning when it gives you an outcome, it also has to explain how it got there. 

Another way is to try to explain a model after it has been trained on data. This is the situation when the model was built simply to map inputs to outputs, optimizing for prediction, without any explanation of “how” built in. You can plug things into it and see what comes out, and that can give you some insight into how the model generally makes decisions, but you don’t necessarily know exactly how the model interprets specific inputs in specific cases.
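
To make that “plug things in and see what comes out” idea concrete, here is a minimal sketch (our illustration, not a method discussed in the interview; the predict_fn parameter stands in for any black-box image classifier): blank out one patch of the input at a time and watch how much the model’s confidence in its original prediction drops.

```python
# Occlusion-style probing of a black-box classifier: regions whose removal causes
# a large drop in the predicted-class score are treated as important.
import numpy as np

def occlusion_map(image: np.ndarray, predict_fn, patch: int = 16) -> np.ndarray:
    """image: (H, W, C) float array in [0, 1]; predict_fn maps an image to class probabilities."""
    baseline = predict_fn(image)
    target = int(np.argmax(baseline))               # the class we want to explain
    h, w, _ = image.shape
    heatmap = np.zeros((h // patch, w // patch))

    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 0.0     # blank out one patch
            drop = baseline[target] - predict_fn(occluded)[target]
            heatmap[i // patch, j // patch] = drop       # big drop = important region
    return heatmap
```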

One way to try to explain models after they’ve been trained is using low-level features or high-level concepts. Let me give you an example of what this means. Imagine a system that classifies pictures: you give it a picture and it says, “This is a cat.” With low-level features, I ask the machine which pixels mattered for that prediction, and it can point to particular pixels; we might be able to see that the pixels in question trace the cat’s whiskers. But we might also see a scattering of pixels that don’t appear meaningful to the human eye, or find that the model has made the wrong interpretation. High-level concepts are more similar to the way humans communicate with one another. Instead of asking about pixels, I’d ask, “Did the whiskers matter for the prediction? Or the paws?” and again, the machine can show me what imagery led it to reach this conclusion. Based on the outcome, I can understand the model better. (Together with researchers from Stanford, we’ve published papers that go into further detail on this for those who are interested.)
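
For the pixel-level “which pixels mattered” kind of explanation, one common and very simple technique is a gradient saliency map: backpropagate the top prediction’s score to the input image and look at how large the gradient is at each pixel. The sketch below is a generic illustration under assumptions of our own (a pretrained torchvision ResNet and standard ImageNet preprocessing), not the specific methods from the papers mentioned above:

```python
# Minimal gradient-saliency sketch: large input gradients mark pixels whose
# small changes would most affect the top class score.
import torch
from PIL import Image
from torchvision import models, transforms

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def pixel_saliency(image_path: str) -> torch.Tensor:
    """Return a (224, 224) map of how much each pixel influenced the top prediction."""
    x = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    x.requires_grad_(True)

    logits = model(x)
    top_class = logits.argmax(dim=1).item()
    logits[0, top_class].backward()          # gradient of the top score w.r.t. the pixels

    # Take the largest absolute gradient across colour channels for each pixel.
    return x.grad.abs().max(dim=1).values.squeeze(0)

# saliency = pixel_saliency("cat.jpg")  # e.g. check whether the whisker pixels light up
```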

Can machines understand some things that we humans can’t? 

Yes! This is an area that I am very interested in myself. I am currently working on a way to showcase how technology can help humans learn new things. Machine learning technology is better at some things than we are; for example it can analyze and interpret data at a much larger scale than humans can. Leveraging this technology, I believe we can enlighten human scientists with knowledge they haven't previously been aware of. 

What do you need to be careful of when you’re making conclusions based on machine learning models?

First of all, we have to be careful that human bias doesn’t come into play. Humans carry biases that we simply cannot help and are often unaware of, so if an explanation is left up to a human’s interpretation, and often it is, then we have a problem. Humans read what they want to read. Now, this doesn’t mean that you should remove humans from the loop. Humans communicate with machines, and vice versa. Machines need to communicate their outcomes in the form of a clear statement backed by quantitative data, not one that is vague and completely open to interpretation. If the latter happens, the machine hasn’t done a very good job, and the human can’t provide good feedback to the machine. It could also be that the outcome simply lacks additional context that only a human can provide, or that it would benefit from caveats, so that people can make an informed judgement about the model’s results.

What are some of the main challenges of this work? 

Well, one of the challenges for computer scientists in this field is dealing with non-mathematical objectives: things you might want to optimize for but don’t have an equation for. You can’t always define what is good for humans using math. That requires us to test and evaluate methods with rigor, and to have a table full of different people discussing the outcomes. Another challenge has to do with complexity. Humans are so complex that we have a whole field of work, psychology, devoted to studying them. So in my work, we don’t just face computational challenges, but also complex humans that we have to consider. Value-based questions such as “what defines fairness?” are even harder. They require interdisciplinary collaboration, and a diverse group of people in the room to discuss each individual matter.

What's the most exciting part? 

I think interpretability research and methods are making a huge impact. Machine learning technology is a powerful tool that will transform society as we know it, and helping others to use it safely is very rewarding. 

On a more personal note, I come from South Korea and grew up in circumstances where I feel I didn’t have too many opportunities. I was incredibly lucky to get a scholarship to MIT and come to the U.S. When I think about the people who haven’t had these opportunities to be educated in science or machine learning, and know that this technology could really help them in their everyday lives if they use it safely, I feel really motivated to work on democratizing it. There are many ways to do that, and interpretability is one of the things I can contribute.

Meet the Googlers working to ensure tech is for everyone

During their early studies and careers, Tiffany Deng, Tulsee Doshi and Timnit Gebru found themselves asking the same questions: Why is it that some products and services work better for some than others, and why isn’t everyone represented around the table when a decision is being made? Their collective passion to create a digital world that works for everyone is what brought the three women to Google, where they lead efforts to make machine learning systems fair and inclusive. 

I sat down with Tiffany, Tulsee and Timnit to discuss why working on machine learning fairness is so important, and how they came to work in this field.  

How would you explain your job to someone who isn't in tech?

Tiffany: I’d say my job is to make sure we’re not building the entrenched and embedded biases humans might have into the products people use, and that every time you pick up a product, a Google product, you as an individual can have a good experience using it.

Timnit: I help machines understand imagery and text. Just like a human, when a machine tries to learn a pattern or understand something, it is trained on the input it’s been given, and that input, the data in this case, can carry societal bias. This can lead to a biased outcome or prediction from the machine. My work is to figure out different ways of mitigating this bias.

Tulsee: My work includes making sure everyone has positive experiences with our products, and that people don’t feel excluded or stereotyped, especially based on their identities. The products should work for you as an individual, and provide the best experience possible. 

What made you want to work in this field?

Tulsee: When I started college, I was unsure of what I wanted to study. I came in with an interest in math, and quickly found myself taking a variety of classes in computer science, among other topics. But no matter which interesting courses I took, I often felt a disconnect between what I was studying and the people the work would help. I kept coming back to wanting to focus on people, and after taking classes like child psychology and philosophy of AI, I decided I wanted to take on a role where I could combine my skill sets with a people-centered approach. I think everyone has an experience of services and technology not working for them, and solving for that is a passion behind much of what I do.

Tiffany: After graduating from West Point I joined the army as an intelligence officer before becoming a consultant and working for the State Department and the Department of Defense. I then joined Facebook as a privacy manager for a period of time, and that’s when I started working on more ML fairness-related matters. When people ask me how I ended up where I am, I say that there’s never a straight path to finding your passion, and all the experiences I’ve had outside of tech are ones I bring into the work I’m doing today.

An important “aha moment” for me came about a year and a half ago, when my son had a rash all over his body and we went to the doctor for help. They told us they weren’t able to diagnose him because his skin wasn’t red, and of course, his skin won’t turn red because he has deep brown skin. Someone telling me they can’t diagnose my son because of his skin: that’s troubling as a parent. I wanted to understand the root cause of the issue: why is this not working for me and my family the way it does for others? Fast forward to today: thinking about how AI will someday be ubiquitous and an important component in assisting human decision-making, I wanted to get involved and help ensure that we’re building technology that works equally well for everyone.

Timnit: I grew up with a father and two sisters working in electrical engineering, so I followed their path and decided to also pursue studies in the field. After spending some time at Apple working as a circuit designer and starting my own company, I went back to studying image processing and completed a Ph.D. in computer vision. Towards the end of my Ph.D., I read a ProPublica article discussing racial bias in predicting crime recidivism rates. At the same time, I started thinking more about how there were very few, if any, Black people in grad school, and that whenever I went to conferences, Black people weren’t represented in the decisions driving this field of work. That’s how I came to found a nonprofit organization called Black in AI, along with Rediet Abebe, to increase the visibility of Black people working in the field. After graduating with my Ph.D. I did a postdoc at Microsoft Research, and soon after that I took a role at Google as the co-lead of the ethical AI research team, which was founded by Meg Mitchell.

What are some of the main challenges in this work, and why is it so important? 

Tulsee: The challenge question is interesting, and a hard one. First of all, there is the theoretical and sociological question of fairness: how does one define what is fair? Addressing fairness concerns requires multiple perspectives, and product development approaches ranging from technical to design. Because of this, even for use cases where we have a lot of experience, there are still many challenges for product teams in understanding the different approaches to measuring and tackling fairness concerns. This is one of the reasons why I believe tooling and resources are so critical, and why we’re investing in them for both internal and external purposes.

Another important aspect is company culture and how companies define their values and motivate their employees. We are starting to see a growing, industry-wide shift in what success looks like. As organizations and product creators are rewarded for thinking about a broader set of people when developing products, more companies will start fostering a diverse workforce, consulting external experts and thinking about whose voices are represented at the table. We need to remember we’re talking about real people’s experiences, and while working on these issues can sometimes be emotionally difficult, it’s so important to get right.

Timnit: A general challenge is that the people who are most negatively affected are often the ones whose voices are not heard. Representation is an important issue, and while ML technology offers many opportunities for society, it’s important to have a diverse set of people and perspectives involved in its development so you don’t end up widening the gap between different groups.

This is not an issue specific to ML. As an example, let’s think about DNA sequencing. The African continent has the most diverse DNA in the world, but I’ve read that it makes up less than 1 percent of the DNA studied in sequencing, and there are examples of researchers who have come to the wrong conclusions based on data that was not representative. Now imagine someone developing the next generation of drugs; the result could be that the drugs don’t work for certain groups because their DNA hasn’t been properly represented.

Do you think ML has the potential to help complement human decision making, and drive the world to become more fair?

Timnit: It’s important to recognize the complexity of the human mind, and that humans should not be replaced when it comes to decision making. I don’t think ML can make the world more fair: only humans can do that. And humans choose how to use this technology. In terms of opportunities, there are many ways in which we have already used ML systems to uncover societal bias, and this is something I work on as well. For example, studies by Jennifer Eberhardt and her collaborators at Stanford University, including Vinodkumar Prabhakaran, who has since joined our team, used natural language processing to analyze body camera recordings of police stops in Oakland. They found a pattern of police speaking less respectfully to Black people than to white people. A lot of times, when you show these issues backed up by data and scientific analysis, it can help make a case. At the same time, the history of scientific racism also shows that data can be used to propagate the most harmful societal biases of the day. Blindly trusting data-driven studies or decisions can be dangerous. It’s important to understand the context under which these studies are conducted, and to work with affected communities and other domain experts to formulate the questions that need to be addressed.

Tiffany: I think ML will be incredibly important in helping with things like climate change, sustainability and saving endangered animals. Timnit’s work on using AI to help identify diseased cassava plants is an incredible use of AI, especially in the developing world. The range of problems AI can aid humans with is endless; we just have to ensure we continue to build technological solutions with ethics and inclusion at the forefront of our conversations.