Tag Archives: Innovation & Technology

Paws meet machine learning with Pet Portraits

According to John Steinbeck, “I’ve seen a look in dogs’ eyes, a quickly vanishing look of amazed contempt, and I am convinced that basically dogs think humans are nuts.”

Perhaps Steinbeck’s dogs would have really thought we were nuts back in 2018 when people around the world used Art Selfie to search for their doppelgängers from across art history — with over 120 million selfies taken so far.

But now, pets can get in on the fun too! Today we are introducing Pet Portraits, a way for your dog, cat, fish, bird, reptile, horse, or rabbit to discover their very own art doubles among tens of thousands of works from partner institutions around the world. Your animal companion could be matched with ancient Egyptian figurines, vibrant Mexican street art, serene Chinese watercolors, and more. Just open the rainbow camera tab in the free Google Arts & Culture app for Android and iOS to get started and find out if your pet’s look-alikes are as fun as some of our favorite animal companions and their matches:

When you take a photo in Pet Portraits, our trained computer vision algorithm recognizes where your pet is, crops the image, and puts them where they belong: front and center. Once that is done, a machine learning algorithm matches your pet’s photo against tens of thousands of artworks from our partners’ outstanding collections to find the ones that look most similar. Then it’s time for them to enter the spotlight: share your pet’s #PetPortraits as a single still image, or select multiple images to animate together as a GIF slideshow.
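
For readers curious about the mechanics, the general recipe behind this kind of matching is an embedding lookup: represent the cropped pet photo and each artwork as feature vectors from a vision model, then rank artworks by cosine similarity. The sketch below illustrates that idea with an off-the-shelf ResNet backbone and a couple of hypothetical image files; it is not Google’s actual model or collection.

```python
# Minimal sketch of embedding-based image matching (not Google's pipeline).
# Assumes the pet detector has already produced a cropped photo on disk.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Any pretrained image-embedding model would do; a ResNet-50 is used here
# purely as a stand-in feature extractor.
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()   # drop the classifier, keep the 2048-d features
backbone.eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def embed(path: str) -> torch.Tensor:
    """Return a unit-length embedding for one image file."""
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        vec = backbone(img).squeeze(0)
    return vec / vec.norm()

# Hypothetical files: the cropped pet photo and a tiny artwork "collection".
pet = embed("pet_cropped.jpg")
artworks = {name: embed(name) for name in ["cat_figurine.jpg", "horse_watercolor.jpg"]}

# Cosine similarity of unit vectors is just a dot product; highest score wins.
best = max(artworks, key=lambda name: float(pet @ artworks[name]))
print("Closest match:", best)
```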

Additionally, Pet Portraits invites you to tap on your result to learn about the stories and artists behind each artwork. Keep on exploring Google Arts & Culture and discover more about our pawed, winged, and hooved friends throughout history. Get to know the 10 coolest cats or dogs of art history, dive into wonders of the natural world, or find out more about fantastic beasts in fiction and nature.

Ready to find your pet in art? Open up the free Google Arts & Culture app for Android or iOS and tap the rainbow camera button at the bottom of the page. Discover and share your most paw-fect #PetPortraits and don’t forget to tag us @googleartsculture on Instagram or @googlearts on Twitter! 🐾

How machine learning revived long lost masterpieces by Klimt

Few artists enjoy such worldwide fame as Gustav Klimt. The new Google Arts & Culture online retrospective "Klimt vs. Klimt - The Man of Contradictions" puts the spotlight on the artist's eclectic work and life. A Machine Learning experiment recolored photographs of lost Klimt paintings, while a “Pocket Gallery” brings some of his most iconic works into your living room in augmented reality and 3D. Together with more than 120 stories about his art and personality, a virtual tour of his studio, and many more highlights from the collections of over 30 cultural institutions around the world, "Klimt vs. Klimt" forms one of the most comprehensive online experiences about the artist.

Klimt’s legacy poses many unsolved questions, not least because approximately 20% of his artworks were lost over the course of history. Among the most prominent and painful losses are the so-called Faculty Paintings, created on behalf of the University of Vienna and rejected by the university for being overly critical of science. In 1945, only days before the Second World War ended, the paintings were destroyed in a fire at Immendorf Castle in Austria. What these major works looked like could only be guessed at from black and white photographs taken in the early 1900s, which could not convey the magic that makes Klimt’s artworks so captivating — the bold colours, the revolutionary approach to textures, the shocking directness of his figures. Until today.

Using the opportunities offered by machine learning, enhanced by the knowledge of internationally renowned Klimt expert and curator at the Belvedere, Dr. Franz Smola, the team at the Google Arts & Culture Lab was able to reconstruct the colours that Klimt might have used for the Faculty Paintings, thus restoring them to their fully colored beauty. For the first time in 70 years, people can experience his artworks in the colors he might have used.

Experience the art of Klimt in new ways online

The paintings are the true centerpiece of “Klimt vs. Klimt”. The retrospective brings together more than 120 of the artist’s most famous masterpieces, as well as lesser-known works, and assembles an expertly curated selection in an immersive Pocket Gallery that you can experience in augmented reality on mobile or in 3D on the web. This was made possible thanks to a collaboration between Google Arts & Culture and over 30 partners and institutions - with the Belvedere, the Albertina, the Klimt Foundation, the Neue Galerie New York and the Metropolitan Museum of Art among them. Over 60 masterworks by Klimt have also been captured in ultra high resolution with Google’s Art Camera. Come in closer to see “The Kiss” like never before!

Klimt expert Dr. Franz Smola

Meet the expert — Dr. Franz Smola


While creating “Klimt vs. Klimt” the Google Arts & Culture team was advised and guided by Dr. Franz Smola, curator at the Belvedere and acknowledged around the world as one of the foremost Klimt experts. He shared some of his thoughts on working on the project:

Why are Klimt’s Faculty Paintings so important?

Klimt’s three Faculty Paintings were among the largest artworks Klimt ever created and in the field of Symbolist painting they represent Klimt’s masterpieces.

What do you think about the recolored versions?

The colors were essential for the overwhelming effect of these paintings, and they caused quite a stir among Klimt’s contemporaries. Therefore the reconstruction of the colors is synonymous with recognizing the true value and significance of these outstanding artworks.

Is there something the digital presentation adds to how Klimt and his artworks can be perceived?

I am deeply impressed by the fantastic images taken with Google’s Art Camera. They allow you to really explore a work of art, to jump into its texture and color application and to discover every detail in the easiest way possible. I also like how technology allows ideas to come to life that have always been merely hypothetical — I am thinking of the Pocket Gallery we created, which contains a highlight selection of Klimt’s paintings, including some that were lost.

If Klimt were still alive, how do you think he would engage with digital technologies?

Klimt was a highly visual figure. He rarely commented on his work, rather inviting people to look at the work alone and draw their own conclusions. The “Klimt vs. Klimt” project primarily uses visual, non-verbal tools to convey Klimt’s work, which is very much in line with Klimt’s character. Klimt liked to lead a rather secluded life within the walls of his studio, to which only a few had access. I am certain he would have liked the idea of jumping from this remote and quiet place into the World Wide Web, having access to millions of artworks and seeing his art distributed and communicated around the world.

To explore “Klimt vs. Klimt - The Man of Contradictions” visit g.co/klimtvsklimt or download the free Google Arts & Culture app for iOS or Android.

A new dimension for cultural artifacts

At Google Arts & Culture we are always looking for ways to help people understand and learn about culture in new and engaging ways. Starting today, we are launching a new feature through which our 2,000 plus cultural partner institutions can create guided 3D tours about buildings, sculptures, furniture, and more from their collections. With the help of 3D Tours you can easily whiz around historic sites, monuments and places of interest while learning about their hidden details and historical backgrounds - all courtesy of 3D data from Google Earth.


So how about a personal guided tour through Tokyo’s tallest towers, Florence’s beautiful basilicas or South Africa’s historical halls? These and 16 other 3D Tours make use of ModelViewer —  a tool through which interactive 3D models can easily be displayed on the web and in augmented reality. Not only will you be able to navigate smoothly to each stop of the tour but objects along the way can also be viewed in AR. So while you explore the heights of Tokyo Tower, you can discover its historic inspiration in your own home.

Take a tour of Florence’s Basilica of Santa Croce


Climb into a famous artwork

Another way we are bringing art and culture to life is through Art Filter, a feature in the Google Arts & Culture camera tab that applies machine learning and augmented reality to turn you into a masterpiece. Today we have added five new artworks and artifacts to Art Filter for you to immerse yourself in. For example, become the Roman god of seasons as Arcimboldo’s Vertumnus, or cast a stony glare through the head of Medusa. 


How does it work?

Art Filter’s machine learning-based image processing positions the artifacts organically and smoothly on your head, or reacts to your facial expressions to make the filters as realistic as possible. What’s more, you can learn about each artwork from the fun facts that appear before the effect is applied. 
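
As a rough illustration of how AR face filters of this kind anchor artwork to a face, the sketch below uses the open-source MediaPipe Face Mesh to find a forehead landmark and mark where an overlay could be drawn. The library choice, landmark index, and file names are our assumptions for illustration, not the Art Filter implementation.

```python
# Illustrative face-landmark anchoring for an AR overlay (not Art Filter's code).
import cv2
import mediapipe as mp

face_mesh = mp.solutions.face_mesh.FaceMesh(static_image_mode=True, max_num_faces=1)

image = cv2.imread("selfie.jpg")                 # hypothetical input photo
rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
result = face_mesh.process(rgb)

if result.multi_face_landmarks:
    landmarks = result.multi_face_landmarks[0].landmark
    h, w = image.shape[:2]
    # Landmark 10 sits near the top of the forehead in the Face Mesh topology;
    # a headpiece overlay (a wreath, a crown...) could be anchored around it.
    top = landmarks[10]
    anchor = (int(top.x * w), int(top.y * h))
    cv2.circle(image, anchor, 8, (0, 255, 0), -1)  # stand-in for the overlay
    cv2.imwrite("selfie_annotated.jpg", image)
```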


We hope these 3D tours and new filter options will help you explore the hidden details of these historic artifacts and feel connected to cultural heritage around the world.  


Find the tours on the Google Arts & Culture site or app. Art Filter is available in the Camera Tab of the free Google Arts & Culture app for Android and iOS.

A whale of a tale about responsibility and AI

A couple of years ago, Google AI for Social Good’s Bioacoustics team created an ML model that helps the scientific community detect the presence of humpback whale sounds in acoustic recordings. This tool, developed in partnership with the National Oceanic and Atmospheric Administration (NOAA), helps biologists study whale behaviors, patterns, populations and potential human interactions. 
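
To give a flavor of how a detector like this is typically applied to long recordings, the sketch below slides a fixed window over a hydrophone file and scores each window. The scoring function here is a naive band-energy heuristic standing in for the trained classifier, and the file name, sample rate, window length and threshold are placeholders rather than details of the released model.

```python
# Generic sliding-window detection pattern for bioacoustics (toy heuristic,
# NOT the released humpback model).
import numpy as np
import librosa

SAMPLE_RATE = 10_000        # assumption: a few kHz of bandwidth covers humpback song
WINDOW_SECONDS = 4.0        # assumption: score the recording a few seconds at a time

def score_window(chunk: np.ndarray) -> float:
    """Stand-in classifier: fraction of spectral energy between 100 Hz and 2 kHz."""
    spectrum = np.abs(np.fft.rfft(chunk))
    freqs = np.fft.rfftfreq(len(chunk), d=1.0 / SAMPLE_RATE)
    band = (freqs >= 100) & (freqs <= 2000)
    return float(spectrum[band].sum() / (spectrum.sum() + 1e-9))

audio, _ = librosa.load("hydrophone_recording.wav", sr=SAMPLE_RATE, mono=True)
window = int(WINDOW_SECONDS * SAMPLE_RATE)
for start in range(0, len(audio) - window + 1, window):
    score = score_window(audio[start:start + window])
    if score > 0.8:          # arbitrary threshold for this toy heuristic
        print(f"possible call at {start / SAMPLE_RATE:.1f}s (score {score:.2f})")
```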

We realized other researchers could use this model for their work, too — it could help them better understand the oceans and protect key biodiversity areas. We wanted to freely share this model, but  struggled with a big dilemma: On one hand, it could help ocean scientists. On the other, though, we worried about whale poachers or other bad actors. What if they used our shared knowledge in a way we didn’t intend? 

We decided to consult with experts in the field in order to help us responsibly open source this machine learning model. We worked with Google's Responsible Innovation team to use our AI Principles — a guide to responsibly developing technology — to make a decision.

The team gave us the guidance we needed to open source a machine learning model that could be socially beneficial and was built and tested for safety, while also upholding high standards of scientific excellence for the marine biologists and researchers worldwide. 

On Earth Day — and every day — putting the AI Principles into practice is important to the communities we serve, on land and in the sea. 

Curious about diving deeper? You can use AI to explore thousands of hours of humpback whale songs and make your own discoveries with our Pattern Radio, see our collaboration with the U.S. National Oceanic and Atmospheric Administration, and check out our work with Fisheries and Oceans Canada (DFO) to apply machine learning to protect killer whales in the Salish Sea.

How we built a new tool without ever meeting in person

A little over a year ago, a group of us within Area 120, Google’s internal incubator, wanted to explore whether recorded video could help remote teams work better. Little did we know at the time that COVID-19 would soon send us all home, and we'd actually have to build the product remotely as well. That project became Threadit, short video recordings to share your work and connect with your team. 

Once we had a working  prototype, we started using Threadit to take back control of our working hours. Threadit, available from your browser or as a Chrome extension, helps you say and show more with a video message than with an email or chat. We use Threadit to show each other our progress, ask questions or request feedback without needing to coordinate schedules. This helps us reduce unnecessary meetings while still becoming a tighter-knit team. We have more time to think and do focused work, and the meetings we keep are more effective and easier to schedule for everyone. 

Today, Threadit is available to anyone who wants to try it. 


Record yourself and your screen

To use Threadit, simply speak straight to the camera or share your screen; if you don’t like how it sounded, just hit record and try it again. Record as many short clips as you’d like, and Threadit will stitch them all together into one cohesive video message. When you’re done, send it off to your team. Anyone can reply with their own video message when they’re ready — it’s all part of one conversation.

We know Threadit works because we used it ourselves. Our team has still never met in person. Instead of team whiteboarding sessions or quick updates around someone’s desk, we had to juggle work and family schedules. This meant more virtual meetings and lengthy text exchanges just to stay on the same page.


Show up how you want, when you want

People from all over the world helped us build Threadit, so using the tool became a great way to see one another without having to schedule live meetings across time zones. I’d send a Threadit to my colleagues in Japan during my normal working hours in Seattle; they’d respond during the hours that worked for them in Tokyo. Threadit helped us feel like we were working together in person, even though we were responding at different times from across the world — it built connections that email couldn’t. The best part? Nobody had to get up early or stay up late.

This became our new norm, whether with teammates in Tokyo or in their homes just down the street. I could record replies around putting my son down for a nap or cooking dinner, and review what I said so I came across how I wanted. Threadit gave us an opportunity to hear from everyone on our team, not just the loudest voices in a live meeting. We had more control over our time and could contribute when we were each ready.


How will you use Threadit? 

Since we started, we’ve seen teams use Threadit in different ways, from sharing sales presentations to recording product tutorials to sending leadership updates. We even started using Threadit as a way of celebrating team birthdays! 

Because we all have enough productivity tools to manage as is, we built Threadit to work the way you do. Access Threadit directly from your web browser or mobile device. If you get our Chrome extension, you can record yourself and anything on your screen at any time, even from within Gmail. Send a Threadit to anyone by simply sharing the link — no  download necessary. 



We’re excited for you to see how Threadit can help your team. Get started at threadit.area120.com.  

17,572 singers, in perfect harmony (from their own homes)

When you think of a choir, you likely put a descriptor before it: a school choir, a church choir, a community choir. Singing in a chorus usually means you’re standing within a large group of people, belting out songs and nailing those harmonies together. But what happens when you can’t gather in person to sing? 


That’s where virtual choirs come in. Composer and conductor Eric Whitacre has been putting them together for more than a decade, long before the pandemic left us stuck at home—and his most recent collaboration, which debuted on YouTube July 19, is his biggest project yet. 


Whitacre started organizing Virtual Choirs in 2009, when a fan uploaded a video of herself singing one of his choral compositions. He saw the video, then asked others to record themselves singing the other parts of the same composition to form a “choir.” That first group featured 185 singers, and each one since has grown larger and larger, to more than 8,000 voices for the fifth performance in 2018.


Eric Whitacre (Photo by Marc Royce)

This year, signups for Virtual Choir have skyrocketed. More than 17,000 singers from around the world found a way to participate in the sixth recording from the isolation of their own homes. They all learned “Sing Gently,” a song Whitacre composed during the pandemic. “Even early on, you’d be walking down the street in masks and you’d go out of your way to not pass someone,” Whitacre says. “A random stranger would become a threat. That was hard to see, and I was feeling that all over.” So the lyrics to “Sing Gently” encourage people to “live with compassion and empathy, and do this together,” he says. 


The Virtual Choir team uses every video submitted, unless there’s a technical problem with the recording. That means there are thousands of videos to sync together, and thousands of sound recordings to edit so the result sounds seamless. This time around, the team featured three sound editors, six people reviewing each submission and two executive producers; the team was scattered across the U.S., the U.K. and South Africa. Across three continents, they used Google Docs and Google Sheets to keep track of their progress, Google’s webmaster tools to manage thousands of email addresses and Google Translate to keep in touch with singers around the world. Singers checked the choir’s YouTube channels for rehearsal videos, footage of Whitacre conducting the song and Q&As with other singers and composers.


The video for "Sing Gently" features the song's lyrics and footage of the singers, who recorded from their homes.

It was also significant that these singers came together (figuratively speaking) at a time when musicians are suddenly out of work. “It’s an especially surreal moment for singers, because we’ve been labeled as superspreaders,” Whitacre laments, referring to a term for people who spread the disease more than others; in one instance, dozens of singers in Washington state were infected after a choir practice.  “Even just the act of singing is dangerous for other people.” He says he was struck by the number of participants who told him it felt good to sing with others again—even though they weren’t actually performing in the same room. 


Molly Jenkins, a choir lover based in North Carolina, was one of the 6,262 sopranos who took part in “Sing Gently.” She had always wanted to join a virtual choir, but never found the perfect time to give it a try. But since there’s no such thing as a perfect moment in a pandemic, she decided to figure out a way to make it work. 


With her phone in hand to hear the guide tracks, Molly practiced whenever and wherever she could: in the shower, at the kitchen table while working from home, in her front yard and while burping her baby. When it came time to record her track, there was one problem: finding a quiet place to record. “There was no space to record where a shrieking, gurgling baby wouldn’t interrupt the take,” she says. 

She ended up in her car on a rainy day, playing the conductor track on her laptop and recording her vocals on her phone. Sound engineers were able to isolate her vocal track from the background noise of the rain tapping on her windshield. “I’m just so glad I went for it,” Molly says.

Whitacre says that improvisational spirit is key to creating his choirs, and he’s grateful that technology can enable great collaborations despite social distancing. “It really speaks to the best of technology,” he says. “This, I think, is the best of the promise of the Internet.”

When fashion and choreography meet artificial intelligence

At the Google Arts & Culture Lab in Paris, we’re all about exploring the relationship between art and technology. Since 2012, we’ve worked with artists and creators from many fields, developing experiments that let you design patterns in augmented reality, co-create poetry, or experience multisensory art installations. Today we’re launching two experiments to test the potential of artificial intelligence in the worlds of contemporary dance and fashion.

For our first experiment, Runway Palette, we came together with The Business of Fashion, whose collection includes 140,000 photos of runway looks from almost 4,000 fashion shows. If you could attend one fashion show per day, it would take you more than ten years to see them all. By extracting the main colors of each look, we used machine learning to organize the images by color palette, resulting in an interactive visualization of four years of fashion by almost 1,000 designers.

Everyone can now use the color palette visualization to explore colors, designers, seasons, and trends that come from Fashion Weeks worldwide.  You can even snap or upload a picture of, let’s say, your closet, or autumn leaves, and discover how designers used a similar color palette in fashion.
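
The core step, extracting each look’s dominant colors so images can be compared by palette, can be approximated in a few lines. The sketch below uses k-means clustering over pixels; the library choice, number of colors, and file names are illustrative assumptions, not the Runway Palette pipeline itself.

```python
# Toy dominant-color extraction and palette matching (illustration only).
import numpy as np
from PIL import Image
from sklearn.cluster import KMeans

def dominant_colors(path: str, k: int = 5) -> np.ndarray:
    """Return k dominant RGB colors, ordered by how much of the image they cover."""
    img = Image.open(path).convert("RGB").resize((128, 128))   # downsample for speed
    pixels = np.asarray(img).reshape(-1, 3).astype(float)
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(pixels)
    order = np.argsort(-np.bincount(km.labels_))               # most frequent cluster first
    return km.cluster_centers_[order]

def palette_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Naive palette distance: compare same-ranked colors."""
    return float(np.linalg.norm(a - b, axis=1).mean())

# E.g. match a snapshot of autumn leaves against a few runway looks (hypothetical files).
query = dominant_colors("autumn_leaves.jpg")
looks = {f: dominant_colors(f) for f in ["look_001.jpg", "look_002.jpg"]}
print("Closest palette:", min(looks, key=lambda f: palette_distance(query, looks[f])))
```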

For our second experiment, Living Archive, we continued our collaboration with Wayne McGregor to create an AI-driven choreography tool. Trained on over 100 hours of dance performances from Wayne’s 25-year archive, the experiment uses machine learning to predict and generate movement in the style of Wayne’s dancers. In July of this year, Wayne and his dancers used the tool in the creative process for a new work that premiered at the LA Music Center.


Today, we are making this experiment available to everyone. Living Archive lets you explore almost half a million poses from Wayne’s extensive archive, organized by visual similarity. Use the experiment to make connections between poses, or capture  your own movement to create your very own choreography.
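
One common way to organize poses “by visual similarity” is to describe each pose as a vector of joint coordinates and run nearest-neighbor search over the archive. The sketch below shows that pattern with random stand-in data; the 17-keypoint layout and search library are assumptions, since the experiment’s actual pipeline isn’t described in this post.

```python
# Toy pose-similarity search: each pose is a flat vector of (x, y) joint positions.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
archive = rng.random((10_000, 17 * 2))      # stand-in for archived poses (17 joints, x/y each)

index = NearestNeighbors(n_neighbors=5, metric="euclidean").fit(archive)

captured_pose = rng.random((1, 17 * 2))     # stand-in for a pose captured from your camera
distances, neighbors = index.kneighbors(captured_pose)
print("closest archived poses:", neighbors[0], "at distances", distances[0].round(2))
```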

You can try our new experiments on the Google Arts & Culture experiments page or via our free app for iOS and Android.

Solving problems with AI for everyone

Today, we’re kicking off our annual I/O developer conference, which brings together more than 7,000 developers for a three-day event. I/O gives us a great chance to share some of Google’s latest innovations and show how they’re helping us solve problems for our users. We’re at an important inflection point in computing, and it’s exciting to be driving technology forward. It’s clear that technology can be a positive force and improve the quality of life for billions of people around the world. But it’s equally clear that we can’t just be wide-eyed about what we create. There are very real and important questions being raised about the impact of technology and the role it will play in our lives. We know the path ahead needs to be navigated carefully and deliberately—and we feel a deep sense of responsibility to get this right. It’s in that spirit that we’re approaching our core mission.

The need for useful and accessible information is as urgent today as it was when Google was founded nearly two decades ago. What’s changed is our ability to organize information and solve complex, real-world problems thanks to advances in AI.

Pushing the boundaries of AI to solve real-world problems

There’s a huge opportunity for AI to transform many fields. Already we’re seeing some encouraging applications in healthcare. Two years ago, Google developed a neural net that could detect signs of diabetic retinopathy using medical images of the eye. This year, the AI team showed our deep learning model could use those same images to predict a patient’s risk of a heart attack or stroke with a surprisingly high degree of accuracy. We published a paper on this research in February and look forward to working closely with the medical community to understand its potential. We’ve also found that our AI models are able to predict medical events, such as hospital readmissions and length of stay, by analyzing the pieces of information embedded in de-identified health records. These are powerful tools in a doctor’s hands and could have a profound impact on health outcomes for patients. We’re going to be publishing a paper on this research today and are working with hospitals and medical institutions to see how to use these insights in practice.

Another area where AI can solve important problems is accessibility. Take the example of captions. When you turn on the TV it's not uncommon to see people talking over one another. This makes a conversation hard to follow, especially if you’re hearing-impaired. But using audio and visual cues together, our researchers were able to isolate voices and caption each speaker separately. We call this technology Looking to Listen and are excited about its potential to improve captions for everyone.

Saving time across Gmail, Photos, and the Google Assistant

AI is working hard across Google products to save you time. One of the best examples of this is the new Smart Compose feature in Gmail. By understanding the context of an email, we can suggest phrases to help you write quickly and efficiently. In Photos, we make it easy to share a photo instantly via smart, inline suggestions. We’re also rolling out new features that let you quickly brighten a photo, give it a color pop, or even colorize old black and white pictures.

One of the biggest time-savers of all is the Google Assistant, which we announced two years ago at I/O. Today we shared our plans to make the Google Assistant more visual, more naturally conversational, and more helpful.

Thanks to our progress in language understanding, you’ll soon be able to have a natural back-and-forth conversation with the Google Assistant without repeating “Hey Google” for each follow-up request. We’re also adding half a dozen new voices to personalize your Google Assistant, plus one very recognizable one—John Legend (!). So, next time you ask Google to tell you the forecast or play “All of Me,” don’t be surprised if John Legend himself is around to help.

We’re also making the Assistant more visually assistive with new experiences for Smart Displays and phones. On mobile, we’ll give you a quick snapshot of your day with suggestions based on location, time of day, and recent interactions. And we’re bringing the Google Assistant to navigation in Google Maps, so you can get information while keeping your hands on the wheel and your eyes on the road.

Someday soon, your Google Assistant might be able to help with tasks that still require a phone call, like booking a haircut or verifying a store’s holiday hours. We call this new technology Google Duplex. It’s still early, and we need to get the experience right, but done correctly we believe this will save time for people and generate value for small businesses.

Understanding the world so we can help you navigate yours

AI’s progress in understanding the physical world has dramatically improved Google Maps and created new applications like Google Lens. Maps can now tell you if the business you’re looking for is open, how busy it is, and whether parking is easy to find before you arrive. Lens lets you just point your camera and get answers about everything from that building in front of you ... to the concert poster you passed ... to that lamp you liked in the store window.

Bringing you the top news from top sources

We know people turn to Google to provide dependable, high-quality information, especially in breaking news situations—and this is another area where AI can make a big difference. Using the latest technology, we set out to create a product that surfaces the news you care about from trusted sources while still giving you a full range of perspectives on events. Today, we’re launching the new Google News. It uses artificial intelligence to bring forward the best of human intelligence—great reporting done by journalists around the globe—and will help you stay on top of what’s important to you.


The new Google News uses AI to bring forward great reporting done by journalists around the globe and help you stay on top of what’s important to you.

Helping you focus on what matters

Advances in computing are helping us solve complex problems and deliver valuable time back to our users—which has been a big goal of ours from the beginning. But we also know technology creates its own challenges. For example, many of us feel tethered to our phones and worry about what we’ll miss if we’re not connected. We want to help people find the right balance and gain a sense of digital wellbeing. To that end, we’re going to release a series of features to help people understand their usage habits and use simple cues to disconnect when they want to, such as turning a phone over on a table to put it in “shush” mode, or “taking a break” from watching YouTube when a reminder pops up. We're also kicking off a longer-term effort to support digital wellbeing, including a user education site which is launching today.

These are just a few of the many, many announcements at Google I/O—for Android, the Google Assistant, Google News, Photos, Lens, Maps and more, please see our latest stories.

Making music using new sounds generated with machine learning

Technology has always played a role in inspiring musicians in new and creative ways. The guitar amp gave rock musicians a new palette of sounds to play with in the form of feedback and distortion. And the sounds generated by synths helped shape the sound of electronic music. But what about new technologies like machine learning models and algorithms? How might they play a role in creating new tools and possibilities for a musician’s creative process? Magenta, a research project within Google, is currently exploring answers to these questions.

Building upon past research in the field of machine learning and music, last year Magenta released NSynth (Neural Synthesizer). It’s a machine learning algorithm that uses deep neural networks to learn the characteristics of sounds, and then create a completely new sound based on these characteristics. Rather than combining or blending the sounds, NSynth synthesizes an entirely new sound using the acoustic qualities of the original sounds—so you could get a sound that’s part flute and part sitar all at once.
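
Conceptually, “synthesizing an entirely new sound from the acoustic qualities of the originals” means working in the model’s latent space rather than mixing waveforms: encode both sounds, blend their latent codes, and decode the result. The sketch below captures that idea with placeholder encoder and decoder functions; it is a conceptual illustration, not the Magenta API.

```python
# Conceptual latent-space interpolation, the idea behind NSynth-style blending.
# `encoder` and `decoder` stand in for the trained networks (placeholders).
import numpy as np

def interpolate_sounds(encoder, decoder, flute: np.ndarray, sitar: np.ndarray,
                       mix: float = 0.5) -> np.ndarray:
    """Blend two sounds in latent space rather than mixing their waveforms."""
    z_flute = encoder(flute)             # latent "acoustic fingerprint" of sound A
    z_sitar = encoder(sitar)             # latent "acoustic fingerprint" of sound B
    z_new = (1.0 - mix) * z_flute + mix * z_sitar   # a point between the two sounds
    return decoder(z_new)                # an entirely new waveform, part flute, part sitar
```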

Since then, Magenta has continued to experiment with different musical interfaces and tools to make the algorithm more easily accessible and playable. As part of this exploration, Google Creative Lab and Magenta collaborated to create NSynth Super. It’s an open source experimental instrument which gives musicians the ability to explore new sounds generated with the NSynth algorithm.

Making music using new sounds generated with machine learning

To create our prototype, we recorded 16 original source sounds across a range of 15 pitches and fed them into the NSynth algorithm. The outputs, over 100,000 new sounds, were precomputed and loaded into NSynth Super. Using the dials, musicians select the source sounds they would like to explore between, and drag a finger across the touchscreen to navigate the new, unique sounds that combine their acoustic qualities. NSynth Super can be played via any MIDI source, like a DAW, sequencer or keyboard.
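
One way to picture the touchscreen interaction is as bilinear weighting: the finger’s position determines how much each of the four dial-selected corner sounds contributes to what you hear. The sketch below turns a touch position into those weights and blends stand-in latent codes; the names and numbers are our assumptions for illustration, not the NSynth Super firmware.

```python
# Toy model of a four-corner touch surface blending precomputed sound codes.
import numpy as np

def corner_weights(x: float, y: float) -> dict:
    """x, y in [0, 1] from the touch surface -> weight per corner sound."""
    return {
        "top_left":     (1 - x) * (1 - y),
        "top_right":    x * (1 - y),
        "bottom_left":  (1 - x) * y,
        "bottom_right": x * y,
    }

# Stand-ins for precomputed latent codes of the four dial-selected source sounds.
latents = {name: np.random.default_rng(i).random(16)
           for i, name in enumerate(["top_left", "top_right", "bottom_left", "bottom_right"])}

weights = corner_weights(x=0.25, y=0.8)      # finger near the bottom-left corner
blended = sum(w * latents[name] for name, w in weights.items())
print(blended.round(2))
```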


Part of the goal of Magenta is to close the gap between artistic creativity and machine learning. It’s why we work with a community of artists, coders and machine learning researchers to learn more about how machine learning tools might empower creators. It’s also why we create everything, including NSynth Super, with open source libraries such as TensorFlow and openFrameworks. If you’re a maker, musician, or both, all of the source code, schematics, and design templates are available for download on GitHub.


New sounds are powerful. They can inspire musicians in creative and unexpected ways, and sometimes they might go on to define an entirely new musical style or genre. It’s impossible to predict where the new sounds generated by machine learning tools might take a musician, but we're hoping they lead to even more musical experimentation and creativity.


Learn more about NSynth Super at g.co/nsynthsuper.

The #MyFutureMe winner is often the only girl—but she’s going to change that

Editor’s note: Earlier this year, Made with Code teamed up with Snap Inc. to host #MyFutureMe, a competition for teens to code their own Snapchat geofilters and write their vision for the future. 22,000 teens submitted designs and shared their visions, and Zoe Lynch—a ninth-grader from South Orange, NJ—was recently named the winner by a panel of judges, including Malala Yousafzai, Lilly Singh, Snap CEO Evan Spiegel and our own CFO Ruth Porat. We chatted with Zoe about her experience, how she made her filter, and why it’s important for more girls to get into coding.

What was the inspiration behind your filter?


The brain has fascinated me since I was younger—it’s where creativity and ideas come from so I wanted to use that. The coding project had peace signs, so I had the idea to manipulate the peace signs to look like a brain. The idea for my filter was what can happen when everyone puts their brain power together. When we do that, we are unstoppable.

After you became a finalist, you attended TEDWomen. What was that like?

It was crazy inspiring. It showed me how many powerful and cool women are out there opening paths for girls like me. I got to meet the other finalists, and we created a group chat on Snap, so that we can follow each other and stay connected. We’ve been each other’s biggest cheerleaders. All these girls are going to do awesome things. Tech mogul alert!

How did you feel when you found out that you were selected as the final winner?

I couldn’t believe it! Everyone was so talented and worked hard, but I was so happy that my ideas and creativity were recognized. To win a trip to visit Google and Snapchat was like a dream!

What advice do you have for other girls who want to learn how to code?

I know a lot of girls who think they’re not good at this kind of stuff, but most of them haven’t even tried it. So you have to try it because otherwise you won’t know if you’ll like it. I loved #MyFutureMe because teens are really into Snapchat and the different filters you can use. When you have an opportunity to make a filter, you realize that coding is behind it all.

“My vision for the future is one where innovation is accessible to all. As a multiracial girl, I believe it’s important for everyone to be included.” (Excerpt from Zoe’s vision for the future)

You care a lot about inclusion—have you faced situations when inclusion has been a challenge?

When I go to camps or explore things in the engineering field, I’m often the only girl and the only person of color. Usually all the guys go together and it’s kind of discouraging, but I want to try to change that for other girls, so we don’t have to feel this way anymore.

What do you like to do outside of school?

I love to play video games—my favorite is “Uncharted”—but many of them are not really targeted to women. For women, the game is fun but you know deep down that it’s not really made for you. If I was going to make a video game, it would be an engineering game but you’re helping people. Say you want to build a bridge in the game, you’d need to use mathematics and engineering to make it work.

Who are your role models?

My mom. Hands down. She’s a Hispanic woman, and there are only white males at her level at her company, which is where my passion for inclusion started. She’s also pushed me and has always supported me.

You recently visited Snapchat and Google. What was the coolest part of the tour?

Besides the amazing offices (free food!), the coolest part was meeting the engineers. I was so inspired by their journeys and how different they all were. One was an actress, another a gamer, and another wasn’t even sure of her major until she took her first CS class in college. It showed me that there are many paths to getting into tech.

Zoe on her tour at Snapchat in Venice, CA.

If you could have any job at Google, what would it be?

I’d want to be an engineer in artificial intelligence—I think that technology and machine learning could change the world. I’d like to see more women and people of color in the field, too.

Zoe chats with an engineer at Google.

What do you think the future will look like when you’re 30?

I’m hoping that in the future, everyone works together. And it’ll be cool to live through new technology breakthroughs!
