Tag Archives: Innovation & Technology

A whale of a tale about responsibility and AI

A couple of years ago, Google AI for Social Good’s Bioacoustics team created an ML model that helps the scientific community detect the presence of humpback whale sounds in acoustic recordings. This tool, developed in partnership with the National Oceanic and Atmospheric Administration (NOAA), helps biologists study whale behaviors, patterns, populations and potential human interactions. 
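
The model itself is a deep classifier, but the surrounding pipeline is easy to picture: slice a long recording into short frames and score each one for whale-like sound. The sketch below is a deliberately simplified stand-in — a band-energy detector rather than a neural network, with an illustrative frequency band and threshold that are not taken from the real system:

```python
import numpy as np

def band_energy_detector(audio, sr, low_hz=100, high_hz=2000,
                         frame_len=1024, threshold=0.5):
    """Flag frames whose spectral energy falls mostly in a target band.

    A toy stand-in for the released classifier: the real model is a
    deep network, but the framing of the problem is the same --
    split the recording into frames and score each one.
    """
    flags = []
    for i in range(len(audio) // frame_len):
        frame = audio[i * frame_len:(i + 1) * frame_len]
        spectrum = np.abs(np.fft.rfft(frame))
        freqs = np.fft.rfftfreq(frame_len, d=1.0 / sr)
        in_band = (freqs >= low_hz) & (freqs <= high_hz)
        ratio = spectrum[in_band].sum() / (spectrum.sum() + 1e-12)
        flags.append(ratio > threshold)
    return flags
```

In the real pipeline, each flagged frame would go to the neural network for proper classification; the point here is only the frame-and-score structure.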

We realized other researchers could use this model for their work, too — it could help them better understand the oceans and protect key biodiversity areas. We wanted to share this model freely, but we struggled with a big dilemma: On one hand, it could help ocean scientists. On the other, we worried about whale poachers or other bad actors. What if they used our shared knowledge in a way we didn’t intend? 

We decided to consult with experts in the field in order to help us responsibly open source this machine learning model. We worked with Google's Responsible Innovation team to use our AI Principles — a guide to responsibly developing technology — to make a decision.

The team gave us the guidance we needed to open source a machine learning model that could be socially beneficial and was built and tested for safety, while also upholding high standards of scientific excellence for the marine biologists and researchers worldwide. 

On Earth Day — and every day — putting the AI Principles into practice is important to the communities we serve, on land and in the sea. 

Curious about diving deeper? You can use AI to explore thousands of hours of humpback whale songs and make your own discoveries with our Pattern Radio. You can also see our collaboration with the U.S. National Oceanic and Atmospheric Administration, as well as our work with Fisheries and Oceans Canada (DFO) to apply machine learning to protect killer whales in the Salish Sea.

How we built a new tool without ever meeting in person

A little over a year ago, a group of us within Area 120, Google’s internal incubator, wanted to explore whether recorded video could help remote teams work better. Little did we know at the time that COVID-19 would soon send us all home, and we'd actually have to build the product remotely as well. That project became Threadit: short video recordings to share your work and connect with your team. 

Once we had a working prototype, we started using Threadit to take back control of our working hours. Threadit, available from your browser or as a Chrome extension, helps you say and show more with a video message than with an email or chat. We use Threadit to show each other our progress, ask questions or request feedback without needing to coordinate schedules. This helps us reduce unnecessary meetings while still becoming a tighter-knit team. We have more time to think and do focused work, and the meetings we keep are more effective and easier to schedule for everyone. 

Today, Threadit is available to anyone who wants to try it. 

Threadit screenshots

Record yourself and your screen

To use Threadit, simply speak straight to the camera or share your screen; if you don’t like how it sounded, just hit record and try it again. Record as many short clips as you’d like, and Threadit will stitch them all together into one cohesive video message. When you’re done, send it off to your team. Anyone can reply with their own video message when they’re ready — it’s all part of one conversation.

We know Threadit works because we used it ourselves; our team has still never met in person. Without team whiteboarding sessions or quick updates around someone’s desk, we had to juggle work and family schedules, which meant more virtual meetings and lengthy text exchanges just to stay on the same page.


Show up how you want, when you want

People from all over the world helped us build Threadit, so using the tool became a great way to see one another without having to schedule live meetings across time zones. I’d send a Threadit to my colleagues in Japan during my normal working hours in Seattle; they’d respond during the hours that worked for them in Tokyo. Threadit helped us feel like we were working together in person, even though we were responding at different times from across the world — it built connections that email couldn’t. The best part? Nobody had to get up early or stay up late.

This became our new norm, whether with teammates in Tokyo or in their homes just down the street. I could record replies around putting my son down for a nap or cooking dinner, and review what I said so I came across how I wanted. Threadit gave us an opportunity to hear from everyone on our team, not just the loudest voices in a live meeting. We had more control over our time and could contribute when we were each ready.


How will you use Threadit? 

Since we started, we’ve seen teams use Threadit in different ways, from sharing sales presentations to recording product tutorials to sending leadership updates. We even started using Threadit as a way of celebrating team birthdays! 

Because we all have enough productivity tools to manage as is, we built Threadit to work the way you do. Access Threadit directly from your web browser or mobile device. If you get our Chrome extension, you can record yourself and anything on your screen at any time, even from within Gmail. Send a Threadit to anyone by simply sharing the link — no download necessary. 



We’re excited for you to see how Threadit can help your team. Get started at threadit.area120.com.  

17,572 singers, in perfect harmony (from their own homes)

When you think of a choir, you likely put a descriptor before it: a school choir, a church choir, a community choir. Singing in a chorus usually means you’re standing within a large group of people, belting out songs and nailing those harmonies together. But what happens when you can’t gather in person to sing? 


That’s where virtual choirs come in. Composer and conductor Eric Whitacre has been putting them together for more than a decade, long before the pandemic left us stuck at home—and his most recent collaboration, which debuted on YouTube July 19, is his biggest project yet. 


Whitacre started organizing Virtual Choirs in 2009, when a fan uploaded a video of herself singing one of his choral compositions. He saw the video, then asked others to record themselves singing the other parts of the same composition to form a “choir.” That first group featured 185 singers, and each one since has grown larger and larger, to more than 8,000 voices for the fifth performance in 2018.


Eric Whitacre (Photo by Marc Royce)

This year, signups for Virtual Choir have skyrocketed. More than 17,000 singers from around the world found a way to participate in the sixth recording from the isolation of their own homes. They all learned “Sing Gently,” a song Whitacre composed during the pandemic. “Even early on, you’d be walking down the street in masks and you’d go out of your way to not pass someone,” Whitacre says. “A random stranger would become a threat. That was hard to see, and I was feeling that all over.” So the lyrics to “Sing Gently” encourage people to “live with compassion and empathy, and do this together,” he says. 


The Virtual Choir team uses every video submitted, unless there’s a technical problem with the recording. That means there are thousands of videos to sync together, and thousands of sound recordings to edit so the result sounds seamless. This time around, the team included three sound editors, six people reviewing each submission and two executive producers, scattered across the U.S., the U.K. and South Africa. Across three different continents, they used Google Docs and Google Sheets to keep track of their progress, Google’s webmaster tools to manage thousands of email addresses and Google Translate to keep in touch with singers around the world. Singers checked the choir’s YouTube channels for rehearsal videos, footage of Whitacre conducting the song and Q&As with other singers and composers.
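
The story doesn’t say exactly how the editors sync the takes, but one standard way to line up thousands of home recordings is to cross-correlate each take against a shared reference (such as the conductor guide track every singer heard) and shift it by the lag at the correlation peak. A minimal sketch; the function name and setup are illustrative, not the team’s actual tooling:

```python
import numpy as np

def estimate_offset(reference, take):
    """Return how many samples `take` lags behind `reference`,
    found at the peak of their full cross-correlation."""
    corr = np.correlate(take, reference, mode="full")
    # index len(reference) - 1 corresponds to zero lag
    return int(np.argmax(corr)) - (len(reference) - 1)
```

Once the offset is known, each take can be trimmed or padded so every singer lands on the same downbeat before the mix begins.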


The video for "Sing Gently" features the song's lyrics and footage of the singers, who recorded from their homes.

It was also significant that these singers came together (figuratively speaking) at a time when musicians are suddenly out of work. “It’s an especially surreal moment for singers, because we’ve been labeled as superspreaders,” Whitacre laments, referring to a term for people who spread the disease more than others; in one instance, dozens of singers in Washington state were infected after a choir practice.  “Even just the act of singing is dangerous for other people.” He says he was struck by the number of participants who told him it felt good to sing with others again—even though they weren’t actually performing in the same room. 


Molly Jenkins, a choir lover based in North Carolina, was one of the 6,262 sopranos who took part in “Sing Gently.” She had always wanted to join a virtual choir, but never found the perfect time to give it a try. But since there’s no such thing as a perfect moment in a pandemic, she decided to figure out a way to make it work. 

“This, I think, is the best of the promise of the Internet.” (Eric Whitacre, composer and conductor)

With her phone in hand to hear the guide tracks, Molly practiced whenever and wherever she could: in the shower, at the kitchen table while working from home, in her front yard and while burping her baby. When it came time to record her track, there was one problem: finding a quiet place to record. “There was no space to record where a shrieking, gurgling baby wouldn’t interrupt the take,” she says. 

She ended up in her car on a rainy day, playing the conductor track on her laptop and recording her vocals on her phone. Sound engineers were able to isolate her vocal track from the background noise of the rain tapping on her windshield. “I’m just so glad I went for it,” Molly says.

Whitacre says that improvisational spirit is key to creating his choirs, and he’s grateful that technology can enable great collaborations despite social distancing. “It really speaks to the best of technology,” he says. “This, I think, is the best of the promise of the Internet.”

When fashion and choreography meet artificial intelligence

At the Google Arts & Culture Lab in Paris, we’re all about exploring the relationship between art and technology. Since 2012, we’ve worked with artists and creators from many fields, developing experiments that let you design patterns in augmented reality, co-create poetry, or experience multisensory art installations. Today we’re launching two experiments to test the potential of artificial intelligence in the worlds of contemporary dance and fashion.

For our first experiment, Runway Palette, we came together with The Business of Fashion, whose collection includes 140,000 photos of runway looks from almost 4,000 fashion shows. If you could attend one fashion show per day, it would take you more than ten years to see them all. By extracting the main colors of each look, we used machine learning to organize the images by color palette, resulting in an interactive visualization of four years of fashion by almost 1,000 designers.
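
Extracting an image’s main colors is a classic clustering problem; a common approach (not necessarily the one Runway Palette uses) is to run k-means over an image’s pixels and keep the cluster centers as the palette. A small self-contained sketch with a deterministic, brightness-based initialization so the example is reproducible:

```python
import numpy as np

def dominant_colors(pixels, k=3, iters=10):
    """Cluster an (N, 3) array of RGB pixels into k palette colors
    with a few rounds of Lloyd's k-means.

    Returns a (k, 3) array of cluster centers, largest cluster first.
    Centers are seeded from pixels spread across the brightness range,
    a simple deterministic heuristic for this toy example.
    """
    pixels = np.asarray(pixels, dtype=float)
    by_brightness = np.argsort(pixels.sum(axis=1))
    seeds = np.linspace(0, len(pixels) - 1, k).astype(int)
    centers = pixels[by_brightness][seeds].copy()
    for _ in range(iters):
        # assign every pixel to its nearest center
        dists = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            members = pixels[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    counts = np.bincount(labels, minlength=k)
    return centers[np.argsort(-counts)]
```

With a palette computed for every look, the images can then be laid out by palette similarity, which is essentially what the visualization does at the scale of 140,000 photos.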

Everyone can now use the color palette visualization to explore colors, designers, seasons, and trends that come from Fashion Weeks worldwide. You can even snap or upload a picture of, let’s say, your closet or autumn leaves, and discover how designers used a similar color palette in fashion.

For our second experiment, Living Archive, we continued our collaboration with Wayne McGregor to create an AI-driven choreography tool. Trained on over 100 hours of dance performances from Wayne’s 25-year archive, the experiment uses machine learning to predict and generate movement in the style of Wayne’s dancers. In July of this year, the company used the tool in Wayne’s creative process for a new work that premiered at the LA Music Center.


Today, we are making this experiment available to everyone. Living Archive lets you explore almost half a million poses from Wayne’s extensive archive, organized by visual similarity. Use the experiment to make connections between poses, or capture your own movement to create your very own choreography.

You can try our new experiments on the Google Arts & Culture experiments page or via our free app for iOS and Android.

Solving problems with AI for everyone

Today, we’re kicking off our annual I/O developer conference, which brings together more than 7,000 developers for a three-day event. I/O gives us a great chance to share some of Google’s latest innovations and show how they’re helping us solve problems for our users. We’re at an important inflection point in computing, and it’s exciting to be driving technology forward. It’s clear that technology can be a positive force and improve the quality of life for billions of people around the world. But it’s equally clear that we can’t just be wide-eyed about what we create. There are very real and important questions being raised about the impact of technology and the role it will play in our lives. We know the path ahead needs to be navigated carefully and deliberately—and we feel a deep sense of responsibility to get this right. It’s in that spirit that we’re approaching our core mission.

The need for useful and accessible information is as urgent today as it was when Google was founded nearly two decades ago. What’s changed is our ability to organize information and solve complex, real-world problems thanks to advances in AI.

Pushing the boundaries of AI to solve real-world problems

There’s a huge opportunity for AI to transform many fields. Already we’re seeing some encouraging applications in healthcare. Two years ago, Google developed a neural net that could detect signs of diabetic retinopathy using medical images of the eye. This year, the AI team showed our deep learning model could use those same images to predict a patient’s risk of a heart attack or stroke with a surprisingly high degree of accuracy. We published a paper on this research in February and look forward to working closely with the medical community to understand its potential. We’ve also found that our AI models are able to predict medical events, such as hospital readmissions and lengths of stay, by analyzing the pieces of information embedded in de-identified health records. These are powerful tools in a doctor’s hands and could have a profound impact on health outcomes for patients. We’re going to be publishing a paper on this research today and are working with hospitals and medical institutions to see how to use these insights in practice.

Another area where AI can solve important problems is accessibility. Take the example of captions. When you turn on the TV, it's not uncommon to see people talking over one another. This makes a conversation hard to follow, especially if you’re hearing-impaired. But using audio and visual cues together, our researchers were able to isolate voices and caption each speaker separately. We call this technology Looking to Listen and are excited about its potential to improve captions for everyone.

Saving time across Gmail, Photos, and the Google Assistant

AI is working hard across Google products to save you time. One of the best examples of this is the new Smart Compose feature in Gmail. By understanding the context of an email, we can suggest phrases to help you write quickly and efficiently. In Photos, we make it easy to share a photo instantly via smart, inline suggestions. We’re also rolling out new features that let you quickly brighten a photo, give it a color pop, or even colorize old black and white pictures.

One of the biggest time-savers of all is the Google Assistant, which we announced two years ago at I/O. Today we shared our plans to make the Google Assistant more visual, more naturally conversational, and more helpful.

Thanks to our progress in language understanding, you’ll soon be able to have a natural back-and-forth conversation with the Google Assistant without repeating “Hey Google” for each follow-up request. We’re also adding half a dozen new voices to personalize your Google Assistant, plus one very recognizable one—John Legend (!). So, next time you ask Google to tell you the forecast or play “All of Me,” don’t be surprised if John Legend himself is around to help.

We’re also making the Assistant more visually assistive with new experiences for Smart Displays and phones. On mobile, we’ll give you a quick snapshot of your day with suggestions based on location, time of day, and recent interactions. And we’re bringing the Google Assistant to navigation in Google Maps, so you can get information while keeping your hands on the wheel and your eyes on the road.

Someday soon, your Google Assistant might be able to help with tasks that still require a phone call, like booking a haircut or verifying a store’s holiday hours. We call this new technology Google Duplex. It’s still early, and we need to get the experience right, but done correctly we believe this will save time for people and generate value for small businesses.

Understanding the world so we can help you navigate yours

AI’s progress in understanding the physical world has dramatically improved Google Maps and created new applications like Google Lens. Maps can now tell you if the business you’re looking for is open, how busy it is, and whether parking is easy to find before you arrive. Lens lets you just point your camera and get answers about everything from that building in front of you ... to the concert poster you passed ... to that lamp you liked in the store window.

Bringing you the top news from top sources

We know people turn to Google to provide dependable, high-quality information, especially in breaking news situations—and this is another area where AI can make a big difference. Using the latest technology, we set out to create a product that surfaces the news you care about from trusted sources while still giving you a full range of perspectives on events. Today, we’re launching the new Google News. It uses artificial intelligence to bring forward the best of human intelligence—great reporting done by journalists around the globe—and will help you stay on top of what’s important to you.


The new Google News uses AI to bring forward great reporting done by journalists around the globe and help you stay on top of what’s important to you.

Helping you focus on what matters

Advances in computing are helping us solve complex problems and deliver valuable time back to our users—which has been a big goal of ours from the beginning. But we also know technology creates its own challenges. For example, many of us feel tethered to our phones and worry about what we’ll miss if we’re not connected. We want to help people find the right balance and gain a sense of digital wellbeing. To that end, we’re going to release a series of features to help people understand their usage habits and use simple cues to disconnect when they want to, such as turning a phone over on a table to put it in “shush” mode, or “taking a break” from watching YouTube when a reminder pops up. We're also kicking off a longer-term effort to support digital wellbeing, including a user education site which is launching today.

These are just a few of the many, many announcements at Google I/O—for Android, the Google Assistant, Google News, Photos, Lens, Maps and more, please see our latest stories.

Making music using new sounds generated with machine learning

Technology has always played a role in inspiring musicians in new and creative ways. The guitar amp gave rock musicians a new palette of sounds to play with in the form of feedback and distortion. And the sounds generated by synths helped shape the sound of electronic music. But what about new technologies like machine learning models and algorithms? How might they play a role in creating new tools and possibilities for a musician’s creative process? Magenta, a research project within Google, is currently exploring answers to these questions.

Building upon past research in the field of machine learning and music, last year Magenta released NSynth (Neural Synthesizer). It’s a machine learning algorithm that uses deep neural networks to learn the characteristics of sounds, and then create a completely new sound based on these characteristics. Rather than combining or blending the sounds, NSynth synthesizes an entirely new sound using the acoustic qualities of the original sounds—so you could get a sound that’s part flute and part sitar all at once.
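
The distinction between blending waveforms and synthesizing from shared acoustic qualities is easy to demonstrate. NSynth’s real encoder and decoder form a deep WaveNet-style autoencoder; the toy below swaps that for a single hand-picked feature (the spectral centroid) purely to show the idea. Mixing two tones produces a chord that still contains both originals, while interpolating a feature and re-synthesizing produces one genuinely new tone in between:

```python
import numpy as np

SR = 8000  # sample rate for this toy example

def spectral_centroid(wave):
    """A stand-in 'encoding': one acoustic quality of a sound."""
    spectrum = np.abs(np.fft.rfft(wave))
    freqs = np.fft.rfftfreq(len(wave), 1 / SR)
    return float((freqs * spectrum).sum() / spectrum.sum())

def tone(freq, seconds=1.0):
    """A stand-in 'decoder': synthesize a sine at a given frequency."""
    t = np.arange(int(SR * seconds)) / SR
    return np.sin(2 * np.pi * freq * t)

flute_like = tone(220)
sitar_like = tone(880)

# Blending waveforms superimposes both sources: the mix still contains
# a 220 Hz and an 880 Hz component (a chord, not a new note).
mixed = 0.5 * (flute_like + sitar_like)

# Interpolating in feature space and re-synthesizing yields a single
# new sound whose quality sits between the two originals.
mid_centroid = 0.5 * (spectral_centroid(flute_like) + spectral_centroid(sitar_like))
new_sound = tone(mid_centroid)
```

In NSynth the “feature” is a learned latent vector and the re-synthesis is done by the decoder network, so the in-between sounds retain far richer acoustic qualities than a single pitch.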

Since then, Magenta has continued to experiment with different musical interfaces and tools to make the algorithm more easily accessible and playable. As part of this exploration, Google Creative Lab and Magenta collaborated to create NSynth Super. It’s an open source experimental instrument which gives musicians the ability to explore new sounds generated with the NSynth algorithm.


To create our prototype, we recorded 16 original source sounds across a range of 15 pitches and fed them into the NSynth algorithm. The outputs, over 100,000 precomputed new sounds, were then loaded into NSynth Super. Using the dials, musicians can select the source sounds they would like to explore between, and drag a finger across the touchscreen to navigate the new, unique sounds that combine their acoustic qualities. NSynth Super can be played via any MIDI source, like a DAW, sequencer or keyboard.


Part of the goal of Magenta is to close the gap between artistic creativity and machine learning. It’s why we work with a community of artists, coders and machine learning researchers to learn more about how machine learning tools might empower creators. It’s also why we create everything, including NSynth Super, with open source libraries, including TensorFlow and openFrameworks. If you’re a maker, musician, or both, all of the source code, schematics, and design templates are available for download on GitHub.


New sounds are powerful. They can inspire musicians in creative and unexpected ways, and sometimes they might go on to define an entirely new musical style or genre. It’s impossible to predict where the new sounds generated by machine learning tools might take a musician, but we're hoping they lead to even more musical experimentation and creativity.


Learn more about NSynth Super at g.co/nsynthsuper.

The #MyFutureMe winner is often the only girl—but she’s going to change that

Editor’s note: Earlier this year, Made with Code teamed up with Snap Inc. to host #MyFutureMe, a competition for teens to code their own Snapchat geofilters and write their vision for the future. 22,000 teens submitted designs and shared their visions, and Zoe Lynch—a ninth-grader from South Orange, NJ—was recently named the winner by a panel of judges, including Malala Yousafzai, Lilly Singh, Snap CEO Evan Spiegel and our own CFO Ruth Porat. We chatted with Zoe about her experience, how she made her filter, and why it’s important for more girls to get into coding.

What was the inspiration behind your filter?


The brain has fascinated me since I was younger—it’s where creativity and ideas come from so I wanted to use that. The coding project had peace signs, so I had the idea to manipulate the peace signs to look like a brain. The idea for my filter was what can happen when everyone puts their brain power together. When we do that, we are unstoppable.

After you became a finalist, you attended TEDWomen. What was that like?

It was crazy inspiring. It showed me how many powerful and cool women are out there opening paths for girls like me. I got to meet the other finalists, and we created a group chat on Snap, so that we can follow each other and stay connected. We’ve been each other’s biggest cheerleaders. All these girls are going to do awesome things. Tech mogul alert!

How did you feel when you found out that you were selected as the final winner?

I couldn’t believe it! Everyone was so talented and worked hard, but I was so happy that my ideas and creativity were recognized. To win a trip to visit Google and Snapchat was like a dream!

What advice do you have for other girls who want to learn how to code?

I know a lot of girls who think they’re not good at this kind of stuff, but most of them haven’t even tried it. So you have to try it because otherwise you won’t know if you’ll like it. I loved #MyFutureMe because teens are really into Snapchat and the different filters you can use. When you have an opportunity to make a filter, you realize that coding is behind it all.

“My vision for the future is one where innovation is accessible to all. As a multiracial girl, I believe it’s important for everyone to be included.” (Excerpt from Zoe's vision for the future)

You care a lot about inclusion—have you faced situations when inclusion has been a challenge?

When I go to camps or explore things in the engineering field, I’m often the only girl and the only person of color. Usually all the guys go together and it’s kind of discouraging, but I want to try to change that for other girls, so we don’t have to feel this way anymore.

What do you like to do outside of school?

I love to play video games—my favorite is “Uncharted”—but many of them are not really targeted to women. For women, the game is fun but you know deep down that it’s not really made for you. If I was going to make a video game, it would be an engineering game but you’re helping people. Say you want to build a bridge in the game, you’d need to use mathematics and engineering to make it work.

Who are your role models?

My mom. Hands down. She’s a Hispanic woman and there are only white males at her level at her company, which is where my passion for inclusion started. She’s also pushed me and has always supported me.

You recently visited Snapchat and Google. What was the coolest part of the tour?

Besides the amazing offices (free food!), the coolest part was meeting the engineers. I was so inspired by their journeys and how different they all were. One was an actress, another a gamer, and a third wasn't even sure of her major until she took her first CS class in college. It showed me that there are many paths to getting into tech.

MFM121917_KeywordSelects_inline-2.png
Zoe on her tour at Snapchat in Venice, CA.

If you could have any job at Google, what would it be?

I’d want to be an engineer in artificial intelligence—I think that technology and machine learning could change the world. I’d like to see more women and people of color in the field, too.

MFM121917_KeywordSelects_inline-4.png
Zoe chats with an engineer at Google.

What do you think the future will look like when you’re 30?

I’m hoping that in the future, everyone works together. And it’ll be cool to live through new technology breakthroughs!

There’s no failure, just opportunity: a teacher’s journey to code

Computer Science Education Week is an annual event to get kids excited about the possibilities of coding. As a part of CSEdWeek this year, we unveiled a new coding activity that lets students create their own Google logo, using block-based coding and video tutorials. Abigail Ramirez, a middle school teacher from Pomona Unified School District, tried out the activity in her computer science classroom, and spoke to us about the activity, as well as the importance of computer science in her students’ lives.

Tell us about how you got started with coding.

When I was in the third grade, my dad bought an old computer (the kind that required giant floppy disks) and challenged my siblings and me to set it up. “Reader Rabbit”—a reading and spelling game that we liked—didn’t work properly, so we had to take out the manual, read the code, fix the code, then fix the game. We didn’t even know we were programming, we just wanted to play on the computer! Fast forward years later, my congregation needed support with our website, so I turned to YouTube and Udacity to learn more. And two months after that, I attended a week-long CS institute at Harvey Mudd College, which is where my CS education officially began.

And now you teach computer science—how did you end up doing that?

I’m probably the least likely CS teacher. I’m originally an English teacher, and I have the privilege of teaching at the school that I attended, which happens to be 94 percent Title I (meaning the majority of the kids receive free or reduced-price lunch). Most of my students have college and career dreams, and they’ll be the first in their family to go down that path. While attending the CS Institute at Harvey Mudd, I realized there was so much potential in computer science. It could help build a positive future for kids who can’t see the light at the end of the tunnel, have untapped potential, or simply need access to 21st century skills.

I realized there was so much potential in computer science. It could help build a positive future for kids who can’t see the light at the end of the tunnel.

Eventually, with the support of my administrator, I got the greenlight to pilot a couple of CS classes at my school. Now I teach a class called Middle Years Computer Science, which is where I tried out this year’s CSEdWeek coding activity.

How did the kids react to the coding activity?

When they found out they could design and program their own Google logo, the excitement went through the roof. Both seasoned coders and those who were new to coding came away with a sense of community and purpose. They expressed that their simple logos had the possibility of changing someone's day, putting some joy in someone's heart, inspiring people to act, and creating awareness.

What are some of the most creative approaches that the kids took to completing the activity?

Kids are imaginative and innovative by nature, and when they get access to a creative tool like programming, the sky's the limit. The students created some really heartfelt logos featuring concepts celebrating foster care and adoption using broadcasting codes (this means that letters in the logo will move in some way, based on a command that you give another letter). Others created music videos, complete with Google-themed fidget spinners. Some daring students even created motion-sensor interactive games using their webcam, and experimented with food-shaped logos.

How did the students work together to problem-solve during the activity?

I encourage my students to think of themselves as “lead learners,” meaning each individual has a skill, expertise, or idea to share with their classmates—and when they talk through each other’s ideas, it usually leads to an even better result. Coding gives students the flexibility to see what others are doing and immediately apply it, yet expand on it to increase their own skill. Besides, this shared experience is too awesome to keep to oneself—collaboration is a natural outcome. When something didn’t work in a manner they intended, you could see that students were using persistence and critical thinking to debug the block errors. When they were stuck, they would seek each other out as expert help.

Did this activity change any perceptions of coding the kids had before doing the activity?

Coding can be scary. But if you eliminate the doubt, mix in lots of fun, and allow for collaboration, coding barriers can be debugged. From the start, we established that there is no failure in their code, just an opportunity to increase their coding and debugging abilities. In the end, the students felt a sense of accomplishment from creating a program that sprang from their imagination.

How do the kids envision using computer science in the future? Have you seen their skills progress over time?

A lot of students have decided that’s the field that they want to go into. I get to be their hypemaster—I help keep the momentum going, to inspire them to pursue computer science. I also try to show them how these skills would be used out in the real world. I start each class with a “CS moment,” which is a video clip of a company that uses computer science—video gaming, for example, shows the kids that they could apply CS to things they’re already doing.

How have you noticed that learning about CS has positively impacted your students?

I can see the joy radiate out of them when they’re learning and practicing. A student once said to me, “I can change the world right now, I just need to figure out the source code.” So for me, it’s all about getting them to the next step.

As an English teacher, I gave my kids a voice. As a computer science teacher, I help them create their future.

And they get to decide what it is, and where they’ll go.

Source: Education