Tag Archives: Innovation & Technology

When fashion and choreography meet artificial intelligence

At the Google Arts & Culture Lab in Paris, we’re all about exploring the relationship between art and technology. Since 2012, we’ve worked with artists and creators from many fields, developing experiments that let you design patterns in augmented reality, co-create poetry, or experience multisensory art installations. Today we’re launching two experiments to test the potential of artificial intelligence in the worlds of contemporary dance and fashion.

For our first experiment, Runway Palette, we came together with The Business of Fashion, whose collection includes 140,000 photos of runway looks from almost 4,000 fashion shows. If you could attend one fashion show per day, it would take you more than ten years to see them all. By extracting the main colors of each look, we used machine learning to organize the images by color palette, resulting in an interactive visualization of four years of fashion by almost 1,000 designers.

Everyone can now use the color palette visualization to explore colors, designers, seasons, and trends from Fashion Weeks worldwide. You can even snap or upload a picture of, say, your closet or autumn leaves, and discover how designers have used a similar color palette in fashion.
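The "extracting the main colors" step can be sketched with a simple quantize-and-count approach. The actual Runway Palette pipeline is not public, and production systems more often use clustering (such as k-means); this toy version only illustrates the general idea of reducing an image to a small, ranked palette:

```python
# Minimal sketch: reduce an image to its most common colors by
# quantizing each RGB channel into coarse buckets and counting.
import numpy as np

def dominant_colors(pixels, n_colors=5, step=32):
    """Return up to n_colors representative RGB colors, most frequent first.

    `pixels` is an (N, 3) array of RGB values in [0, 255]; each channel
    is snapped to the center of a bucket of width `step` before counting.
    """
    quantized = (pixels // step) * step + step // 2
    colors, counts = np.unique(quantized, axis=0, return_counts=True)
    order = np.argsort(counts)[::-1]  # most common bucket first
    return colors[order][:n_colors]

# A tiny synthetic "look": mostly red pixels with a few blue accents.
rng = np.random.default_rng(0)
reds = np.column_stack([rng.integers(200, 224, 80),
                        np.zeros(80, int), np.zeros(80, int)])
blues = np.column_stack([np.zeros(20, int), np.zeros(20, int),
                         rng.integers(200, 224, 20)])
palette = dominant_colors(np.vstack([reds, blues]), n_colors=2)
print(palette)  # red-dominant bucket first, blue-dominant second
```

With palettes like this in hand for every image, "organizing by color" reduces to sorting or clustering the palette vectors themselves.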

For our second experiment, Living Archive, we continued our collaboration with Wayne McGregor to create an AI-driven choreography tool. Trained on over 100 hours of dance performances from Wayne’s 25-year archive, the experiment uses machine learning to predict and generate movement in the style of Wayne’s dancers. In July of this year, the tool was used in Wayne’s creative process for a new work that premiered at the LA Music Center.


Today, we are making this experiment available to everyone. Living Archive lets you explore almost half a million poses from Wayne’s extensive archive, organized by visual similarity. Use the experiment to make connections between poses, or capture your own movement to create your very own choreography.
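Organizing poses "by visual similarity" can be sketched as nearest-neighbor search over fixed-length pose vectors (for example, 2D joint coordinates from a pose estimator). The vector size, data, and pipeline below are illustrative assumptions, not Living Archive's actual implementation:

```python
# Sketch: find the archive poses most similar to a query pose,
# where each pose is a flat vector of joint coordinates.
import numpy as np

def nearest_poses(query, archive, k=3):
    """Return indices of the k archive poses closest to `query`."""
    dists = np.linalg.norm(archive - query, axis=1)
    return np.argsort(dists)[:k]

rng = np.random.default_rng(1)
archive = rng.random((500, 34))              # 500 poses, 17 joints x (x, y)
query = archive[42] + 0.01 * rng.random(34)  # a slightly perturbed pose
print(nearest_poses(query, archive))         # pose 42 should rank first
```

Laying poses out on a 2D map (as the experiment does visually) is then a matter of projecting these vectors with a dimensionality-reduction technique such as t-SNE or UMAP.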

You can try our new experiments on the Google Arts & Culture experiments page or via our free app for iOS and Android.

Solving problems with AI for everyone

Today, we’re kicking off our annual I/O developer conference, which brings together more than 7,000 developers for a three-day event. I/O gives us a great chance to share some of Google’s latest innovations and show how they’re helping us solve problems for our users. We’re at an important inflection point in computing, and it’s exciting to be driving technology forward. It’s clear that technology can be a positive force and improve the quality of life for billions of people around the world. But it’s equally clear that we can’t just be wide-eyed about what we create. There are very real and important questions being raised about the impact of technology and the role it will play in our lives. We know the path ahead needs to be navigated carefully and deliberately—and we feel a deep sense of responsibility to get this right. It’s in that spirit that we’re approaching our core mission.

The need for useful and accessible information is as urgent today as it was when Google was founded nearly two decades ago. What’s changed is our ability to organize information and solve complex, real-world problems thanks to advances in AI.

Pushing the boundaries of AI to solve real-world problems

There’s a huge opportunity for AI to transform many fields. Already we’re seeing some encouraging applications in healthcare. Two years ago, Google developed a neural net that could detect signs of diabetic retinopathy using medical images of the eye. This year, the AI team showed our deep learning model could use those same images to predict a patient’s risk of a heart attack or stroke with a surprisingly high degree of accuracy. We published a paper on this research in February and look forward to working closely with the medical community to understand its potential. We’ve also found that our AI models are able to predict medical events, such as hospital readmissions and length of stays, by analyzing the pieces of information embedded in de-identified health records. These are powerful tools in a doctor’s hands and could have a profound impact on health outcomes for patients. We’re going to be publishing a paper on this research today and are working with hospitals and medical institutions to see how to use these insights in practice.

Another area where AI can solve important problems is accessibility. Take the example of captions. When you turn on the TV, it’s not uncommon to see people talking over one another. That makes a conversation hard to follow, especially if you’re hearing-impaired. But by using audio and visual cues together, our researchers were able to isolate voices and caption each speaker separately. We call this technology Looking to Listen, and we’re excited about its potential to improve captions for everyone.

Saving time across Gmail, Photos, and the Google Assistant

AI is working hard across Google products to save you time. One of the best examples of this is the new Smart Compose feature in Gmail. By understanding the context of an email, we can suggest phrases to help you write quickly and efficiently. In Photos, we make it easy to share a photo instantly via smart, inline suggestions. We’re also rolling out new features that let you quickly brighten a photo, give it a color pop, or even colorize old black and white pictures.

One of the biggest time-savers of all is the Google Assistant, which we announced two years ago at I/O. Today we shared our plans to make the Google Assistant more visual, more naturally conversational, and more helpful.

Thanks to our progress in language understanding, you’ll soon be able to have a natural back-and-forth conversation with the Google Assistant without repeating “Hey Google” for each follow-up request. We’re also adding half a dozen new voices to personalize your Google Assistant, plus one very recognizable one—John Legend (!). So, next time you ask Google to tell you the forecast or play “All of Me,” don’t be surprised if John Legend himself is around to help.

We’re also making the Assistant more visually assistive with new experiences for Smart Displays and phones. On mobile, we’ll give you a quick snapshot of your day with suggestions based on location, time of day, and recent interactions. And we’re bringing the Google Assistant to navigation in Google Maps, so you can get information while keeping your hands on the wheel and your eyes on the road.

Someday soon, your Google Assistant might be able to help with tasks that still require a phone call, like booking a haircut or verifying a store’s holiday hours. We call this new technology Google Duplex. It’s still early, and we need to get the experience right, but done correctly we believe this will save time for people and generate value for small businesses.

Understanding the world so we can help you navigate yours

AI’s progress in understanding the physical world has dramatically improved Google Maps and created new applications like Google Lens. Maps can now tell you if the business you’re looking for is open, how busy it is, and whether parking is easy to find before you arrive. Lens lets you just point your camera and get answers about everything from that building in front of you ... to the concert poster you passed ... to that lamp you liked in the store window.

Bringing you the top news from top sources

We know people turn to Google to provide dependable, high-quality information, especially in breaking news situations—and this is another area where AI can make a big difference. Using the latest technology, we set out to create a product that surfaces the news you care about from trusted sources while still giving you a full range of perspectives on events. Today, we’re launching the new Google News. It uses artificial intelligence to bring forward the best of human intelligence—great reporting done by journalists around the globe—and will help you stay on top of what’s important to you.


The new Google News uses AI to bring forward great reporting done by journalists around the globe and help you stay on top of what’s important to you.

Helping you focus on what matters

Advances in computing are helping us solve complex problems and deliver valuable time back to our users—which has been a big goal of ours from the beginning. But we also know technology creates its own challenges. For example, many of us feel tethered to our phones and worry about what we’ll miss if we’re not connected. We want to help people find the right balance and gain a sense of digital wellbeing. To that end, we’re going to release a series of features to help people understand their usage habits and use simple cues to disconnect when they want to, such as turning a phone over on a table to put it in “shush” mode, or “taking a break” from watching YouTube when a reminder pops up. We're also kicking off a longer-term effort to support digital wellbeing, including a user education site which is launching today.

These are just a few of the many, many announcements at Google I/O—for Android, the Google Assistant, Google News, Photos, Lens, Maps and more, please see our latest stories.

Making music using new sounds generated with machine learning

Technology has always played a role in inspiring musicians in new and creative ways. The guitar amp gave rock musicians a new palette of sounds to play with in the form of feedback and distortion. And the sounds generated by synths helped shape the sound of electronic music. But what about new technologies like machine learning models and algorithms? How might they play a role in creating new tools and possibilities for a musician’s creative process? Magenta, a research project within Google, is currently exploring answers to these questions.

Building upon past research in the field of machine learning and music, last year Magenta released NSynth (Neural Synthesizer). It’s a machine learning algorithm that uses deep neural networks to learn the characteristics of sounds and then create a completely new sound based on those characteristics. Rather than combining or blending the sounds, NSynth synthesizes an entirely new sound using the acoustic qualities of the original sounds—so you could get a sound that’s part flute and part sitar all at once.
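Synthesizing a new sound from the characteristics of others can be sketched as interpolation in a learned latent space: encode two sounds, mix their codes, then decode the mix. NSynth's real encoder and decoder are deep WaveNet-style networks; the `encode`/`decode` functions below are toy stand-ins (a truncated FFT) that only illustrate the encode-mix-decode pattern:

```python
# Conceptual sketch of latent-space sound blending. The FFT-based
# encoder/decoder here are placeholders, not NSynth's networks.
import numpy as np

def encode(audio):
    # Placeholder: map audio to a compact "latent code".
    return np.fft.rfft(audio)[:16]

def decode(latent, length):
    # Placeholder: synthesize audio from a latent code.
    spectrum = np.zeros(length // 2 + 1, dtype=complex)
    spectrum[:16] = latent
    return np.fft.irfft(spectrum, n=length)

def blend(audio_a, audio_b, mix=0.5):
    """Interpolate between two sounds in latent space, not sample space."""
    z = (1 - mix) * encode(audio_a) + mix * encode(audio_b)
    return decode(z, len(audio_a))

t = np.linspace(0, 1, 1024, endpoint=False)
flute_like = np.sin(2 * np.pi * 4 * t)               # a pure tone
sitar_like = np.sign(np.sin(2 * np.pi * 7 * t))      # a buzzy square wave
hybrid = blend(flute_like, sitar_like)               # "part flute, part sitar"
```

The key point is that the mixing happens between learned representations, which is why the result is a new timbre rather than two sounds simply layered on top of each other.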

Since then, Magenta has continued to experiment with different musical interfaces and tools to make the algorithm more easily accessible and playable. As part of this exploration, Google Creative Lab and Magenta collaborated to create NSynth Super. It’s an open source experimental instrument which gives musicians the ability to explore new sounds generated with the NSynth algorithm.

Making music using new sounds generated with machine learning

To create our prototype, we recorded 16 original source sounds across a range of 15 pitches and fed them into the NSynth algorithm. The outputs, over 100,000 new sounds, were precomputed and loaded into NSynth Super. Using the dials, musicians select the source sounds they’d like to explore between, then drag a finger across the touchscreen to navigate the new, unique sounds that combine their acoustic qualities. NSynth Super can be played via any MIDI source, like a DAW, sequencer, or keyboard.
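The touchscreen interaction can be sketched as bilinear interpolation between the latent codes of four corner sounds, weighted by the finger's (x, y) position. The corner vectors below are hypothetical stand-ins for precomputed NSynth outputs:

```python
# Sketch: blend four corner sounds' latent codes by touch position.
import numpy as np

def touch_mix(corners, x, y):
    """Bilinearly blend four corner latent codes at position (x, y) in [0, 1]^2.

    `corners` maps grid positions (0,0), (1,0), (0,1), (1,1) to vectors.
    """
    top = (1 - x) * corners[(0, 0)] + x * corners[(1, 0)]
    bottom = (1 - x) * corners[(0, 1)] + x * corners[(1, 1)]
    return (1 - y) * top + y * bottom

rng = np.random.default_rng(2)
corners = {pos: rng.random(8) for pos in [(0, 0), (1, 0), (0, 1), (1, 1)]}
center = touch_mix(corners, 0.5, 0.5)  # an equal blend of all four sounds
```

Dragging a finger smoothly changes the weights, which is what makes the transition between sounds feel continuous rather than like switching presets.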


Part of the goal of Magenta is to close the gap between artistic creativity and machine learning. That’s why we work with a community of artists, coders, and machine learning researchers to learn more about how machine learning tools might empower creators. It’s also why we build everything, including NSynth Super, with open source libraries, including TensorFlow and openFrameworks. If you’re a maker, a musician, or both, all of the source code, schematics, and design templates are available for download on GitHub.


New sounds are powerful. They can inspire musicians in creative and unexpected ways, and sometimes they might go on to define an entirely new musical style or genre. It’s impossible to predict where the new sounds generated by machine learning tools might take a musician, but we're hoping they lead to even more musical experimentation and creativity.


Learn more about NSynth Super at g.co/nsynthsuper.

The #MyFutureMe winner is often the only girl—but she’s going to change that

Editor’s note: Earlier this year, Made with Code teamed up with Snap Inc. to host #MyFutureMe, a competition for teens to code their own Snapchat geofilters and write their vision for the future. 22,000 teens submitted designs and shared their visions, and Zoe Lynch—a ninth-grader from South Orange, NJ—was recently named the winner by a panel of judges, including Malala Yousafzai, Lilly Singh, Snap CEO Evan Spiegel and our own CFO Ruth Porat. We chatted with Zoe about her experience, how she made her filter, and why it’s important for more girls to get into coding.

What was the inspiration behind your filter?

The brain has fascinated me since I was younger—it’s where creativity and ideas come from so I wanted to use that. The coding project had peace signs, so I had the idea to manipulate the peace signs to look like a brain. The idea for my filter was what can happen when everyone puts their brain power together. When we do that, we are unstoppable.

After you became a finalist, you attended TEDWomen. What was that like?

It was crazy inspiring. It showed me how many powerful and cool women are out there opening paths for girls like me. I got to meet the other finalists, and we created a group chat on Snap, so that we can follow each other and stay connected. We’ve been each other’s biggest cheerleaders. All these girls are going to do awesome things. Tech mogul alert!

How did you feel when you found out that you were selected as the final winner?

I couldn’t believe it! Everyone was so talented and worked hard, but I was so happy that my ideas and creativity were recognized. To win a trip to visit Google and Snapchat was like a dream!

What advice do you have for other girls who want to learn how to code?

I know a lot of girls who think they’re not good at this kind of stuff, but most of them haven’t even tried it. So you have to try it because otherwise you won’t know if you’ll like it. I loved #MyFutureMe because teens are really into Snapchat and the different filters you can use. When you have an opportunity to make a filter, you realize that coding is behind it all.

My vision for the future is one where innovation is accessible to all. As a multiracial girl, I believe it’s important for everyone to be included.
Excerpt from Zoe's vision for the future

You care a lot about inclusion—have you faced situations when inclusion has been a challenge?

When I go to camps or explore things in the engineering field, I’m often the only girl and the only person of color. Usually all the guys go together and it’s kind of discouraging, but I want to try to change that for other girls, so we don’t have to feel this way anymore.

What do you like to do outside of school?

I love to play video games—my favorite is “Uncharted”—but many of them are not really targeted to women. For women, the game is fun but you know deep down that it’s not really made for you. If I was going to make a video game, it would be an engineering game but you’re helping people. Say you want to build a bridge in the game, you’d need to use mathematics and engineering to make it work.

Who are your role models?

My mom. Hands down. She’s a Hispanic woman and there are only white males at her level at her company, which is where my passion for inclusion started. She’s also pushed me and has always supported me.

You recently visited Snapchat and Google. What was the coolest part of the tour?

Besides the amazing offices (free food!), the coolest part was meeting the engineers. I was so inspired by their journeys and how different they all were. One was an actress, another a gamer, and another wasn't even sure of her major until she took her first CS class in college. It showed me that there are many paths to getting into tech.

Zoe on her tour at Snapchat in Venice, CA.

If you could have any job at Google, what would it be?

I’d want to be an engineer in artificial intelligence—I think that technology and machine learning could change the world. I’d like to see more women and people of color in the field, too.

Zoe chats with an engineer at Google.

What do you think the future will look like when you’re 30?

I’m hoping that in the future, everyone works together. And it’ll be cool to live through new technology breakthroughs!

Source: Education



There’s no failure, just opportunity: a teacher’s journey to code

Computer Science Education Week is an annual event to get kids excited about the possibilities of coding. As a part of CSEdWeek this year, we unveiled a new coding activity that lets students create their own Google logo, using block-based coding and video tutorials. Abigail Ramirez, a middle school teacher from Pomona Unified School District, tried out the activity in her computer science classroom, and spoke to us about the activity, as well as the importance of computer science in her students’ lives.

Tell us about how you got started with coding.

When I was in the third grade, my dad bought an old computer (the kind that required giant floppy disks) and challenged my siblings and me to set it up. “Reader Rabbit”—a reading and spelling game that we liked—didn’t work properly, so we had to take out the manual, read the code, fix the code, and then fix the game. We didn’t even know we were programming; we just wanted to play on the computer! Years later, my congregation needed support with our website, so I turned to YouTube and Udacity to learn more. Two months after that, I attended a week-long CS institute at Harvey Mudd College, which is where my CS education officially began.

And now you teach computer science—how did you end up doing that?

I’m probably the least likely CS teacher. I’m originally an English teacher, and have the privilege of teaching at the school that I attended, which happens to be 94 percent Title I (meaning the majority of the kids have free or reduced lunch). Most of my students have college and career dreams, and they’ll be the first in their family to go down that path. While attending the CS Institute at Harvey Mudd, I realized there was so much potential in computer science. It could help build a positive future for kids who can’t see the light at the end of the tunnel, have untapped potential, or simply need access to 21st century skills.

I realized there was so much potential in computer science. It could help build a positive future for kids who can’t see the light at the end of the tunnel.

Eventually, with the support of my administrator, I got the greenlight to pilot a couple of CS classes at my school. Now I teach a class called Middle Years Computer Science, which is where I tried out this year’s CSEdWeek coding activity.

How did the kids react to the coding activity?

When they found out they could design and program their own Google logo, the excitement went through the roof. Both seasoned coders and those who were new to coding came away with a sense of community and purpose. They expressed that their simple logos had the possibility of changing someone's day, putting some joy in someone's heart, inspiring people to act, and creating awareness.

What are some of the most creative approaches that the kids took to completing the activity?

Kids are imaginative and innovative by nature, and when they get access to a creative tool like programming, the sky's the limit. The students created some really heartfelt logos featuring concepts celebrating foster care and adoption using broadcasting codes (this means that letters in the logo will move in some way, based on a command that you give another letter). Others created music videos, complete with Google-themed fidget spinners. Some daring students even created motion-sensor interactive games using their webcam, and experimented with food-shaped logos.

How did the students work together to problem-solve during the activity?

I encourage my students to think of themselves as “lead learners,” meaning each individual has a skill, expertise, or idea to share with their classmates—and when they talk through each other’s ideas, it usually leads to an even better result. Coding gives students the flexibility to see what others are doing and immediately apply it, yet expand on it to increase their own skill. Besides, this shared experience is too awesome to keep to oneself—collaboration is a natural outcome. When something didn’t work in a manner they intended, you could see that students were using persistence and critical thinking to debug the block errors. When they were stuck, they would seek each other out as expert help.

Did this activity change any perceptions of coding the kids had before doing the activity?

Coding can be scary. But if you eliminate the doubt, mix in lots of fun, and allow for collaboration, coding barriers can be debugged. From the start, we established that there is no failure in their code, just an opportunity to increase their coding and debugging abilities. In the end, the students felt a sense of accomplishment from creating a program that sprang from their imagination.

How do the kids envision using computer science in the future? Have you seen their skills progress over time?

A lot of students have decided that’s the field that they want to go into. I get to be their hypemaster—I help keep the momentum going, to inspire them to pursue computer science. I also try to show them how these skills would be used out in the real world. I start each class with a “CS moment,” which is a video clip of a company that uses computer science—video gaming, for example, shows the kids that they could apply CS to things they’re already doing.

How have you noticed that learning about CS has positively impacted your students?

I can see the joy radiate out of them when they’re learning and practicing. A student once said to me, “I can change the world right now, I just need to figure out the source code.” So for me, it’s all about getting them to the next step.

As an English teacher, I gave my kids a voice. As a computer science teacher, I help them create their future.

And they get to decide what it is, and where they’ll go.


Source: Education


Identifying credible content online, with help from the Trust Project

Every day approximately 50,000 web pages filled with information come online—ranging from the weird, the wonderful and the wacky to the serious, the subjective, and the spectacular.

With a plethora of choices out there, we rely on algorithms to sort and rank all this information to help us find content that is authoritative and comes from credible sources. A constantly changing web means we won’t ever achieve perfection, but we’re investing in helping people understand what they’re reading by providing visual signposts and labels.  

We add clear labeling to stories in Google News (e.g., opinion, local, highly cited, in depth), and over a year ago we launched the Fact Check tag globally in Google News and Search. And just recently we added information to our Knowledge Panels to help people get a quick insight into publishers.

Today, we’re announcing a move toward a similar labeling effort by the Trust Project, which is hosted at the Markkula Center for Applied Ethics at Santa Clara University. The Project, which is funded by Google among others, has been working with more than 75 news organizations from around the world to come up with indicators that help people distinguish between quality journalism and promotional content or misinformation.

As a first step, the Project has released eight trust indicators that newsrooms can add to their content. This information will help readers understand more about what type of story they’re reading, who wrote it, and how the article was put together.

These eight indicators include:

  • Best Practices: Who funds the news outlet and their mission, plus an outlet’s commitments to ethics, diverse voices, accuracy, making corrections, and other standards.
  • Author Expertise: Details about the journalist, including their expertise and other stories they have worked on.
  • Type of Work: Labels to distinguish opinion, analysis, and advertiser (or sponsored) content from news reports.
  • Citations and References: For investigative or in-depth stories, access to the sources behind the facts and assertions in a news story.
  • Methods: For in-depth stories, information about why reporters chose to pursue a story and how they went about the process.
  • Locally Sourced: Lets people know that the story has local roots, origin, or expertise.
  • Diverse Voices: A newsroom’s efforts to bring in diverse perspectives.
  • Actionable Feedback: A newsroom’s efforts to engage the public in setting coverage priorities, contributing to the reporting process, and ensuring accuracy.
The publishers involved in this work include the BBC, dpa, The Economist, The Globe and Mail, Hearst Television, Mic, La Repubblica, La Stampa, The Washington Post, The New York Times and more. (Photo courtesy of the Trust Project.)

News publishers embed markup from schema.org into the HTML code of their articles and on their website. When tech platforms like Google crawl the content, we can easily parse out the information (such as Best Practices, Author Info, Citations & References, Type of Work). This works like the ClaimReview schema tag we use for fact-checking articles. Once we’ve done that, we can analyze the information and present it directly to the user in our various products.
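To make this concrete, here is a minimal sketch (written as a Python dictionary for readability) of the kind of JSON-LD markup a publisher might embed. The properties shown (`headline`, `author`, `citation`, `publishingPrinciples`) are real schema.org NewsArticle properties, but the outlet, URLs, and the mapping of each trust indicator onto a specific property are illustrative assumptions, not the Trust Project's published vocabulary:

```python
import json

# Hypothetical markup for one article. The mapping of trust indicators
# to schema.org properties below is an illustrative assumption.
article_markup = {
    "@context": "https://schema.org",
    "@type": "NewsArticle",
    "headline": "Example investigative story",
    # "Author Expertise": details about the journalist
    "author": {
        "@type": "Person",
        "name": "Jane Reporter",
        "sameAs": "https://example.com/staff/jane-reporter",
    },
    # "Best Practices": link to the outlet's published ethics policy
    "publishingPrinciples": "https://example.com/ethics-policy",
    # "Citations and References": sources behind the story's assertions
    "citation": ["https://example.com/court-filing.pdf"],
}

# Serialized as JSON-LD, this would sit inside a
# <script type="application/ld+json"> tag on the article page,
# where a crawler can parse the indicators back out.
json_ld = json.dumps(article_markup, indent=2)
print(json_ld)
```

In practice a crawler simply reads this structured block alongside the article HTML, which is why publishers can add indicators without changing how the page looks to readers.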


Our next step is to figure out how to display these trust indicators next to articles that may appear on Google News, Google Search, and other Google products where news can be found. Some possible treatments could include using the “Type of Work” indicator to improve the accuracy of article labels in Google News, and indicators such as “Best Practices” and “Author Info” in our Knowledge Panels.


We believe this is a great first step for the Trust Project and look forward to future efforts as well.

Source: Search