AlphaGo’s next move

Cross-posted from the DeepMind blog


With just three stones on the board, it was clear that this was going to be no ordinary game of Go

Chinese Go Grandmaster and world number one Ke Jie departed from his typical style of play and opened with a “3:3 point” strategy—a highly unusual approach aimed at quickly claiming corner territory at the start of the game. The placement is rare amongst Go players, but it’s a favoured position of our program AlphaGo. Ke Jie was playing AlphaGo at its own game.

Ke Jie’s thoughtful positioning of that single black stone was a fitting motif for the opening match of The Future of Go Summit in Wuzhen, China, an event dedicated to exploring the truth of this beautiful and ancient game. Over the last five days we have been honoured to witness games of the highest calibre.

Ke Jie has a laugh after game two against AlphaGo on May 25, 2017 (Photo credit: Google)

We have always believed in the potential for AI to help society discover new knowledge and benefit from it, and AlphaGo has given us an early glimpse that this may indeed be possible. More than a competitor, AlphaGo has been a tool to inspire Go players to try new strategies and uncover new ideas in this 3,000 year-old game. 

The team of 9 dan players (left to right) Shi Yue, Mi Yuting, Tang Weixing, Chen Yaoye, and Zhou Ruiyang strategizes its next move during the Team Go game against AlphaGo on May 26, 2017 (Photo credit: Google)

The creative moves it played against the legendary Lee Sedol in Seoul in 2016 brought completely new knowledge to the Go world, while the unofficial online games it played under the moniker Magister (Master) earlier this year have influenced many of Go’s leading professionals—including the genius Ke Jie himself. Events like this week’s Pair Go, in which two of the world’s top players partnered with AlphaGo, showed the great potential for people to use AI systems to generate new insights in complex fields.


This week’s series of thrilling games with the world’s best players, in the country where Go originated, has been the highest possible pinnacle for AlphaGo as a competitive program. For that reason, the Future of Go Summit is our final match event with AlphaGo.

The research team behind AlphaGo will now throw their energy into the next set of grand challenges, developing advanced general algorithms that could one day help scientists as they tackle some of our most complex problems, such as finding new cures for diseases, dramatically reducing energy consumption, or inventing revolutionary new materials. If AI systems prove they are able to unearth significant new knowledge and strategies in these domains too, the breakthroughs could be truly remarkable. We can’t wait to see what comes next.  

While AlphaGo is stepping back from competitive play, it’s certainly not the end of our work with the Go community, to which we owe a huge debt of gratitude for their encouragement and motivation over the past few years. We plan to publish one final academic paper later this year that will detail the extensive set of improvements we made to the algorithms’ efficiency and potential to be generalised across a broader set of problems. Just like our first AlphaGo paper, we hope that other developers will pick up the baton, and use these new advances to build their own set of strong Go programs.

We’re also working on a teaching tool—one of the top requests we’ve received throughout this week. The tool will show AlphaGo’s analysis of Go positions, providing an insight into how the program thinks, and hopefully giving all players and fans the opportunity to see the game through the lens of AlphaGo. We’re particularly honoured that our first collaborator in this effort will be the great Ke Jie, who has agreed to work with us on a study of his match with AlphaGo. We’re excited to hear his insights into these amazing games, and to have the chance to share some of AlphaGo’s own analysis too.

Finally, to mark the end of the Future of Go Summit, we wanted to give a special gift to fans of Go around the world. Since our match with Lee Sedol, AlphaGo has become its own teacher, playing millions of high-level training games against itself to continually improve. We’re now publishing a special set of 50 AlphaGo vs AlphaGo games, played at full-length time controls, which we believe contain many new and interesting ideas and strategies.

We took the opportunity this week in Wuzhen to show some of these games to a handful of top professionals. Shi Yue, 9 Dan Professional and World Champion, said the games were “like nothing I’ve ever seen before—they’re how I imagine games from far in the future.” Gu Li, 9 Dan Professional and World Champion, said that “AlphaGo’s self-play games are incredible—we can learn many things from them.” We hope that all Go players will now enjoy trying out some of the moves in the set. The first ten games are now available here, and we’ll publish another ten each day until all 50 have been released.

We have been humbled by the Go community’s reaction to AlphaGo, and the way professional and amateur players have embraced its insights about this ancient game. We plan to bring that same excitement and insight to a range of new fields, and try to address some of the most important and urgent scientific challenges of our time. We hope that the story of AlphaGo is just the beginning.

AI in the newsroom: What’s happening and what’s next?

Bringing people together to discuss the forces shaping journalism is central to our mission at the Google News Lab. Earlier this month, we invited Nick Rockwell, the Chief Technology Officer of the New York Times, and Luca D’Aniello, the Chief Technology Officer of the Associated Press, to Google’s New York office to talk about the future of artificial intelligence in journalism and the challenges and opportunities it presents for newsrooms.

The event opened with an overview of the AP’s recent report, “The Future of Augmented Journalism: a guide for newsrooms in the age of smart machines,” which was based on interviews with dozens of journalists, technologists, and academics (and compiled with the help of a robot, of course). As early adopters of this technology, the AP highlighted a number of their earlier experiments:

This image of a boxing match was captured by one of AP’s AI-powered cameras.
  • Deploying more than a dozen AI-powered robotic cameras at the 2016 Summer Olympics to capture angles not easily available to journalists
  • Using Google’s Cloud Vision API to classify and tag photos automatically throughout the report (a minimal sketch of such a call follows this list)
  • Increasing news coverage of quarterly earnings reports from 400 to 4,000 companies using automation
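
To make the second item concrete, here is a minimal, hypothetical sketch of the kind of Cloud Vision API call involved, using the Python client library; the bucket path and credentials setup are assumptions rather than details from the AP report:

```python
# Hypothetical sketch: auto-tagging a news photo with the Cloud Vision API.
# Assumes the google-cloud-vision client library is installed and
# application-default credentials are configured; the image URI is invented.
from google.cloud import vision

client = vision.ImageAnnotatorClient()
image = vision.Image()
image.source.image_uri = "gs://example-news-photos/boxing-match.jpg"  # placeholder

response = client.label_detection(image=image)
for label in response.label_annotations:
    # e.g. "Boxing: 0.97", "Sports: 0.95"
    print(f"{label.description}: {label.score:.2f}")
```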

The report also addressed key concerns, including risks associated with unchecked algorithms, potential for workflow disruption, and the growing gap in skill sets.

Here are three themes that emerged from the conversation with Rockwell and D’Aniello:

1. AI will increase a news organization's ability to focus on content creation

D’Aniello noted that journalists, often “pressed for resources,” are forced to “spend most of their time creating multiple versions of the same content for different outlets.” AI can reduce monotonous tasks like these and allow journalists to spend more of their time on their core expertise: reporting.

For Rockwell, AI could also be leveraged to power new reporting, helping journalists analyze massive data sets to surface untold stories. Rockwell noted that “the big stories will be found in data, and whether we can find them or not will depend on our sophistication using large datasets.”

2. AI can help improve the quality of dialogue online and help organizations better understand their readers' needs

Given the increasing abuse and harassment found in online conversations, many publishers are backing away from allowing comments on articles. For the Times, the Perspective API tool developed by Jigsaw (part of Google’s parent company Alphabet) is creating an opportunity to encourage constructive discussions online by using machine learning to increase the efficiency of comment moderation. Previously, the Times could only moderate comments on 10 percent of articles. The Times aspires to use Perspective to enable commenting on all its articles.
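
For a sense of how that machine-assisted moderation works, here is a minimal sketch of scoring a comment with the Perspective API from Python; the threshold and auto-approve step are hypothetical, not a description of the Times’s actual pipeline:

```python
# Hypothetical sketch: scoring a reader comment with the Perspective API.
# Assumes a valid API key in the PERSPECTIVE_API_KEY environment variable.
import os
import requests

API_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"

def toxicity_score(comment: str) -> float:
    """Return the TOXICITY probability (0.0 to 1.0) for a comment."""
    payload = {
        "comment": {"text": comment},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    resp = requests.post(
        API_URL,
        params={"key": os.environ["PERSPECTIVE_API_KEY"]},
        json=payload,
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

# A moderation queue might auto-approve clearly benign comments and route
# the rest to human moderators (threshold invented for illustration):
if toxicity_score("Great reporting, thank you!") < 0.3:
    print("auto-approve")
else:
    print("send to human review")
```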

The Times is also thinking about using AI to increase the relevance of what they deliver to readers. As Rockwell notes, “Our readers have always looked to us to filter the world, but to do that only through editorial curation is a one-size-fits-all approach. There is a lot we can do to better serve them.”

3. Applying journalistic standards is essential to AI’s successful implementation in newsrooms

Both panelists agreed that the editorial standards that go into creating quality journalism should be applied to AI-fueled journalism. As Francesco Marconi, the author of the AP report, remarked, “Humans make mistakes. Algorithms make mistakes. All the editorial standards should be applied to the technology.”

Here are a few approaches we’ve seen for how those standards can be applied to the technology:

  • Pairing up journalists with the tech. At the AP, business journalists trained software to understand how to write an earnings report (a toy sketch follows this list).
  • Serving as editorial gatekeepers. News editors should play a role in synthesizing and framing the information AI produces.
  • Ensuring more inclusive reporting. In 2016, Google.org, USC and the Geena Davis Foundation used machine learning to create a tool that collects data on gender portrayals in media.
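
As a toy illustration of that first approach, here is a deliberately simplistic template-based generator; the fields and wording are invented, and the AP’s production system (and the journalist training behind it) is far more sophisticated:

```python
# Toy sketch of template-driven earnings copy, loosely in the spirit of
# automated earnings coverage. Everything here is invented for illustration.
def earnings_blurb(company: str, quarter: str, eps: float, expected_eps: float) -> str:
    beat_or_miss = "beat" if eps > expected_eps else "fell short of"
    return (
        f"{company} reported {quarter} earnings of ${eps:.2f} per share, "
        f"which {beat_or_miss} analyst expectations of ${expected_eps:.2f}."
    )

print(earnings_blurb("Example Corp", "Q1", eps=1.12, expected_eps=1.05))
# Example Corp reported Q1 earnings of $1.12 per share, which beat
# analyst expectations of $1.05.
```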

What’s ahead

What will it take for AI to be a positive force in journalism? The conversation showed that while the path wasn’t certain, getting to the right answers would require close collaboration between the technology industry, news organizations, and journalists.

“There is a lot of work to do, but it’s about the mindset,” D’Aniello said. “Technology was seen as a disruptor of the newsroom, and it was difficult to introduce things. I don’t think this is the case anymore. The urgency and the need is perceived at the editorial level.”

We look forward to continuing to host more conversations on important topics like this one. Learn more about the Google News Lab on our website.

Header image of robotic camera courtesy of Associated Press.

Powering ads and analytics with machine learning

Today in San Francisco, we’re bringing together a thousand marketers from around the world for Google Marketing Next, our annual event to discuss what’s coming next for ads, what’s needed now to grow your business and what we can achieve together.

The ubiquity of mobile has dramatically changed the game in the ads world over the past few years. People expect to be able to immediately turn to their device to know, go, do, and buy—and marketers need to be able to meet those consumers in the moment. But that’s not enough. As people continue to embrace new, natural ways of interacting with devices, ads need to get even smarter and more frictionless—otherwise people will just move on.

That’s why a big focus of today is how machine learning technology—the same tech that is making Gmail replies smarter and helping you get things done around the house with the Google Assistant—will be critical to advertising. It can help marketers analyze countless signals in real time to anticipate consumer needs and reach them with more tailored ads—right at the moment they're looking to go somewhere, buy, or do something. Machine learning is also key to measuring the consumer journeys that now span multiple devices and channels, across both the digital and physical worlds. It’s something we believe will shape the future of marketing for years to come.

Check out the AdWords blog for more detail on Google Marketing Next and all these announcements.

Google Attribution: measure the impact of your marketing

With so many ways to connect with consumers, it's hard for advertisers to answer what should be a simple question—is my marketing working? To truly understand how your different marketing efforts lead to sales, you need to connect the steps of the customer journey as people move between devices—and value every customer moment, whether it occurs on display, video, search, social, email or another channel.

Google Attribution is a new product that helps you do just that. It helps you understand how all of your customer touchpoints work together to drive sales, even when people research across multiple devices before making a purchase. By integrating AdWords, DoubleClick Search and Google Analytics, it brings together data from all your marketing channels. The end result is a complete view of your performance. Google Attribution is currently in beta and will roll out to more advertisers over the coming months.
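
To make “valuing every customer moment” concrete, here is a toy sketch of one classic multi-touch model, linear attribution, which splits each conversion’s credit evenly across its touchpoints; this is a generic textbook illustration with invented channel data, not a description of Google Attribution’s actual data-driven model:

```python
# Toy sketch: linear multi-touch attribution. Each converting customer
# journey contributes one unit of credit, split evenly across the channels
# that touched it. Journeys and channels are invented for illustration.
from collections import defaultdict

def linear_attribution(journeys: list[list[str]]) -> dict[str, float]:
    credit: dict[str, float] = defaultdict(float)
    for touchpoints in journeys:
        share = 1.0 / len(touchpoints)
        for channel in touchpoints:
            credit[channel] += share
    return dict(credit)

journeys = [
    ["display", "search", "email"],  # one converting journey per list
    ["search"],
    ["video", "search"],
]
print(linear_attribution(journeys))
# {'display': 0.33, 'search': 1.83, 'email': 0.33, 'video': 0.5} (approx.)
```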

Helping marketers bridge the digital and physical worlds

Mobile has blurred the line between the digital and physical worlds. While most purchases still happen in-store, people are increasingly going on their smartphones to do research beforehand. That’s why marketers are using tools like Promoted Places on Google Maps and local inventory ads on Google Shopping to showcase special offers and what’s in-stock at nearby stores to help consumers decide where to go.

To help marketers gain more insight about consumer journeys that start online and end in a store, and deliver better ad experiences based on that data, we introduced store visits measurement back in 2014. This is no easy thing—especially in places with multi-story malls, or in dense cities like Tokyo, Japan, and São Paulo, Brazil, where many business locations are situated close together. So we use advanced machine learning and mapping technology to tackle these challenges. We’ve recently upgraded our deep learning model to train on larger data sets and confidently measure more store visits in challenging scenarios.

Store visits measurement is already available today for Search, Shopping and Display. And soon this technology will be available for YouTube TrueView campaigns, along with new location extensions for video ads.

Measuring store visits is just one part of the equation. You also need to know if your online ads are ringing your cash register. So in the coming months, we’ll be rolling out store sales measurement so you can measure in-store revenue in addition to the store visits delivered by your search ads. 

Powerful audience insights for Search Ads

Finally, people are often searching with the intent to buy. So we’re also bringing in-market audiences to Search Ads to help you reach people who are ready to purchase the products and services you offer. For example, if you’re a car dealership, you can increase your reach among users who have already searched for “SUVs with best gas mileage” and “spacious SUVs.” In-market audiences use the power of machine learning to better understand when people are close to buying something. 

The convergence of mobile, data and machine learning is unlocking new opportunities for marketers. See the AdWords blog for more detail.

Making AI work for everyone

I’ve now been at Google for 13 years, and it’s remarkable how the company’s founding mission of making information universally accessible and useful is as relevant today as it was when I joined. From the start, we’ve looked to solve complex problems using deep computer science and insights, even as the technology around us forces dramatic change.

The most complex problems tend to be ones that affect people’s daily lives, and it’s exciting to see how many people have made Google a part of their day—we’ve just passed 2 billion monthly active Android devices; YouTube has not only 1 billion users but also 1 billion hours of watch time every day; people find their way along 1 billion kilometers across the planet using Google Maps each day. This growth would have been unthinkable without computing’s shift to mobile, which made us rethink all of our products—reinventing them to reflect new models of interaction like multi-touch screens.

We are now witnessing a new shift in computing: the move from a mobile-first to an AI-first world. And as before, it is forcing us to reimagine our products for a world that allows a more natural, seamless way of interacting with technology. Think about Google Search: it was built on our ability to understand text in webpages. But now, thanks to advances in deep learning, we’re able to make images, photos and videos useful to people in a way they simply haven’t been before. Your camera can “see”; you can speak to your phone and get answers back—speech and vision are becoming as important to computing as the keyboard or multi-touch screens.  

The Assistant is a powerful example of these advances at work. It’s already available across 100 million devices, and getting more useful every day. We can now distinguish between different voices in Google Home, making it possible for people to have a more personalized experience when they interact with the device. We are now also in a position to make the smartphone camera a tool to get things done. Google Lens is a set of vision-based computing capabilities that can understand what you’re looking at and help you take action based on that information. If you’ve ever crawled around on a friend’s apartment floor to read a long, complicated Wi-Fi password off the back of a router, your phone can now recognize the password, see that you’re trying to log into a Wi-Fi network and automatically log you in. The key thing is, you don’t need to learn anything new to make this work—the interface and the experience can be much more intuitive than, for example, copying and pasting across apps on a smartphone. We’ll first be bringing Google Lens capabilities to the Assistant and Google Photos, and you can expect it to make its way to other products as well.

[Warning, geeky stuff ahead!!!]

All of this requires the right computational architecture. Last year at I/O, we announced the first generation of our TPUs, which allow us to run our machine learning algorithms faster and more efficiently. Today we announced our next generation of TPUs—Cloud TPUs, which are optimized for both inference and training and can process a LOT of information. We’ll be bringing Cloud TPUs to Google Compute Engine so that companies and developers can take advantage of them.

It’s important to us to make these advances work better for everyone—not just for the users of Google products. We believe huge breakthroughs in complex social problems will be possible if scientists and engineers can have better, more powerful computing tools and research at their fingertips. But today, there are too many barriers to making this happen. 

That’s the motivation behind Google.ai, which pulls all our AI initiatives into one effort that can lower these barriers and accelerate how researchers, developers and companies work in this field.

One way we hope to make AI more accessible is by simplifying the creation of machine learning models called neural networks. Today, designing neural nets is extremely time intensive, and requires an expertise that limits its use to a smaller community of scientists and engineers. That’s why we’ve created an approach called AutoML, showing that it’s possible for neural nets to design neural nets. We hope AutoML will take an ability that a few PhDs have today and will make it possible in three to five years for hundreds of thousands of developers to design new neural nets for their particular needs. 
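
To give a flavor of what “neural nets designing neural nets” means, here is a toy sketch of the underlying search loop: sample candidate architectures, train each one briefly, and keep the best. AutoML itself uses a learned controller rather than the random sampling shown here, so treat this purely as an illustration (the dataset and search space are invented):

```python
# Toy architecture search: random sampling over small fully-connected nets.
# A real system like AutoML learns *how* to propose architectures; this
# sketch just samples them at random to show the evaluate-and-select loop.
import random

from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

best_score, best_arch = 0.0, None
for _ in range(10):  # evaluate 10 candidate architectures
    arch = tuple(random.choice([16, 32, 64, 128])
                 for _ in range(random.randint(1, 3)))  # 1-3 hidden layers
    model = MLPClassifier(hidden_layer_sizes=arch, max_iter=300,
                          random_state=0).fit(X_train, y_train)
    score = model.score(X_val, y_val)
    if score > best_score:
        best_score, best_arch = score, arch

print(f"best architecture {best_arch}: validation accuracy {best_score:.3f}")
```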

In addition, Google.ai has been teaming Google researchers with scientists and developers to tackle problems across a range of disciplines, with promising results. We’ve used ML to improve the algorithm that detects the spread of breast cancer to adjacent lymph nodes. We've also seen AI make strides in the speed and accuracy with which researchers can predict the properties of molecules and even sequence the human genome.

This shift isn’t just about building futuristic devices or conducting cutting-edge research. We also think it can help millions of people today by democratizing access to information and surfacing new opportunities. For example, almost half of U.S. employers say they still have issues filling open positions. Meanwhile, job seekers often don’t know there’s a job opening just around the corner from them, because the nature of job posts—high turnover, low traffic, inconsistency in job titles—has made them hard for search engines to classify. Through a new initiative, Google for Jobs, we hope to connect companies with potential employees and help job seekers find new opportunities. As part of this effort, we will be launching a new feature in Search in the coming weeks that helps people look for jobs across experience and wage levels—including jobs that have traditionally been much harder to search for and classify, like service and retail jobs.

It’s inspiring to see how AI is starting to bear fruit that people can actually taste. There is still a long way to go before we are truly an AI-first world, but the more we can work to democratize access to the technology—both in terms of the tools people can use and the way we apply it—the sooner everyone will benefit. 

To read more about the many, many other announcements at Google I/O—for Android, and Photos, and VR, and more, please see our latest stories.

Introducing the TensorFlow Research Cloud

Posted by Zak Stone, Product Manager for TensorFlow

Researchers require enormous computational resources to train the machine learning (ML) models that have delivered recent breakthroughs in medical imaging, neural machine translation, game playing, and many other domains. We believe that significantly larger amounts of computation will make it possible for researchers to invent new types of ML models that will be even more accurate and useful.

To accelerate the pace of open machine-learning research, we are introducing the TensorFlow Research Cloud (TFRC), a cluster of 1,000 Cloud TPUs that will be made available free of charge to support a broad range of computationally intensive research projects that might not otherwise be possible.

The TensorFlow Research Cloud offers researchers the following benefits:

  • Access to Google's all-new Cloud TPUs that accelerate both training and inference
  • Up to 180 teraflops of floating-point performance per Cloud TPU
  • 64 GB of ultra-high-bandwidth memory per Cloud TPU
  • Familiar TensorFlow programming interfaces (a brief sketch follows this list)
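
As a small illustration of that last point, here is a minimal, self-contained TensorFlow training script in the tf.keras style; nothing in it is TPU-specific, which is the point: the programming interface stays familiar while TFRC supplies the hardware. The TPU connection and runtime setup are omitted because this post does not describe them.

```python
# Minimal sketch: an ordinary TensorFlow/Keras training script of the kind
# that would be pointed at Cloud TPU hardware under TFRC. The model, dataset,
# and hyperparameters are illustrative only.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# MNIST stands in for a real research workload.
(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0

model.fit(x_train, y_train, epochs=1, batch_size=128)
```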

You can sign up here to request to be notified when the TensorFlow Research Cloud application process opens, and you can optionally share more information about your computational needs. We plan to evaluate applications on a rolling basis in search of the most creative and ambitious proposals.

The TensorFlow Research Cloud program is not limited to academia—we recognize that people with a wide range of affiliations, roles, and expertise are making major machine learning research contributions, and we especially encourage those with non-traditional backgrounds to apply. Access will be granted to selected individuals for limited amounts of compute time, and researchers are welcome to apply multiple times with multiple projects.

Since the main goal of the TensorFlow Research Cloud is to benefit the open machine learning research community as a whole, successful applicants will be expected to do the following:

  • Share their TFRC-supported research with the world through peer-reviewed publications, open-source code, blog posts, or other open media
  • Share concrete, constructive feedback with Google to help us improve the TFRC program and the underlying Cloud TPU platform over time
  • Imagine a future in which ML acceleration is abundant and develop new kinds of machine learning models in anticipation of that future

For businesses interested in using Cloud TPUs for proprietary research and development, we will offer a parallel Cloud TPU Alpha program. You can sign up here to learn more about this program. We recommend participating in the Cloud TPU Alpha program if you are interested in any of the following:

  • Accelerating training of proprietary ML models; models that take weeks to train on other hardware can be trained in days or even hours on Cloud TPUs
  • Accelerating batch processing of industrial-scale datasets: images, videos, audio, unstructured text, structured data, etc.
  • Processing live requests in production using larger and more complex ML models than ever before

We hope the TensorFlow Research Cloud will allow as many researchers as possible to explore the frontier of machine learning research and extend it with new discoveries! We encourage you to sign up today to be among the first to know as more information becomes available.