Tag Archives: machine learning

How publishers can take advantage of machine learning

As the publishing world continues to face new challenges amid the shift to digital, news media and publishers are tasked with unlocking new opportunities. With online news consumption continuing to grow, it’s crucial that publishers take advantage of new technologies to sustain and grow their businesses. Machine learning yields tremendous value for media companies and can help them tackle their hardest problems: engaging readers, increasing profits, and making newsrooms more efficient. Google has a suite of machine learning tools and services that are easy to use. Here are a few ways they can help newsrooms and reporters do their jobs.

1. Improve your newsroom's efficiency 

Editors want their stories to stand out and appeal to readers, and finding just the right photograph or video can be key to bringing a story to life. But with ever-pressing deadlines, there’s often not enough time to find that perfect image. This is where Google Cloud Vision and Video Intelligence can simplify the process, by tagging images and videos based on their actual visual content. That metadata can then be used to make finding the right visual easier and quicker.
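To make that concrete, here's a minimal, hypothetical sketch (plain Python, not the actual Cloud Vision or Video Intelligence clients) of how labels returned by a tagging service could be indexed and searched; the asset names and labels are invented for illustration:

```python
from collections import defaultdict

def build_label_index(asset_labels):
    """Map each lowercase label to the set of assets tagged with it."""
    index = defaultdict(set)
    for asset, labels in asset_labels.items():
        for label in labels:
            index[label.lower()].add(asset)
    return index

def find_assets(index, query):
    """Return assets whose labels cover every term in the query."""
    hits = [index.get(term, set()) for term in query.lower().split()]
    return set.intersection(*hits) if hits else set()

# Labels of the kind a vision service might return (invented examples).
asset_labels = {
    "img_001.jpg": ["stadium", "crowd", "night"],
    "img_002.jpg": ["stadium", "basketball"],
    "clip_003.mp4": ["podium", "microphone"],
}
index = build_label_index(asset_labels)
print(sorted(find_assets(index, "stadium")))
```

An editor searching "stadium basketball" would then get only the assets carrying both labels, which is the kind of quick lookup the metadata enables.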

2.  Better understand your audience

News publishers use analytics tools to grow their audiences, understand what those audiences are reading, and see how they’re discovering content. Google Cloud Natural Language uses machine learning to understand what your content is about, independent of a website’s section and subsection structure (e.g., Sports, Local). Today, Cloud Natural Language announced a new content classifier and entity sentiment analysis that dig into what a story is actually about. For example, an article about a high-tech stadium for the Golden State Warriors may be classified under the “technology” section of a paper, when its content should fall under both “technology” and “sports.” This section-independent tagging can increase readership by driving smarter article recommendations, and it provides better data around trending topics. Naveed Ahmad, Senior Director of Data at Hearst, has emphasized that precision and speed are critical to engaging readers: “Google Cloud Natural Language is unmatched in its accuracy for content classification. At Hearst, we publish several thousand articles a day across 30+ properties and, with natural language processing, we're able to quickly gain insight into what content is being published and how it resonates with our audiences.”
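As a rough illustration of section-independent, multi-label tagging, here is a toy keyword-based classifier; the real Cloud Natural Language classifier is model-based, and the categories and keyword lists below are invented:

```python
# Toy stand-in for a content classifier: score a story against several
# categories at once, so an article can be tagged "technology" AND
# "sports" regardless of which site section it lives in.
CATEGORY_KEYWORDS = {
    "technology": {"high-tech", "sensors", "wifi", "app"},
    "sports": {"stadium", "warriors", "season", "arena"},
    "finance": {"stocks", "earnings", "ipo"},
}

def classify(text, threshold=1):
    """Return every category whose keyword overlap meets the threshold."""
    words = set(text.lower().replace(",", " ").split())
    scores = {cat: len(words & kw) for cat, kw in CATEGORY_KEYWORDS.items()}
    return sorted(cat for cat, s in scores.items() if s >= threshold)

article = "The new high-tech stadium has sensors and wifi in every arena seat"
print(classify(article))
```

The key property is that one article can land in multiple categories, which is what makes the tagging useful for recommendations independent of a site's section tree.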

3. Engage with new audiences

As publications expand their reach into more countries, they have to write for multiple audiences in different languages and many cannot afford multi-language desks. Google Cloud Translation makes translating for different audiences easier by providing a simple interface to translate content into more than 100 languages. Vice launched GoogleFish earlier this year to help editors quickly translate existing Vice articles into the language of their market. Once text was auto-translated, an editor could then push the translation to a local editor to ensure tone and local slang were accurate. Early translation results are very positive and Vice is also uncovering new insights around global content sharing they could not previously identify.

DB Corp, India’s largest newspaper group, publishes 62 editions in four languages and sells about 6 million newspaper copies per day. To serve its growing, diverse readership, reporters use Google Cloud Translation to capture and document interviews and source material for articles, with accuracy rates of 95 percent for Hindi alone.

4. Monetize your audience

So far we’ve primarily outlined ways to improve content creation and reader engagement; monetization, however, is a critical piece for all publishers. Using Cloud Datalab, publishers can identify new subscription opportunities and offerings. The metadata collected from image, video, and content tagging creates an invaluable dataset for advertisers, surfacing audiences interested in local events or personal finance, or those who watch videos about cars or travel. The Washington Post has seen success with an in-house solution that targets native ads to likely interested readers. Lastly, improved content recommendation drives consumption, ultimately improving the bottom line.
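One hedged sketch of the idea: given a log of which tagged topics each reader consumes, advertiser-facing audience segments can be derived with a simple threshold rule. The reader IDs, topics, and the rule itself are illustrative, not an actual Cloud Datalab workflow:

```python
from collections import Counter

def build_segments(view_log, min_views=2):
    """view_log: list of (reader_id, topic) events. A reader joins a
    topic segment once they've viewed that topic at least min_views
    times -- a deliberately simple stand-in for a real segmentation."""
    per_reader = Counter(view_log)
    segments = {}
    for (reader, topic), n in per_reader.items():
        if n >= min_views:
            segments.setdefault(topic, set()).add(reader)
    return segments

views = [
    ("r1", "cars"), ("r1", "cars"), ("r1", "travel"),
    ("r2", "travel"), ("r2", "travel"),
    ("r3", "local events"), ("r3", "local events"),
]
print(build_segments(views))
```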

5. Experiment with new formats

The ability to share news quickly and efficiently is a major concern for newsrooms across the world. Today more than ever, however, readers consume news in different ways across different platforms, and the “one format fits all” approach is not always best. TensorFlow’s open-source text summarization model (textsum) can help publishers quickly experiment with creating short-form content from longer stories, letting them test the best way to share their content across platforms. Reddit recently launched a similar “tl;dr bot” that summarizes long posts into digestible snippets.
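A frequency-based extractive summarizer, far simpler than the neural models referenced above, is enough to sketch the short-form idea:

```python
import re
from collections import Counter

def summarize(text, n_sentences=1):
    """Pick the n highest-scoring sentences, where a sentence's score is
    the summed corpus frequency of its words. Note this naive scorer
    favors longer sentences; real summarization models do far better."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"[a-z']+", text.lower()))
    def score(s):
        return sum(freq[w] for w in re.findall(r"[a-z']+", s.lower()))
    top = sorted(sentences, key=score, reverse=True)[:n_sentences]
    # Preserve the original order of the selected sentences.
    return " ".join(s for s in sentences if s in top)

text = ("The stadium opened today. "
        "It was a big day for the stadium and the city. "
        "Fans cheered.")
print(summarize(text))
```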

6. Keep your content safe for everyone

The comments section can be a place of fruitful discussion as well as toxicity. Users who comment are frequently the most highly engaged on the site overall, and while publishers want to keep sharing open, comment threads can spiral out of control into offensive speech and abuse. Jigsaw’s Perspective is an API that uses machine learning to spot harmful comments, which can then be flagged for moderators. Publishers like the New York Times have leveraged Perspective's technology to improve the way all readers engage with comments. Making it easier to moderate conversations at scale frees up valuable time for editors and improves online discussion.
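The moderation workflow Perspective supports can be sketched with a toy scorer; the wordlist and scoring below are invented stand-ins for the API's model-based toxicity probability:

```python
# Invented blocklist for illustration only.
BLOCKLIST = {"idiot", "stupid", "trash"}

def toxicity_score(comment):
    """Fraction of words that hit the blocklist (crude stand-in for a
    model-predicted toxicity probability)."""
    words = comment.lower().split()
    hits = sum(1 for w in words if w.strip(".,!?") in BLOCKLIST)
    return hits / max(len(words), 1)

def flag_for_review(comments, threshold=0.2):
    """Route high-scoring comments to human moderators."""
    return [c for c in comments if toxicity_score(c) >= threshold]

comments = ["Great reporting, thanks!", "You idiot, this is trash."]
print(flag_for_review(comments))
```

The point is the triage pattern: score everything automatically, and spend scarce moderator time only on the comments above the threshold.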

Example of the New York Times’ moderator dashboard. Each dot represents a negative comment

From the printing press to machine learning, technology continues to spur new opportunities for publishers to reach more people, create engaging content and operate efficiently. We're only beginning to scratch the surface of what machine learning can do for publishers. Keep tabs on The Keyword for the latest developments.

Search more intuitively using natural language processing in Google Cloud Search

Earlier this year, we launched Google Cloud Search, a new G Suite tool that uses machine learning to help organizations find and access information quickly.

Just like Google Search, which lets you phrase queries in a natural, intuitive way, we want to make it easy for you to find information in the workplace using everyday language. According to Gartner research, by 2018, 30 percent or more of enterprise search queries will start with a “what,” “who,” “how” or “when.”*

Today, we’re making it possible to use natural language processing (NLP) technology in Cloud Search so you can track down information—like documents, presentations or meeting details—fast.


Find information fast with Cloud Search

If you’re looking for a Google Doc, you’re more likely to remember who shared it with you than the exact name of the file. Now you can use NLP technology, an intuitive way to search, to find information quickly in Cloud Search.

Type queries into Cloud Search using natural, everyday language. Ask questions like “Docs shared by Mary,” “Who’s Bob’s manager?” or “What docs need my attention?” and Cloud Search will show you answer cards with relevant information.
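For illustration, here's a hypothetical sketch of turning such natural-language questions into structured search requests; the real Cloud Search understanding is model-based, and the patterns and field names below are assumptions, not the product's API:

```python
import re

def parse_query(q):
    """Map a few everyday phrasings onto structured lookups; anything
    unrecognized falls back to a full-text search."""
    q = q.strip().rstrip("?")
    m = re.match(r"(?i)docs shared by (\w+)", q)
    if m:
        return {"type": "document", "shared_by": m.group(1)}
    m = re.match(r"(?i)who's (\w+)'s manager", q)
    if m:
        return {"type": "person", "relation": "manager", "of": m.group(1)}
    return {"type": "fulltext", "text": q}

print(parse_query("Docs shared by Mary"))
```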


Having quicker access to information can help you make better and faster decisions in the workplace. If your organization runs on G Suite Business or Enterprise edition, start using Cloud Search now. If you’re new to Cloud Search, learn more on our website or check out this video to see it in action.


*Gartner, ‘Insight Engines’ Will Power Enterprise Search That is Natural, Total and Proactive, 09 December 2015, refreshed 05 April 2017

Source: Google Cloud


Build your own Machine Learning Visualizations with the new TensorBoard API



When we open-sourced TensorFlow in 2015, it included TensorBoard, a suite of visualizations for inspecting and understanding your TensorFlow models and runs. TensorBoard included a small, predetermined set of visualizations that are generic and applicable to nearly all deep learning applications, such as observing how loss changes over time or exploring clusters in high-dimensional spaces. However, in the absence of reusable APIs, adding new visualizations to TensorBoard was prohibitively difficult for anyone outside of the TensorFlow team, leaving out a long tail of potentially creative, beautiful and useful visualizations that could be built by the research community.

To enable the creation of new and useful visualizations, we're announcing the release of a consistent set of APIs that allows developers to add custom visualization plugins to TensorBoard. We hope developers will use this API to extend TensorBoard and ensure that it covers a wider variety of use cases.

We have updated the existing dashboards (tabs) in TensorBoard to use the new API, so they serve as examples for plugin creators. For the current listing of plugins included within TensorBoard, you can explore the tensorboard/plugins directory on GitHub. One example is the new plugin that generates precision-recall curves, which demonstrates the three parts of a standard TensorBoard plugin:
  • A TensorFlow summary op used to collect data for later visualization. [GitHub]
  • A Python backend that serves custom data. [GitHub]
  • A dashboard within TensorBoard built with TypeScript and Polymer. [GitHub]
Additionally, like other plugins, the “pr_curves” plugin provides a demo that (1) users can look over in order to learn how to use the plugin and (2) the plugin author can use to generate example data during development. To further clarify how plugins work, we’ve also created a barebones TensorBoard “Greeter” plugin. This simple plugin collects greetings (simple strings preceded by “Hello, ”) during model runs and displays them. We recommend starting by exploring (or forking) the Greeter plugin as well as other existing plugins.
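A framework-free sketch of the Greeter plugin's shape, with a stand-in for the summary op that records greetings during runs and a stand-in for the backend route that serves them to a dashboard (the real plugin uses TensorFlow summary ops and a Python HTTP backend; everything here is simplified for illustration):

```python
class GreeterStore:
    def __init__(self):
        self._runs = {}

    def greeting_summary(self, run, name):
        """Stand-in for the plugin's summary op: record 'Hello, <name>'
        against the current run."""
        self._runs.setdefault(run, []).append(f"Hello, {name}")

    def serve_greetings(self, run):
        """Stand-in for the plugin backend's route: return the data the
        dashboard would render for one run."""
        return {"run": run, "greetings": self._runs.get(run, [])}

store = GreeterStore()
store.greeting_summary("run_1", "TensorBoard")
store.greeting_summary("run_1", "world")
print(store.serve_greetings("run_1"))
```

The split mirrors the three-part structure above: data collection during runs, a backend that serves per-run data, and (not shown) a frontend that renders it.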

A notable example of how contributors are already using the TensorBoard API is Beholder, which was recently created by Chris Anderson while working on his master’s degree. Beholder shows a live video feed of data (e.g. gradients and convolution filters) as a model trains. You can watch the demo video here.

We look forward to seeing what innovations will come out of the community. If you plan to contribute a plugin to TensorBoard’s repository, you should get in touch with us first through the issue tracker with your idea so that we can help out and possibly guide you.

Acknowledgements
Dandelion Mané and William Chargin played crucial roles in building this API.



Analyze your business data with Explore in Google Sheets, use BigQuery too

A few months back, we announced a new way for you to analyze data in Google Sheets using machine learning. Instead of relying on lengthy formulas to crunch your numbers, now you can use Explore in Sheets to ask questions and quickly gather insights. Check it out.

Quicker data → problems solved

When you have easier access to data—and can figure out what it means quickly—you can solve problems for your business faster. You might use Explore in Sheets to analyze profit from last year, or look for trends in how your customers sign up for your company’s services. Explore in Sheets can help you track down this information, and more importantly, visualize it.

Getting started is easy. Just click the “Explore” button on the bottom right corner of your screen in Sheets. Type in a question about your data in the search box and Explore responds to your query. Here’s an example of how Sheets can build charts for you.


Syncing Sheets with BigQuery for deeper insights

For those of you who want to take data analysis one step further, you can sync Sheets with BigQuery—Google Cloud’s low-cost data warehouse for analytics.

Compare publicly available datasets in BigQuery, like U.S. Census data or the World Bank’s Global Health, Nutrition, and Population data, with your company’s data in Sheets. For example, you can see how sales of your medical product compare with last year’s disease trends, or cross-reference average inflation in key markets of interest to your business.

Check out this post to see how you might query an example.

Source: Google Cloud


Seminal Ideas from 2007



It is not every day that we have the chance to pause and think about how previous work has led to current successes, how it influenced other advances, and how to reinterpret it in today’s context. That’s what the ICML Test-of-Time Award is meant to achieve, and this year it was given to Sylvain Gelly, now a researcher on the Google Brain team in our Zurich office, and David Silver, now at DeepMind and lead researcher on AlphaGo, for their 2007 paper Combining Online and Offline Knowledge in UCT. This paper presented new approaches for incorporating knowledge, learned offline or created online on the fly, into a search algorithm to augment its effectiveness.

The game of Go is an ancient Chinese board game with tremendous popularity among millions of players worldwide. Since Deep Blue’s success at Chess in the late ’90s, Go has been considered the next benchmark for machine learning and games. Indeed, it has simple rules, can be efficiently simulated, and progress can be measured objectively. However, due to the vast search space of possible moves, building an ML system that plays Go well represented a considerable challenge. Over the last two years, DeepMind’s AlphaGo has pushed the limits of what is possible with machine learning in games, introducing many innovations and technological advances in order to defeat some of the best players in the world [1], [2], [3].

A little more than 10 years before the success of AlphaGo, the classical tree search techniques that were so successful in Chess reigned in computer Go programs, but reached only weak amateur level by human standards. Thanks to Monte-Carlo Tree Search — a (then) new type of search algorithm based on sampling possible outcomes of the game from a position, and incrementally improving the search tree from the results of those simulations — computers were able to search much deeper in the game. This is important because it made it possible to incorporate less human knowledge in the programs — a task which is very hard to do right. Indeed, any missing knowledge that a human expert either cannot express or did not think about may create errors in the computer's evaluation of the game position, and lead to blunders*.
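To make the idea concrete, here is a minimal UCT (Monte-Carlo Tree Search with UCB1 selection) on a toy Nim variant, where players alternately take one or two stones and whoever takes the last stone wins. This is an illustrative skeleton, not MoGo's actual algorithm, which layers substantial domain knowledge on top of this structure:

```python
import math

def moves(stones):
    """Legal moves: take 1 or 2 stones, if available."""
    return [m for m in (1, 2) if m <= stones]

def uct_search(stones, iterations=2000, c=1.4):
    # stats[(stones_left, move)] = [visits, wins for the player moving]
    stats = {}
    for _ in range(iterations):
        path, s, player = [], stones, 0
        # Selection/expansion: walk down, picking moves by UCB1
        # (unvisited moves score infinity, so every move gets tried).
        while s > 0:
            ms = moves(s)
            total = sum(stats.get((s, m), [0, 0])[0] for m in ms) or 1
            def ucb(m, s=s, total=total):
                n, w = stats.get((s, m), [0, 0])
                if n == 0:
                    return float("inf")
                return w / n + c * math.sqrt(math.log(total) / n)
            m = max(ms, key=ucb)
            path.append((s, m, player))
            s -= m
            player ^= 1
        winner = player ^ 1  # whoever moved last took the final stone
        # Backpropagation: credit a win to the player who made each move.
        for s_, m_, p_ in path:
            n, w = stats.get((s_, m_), [0, 0])
            stats[(s_, m_)] = [n + 1, w + (p_ == winner)]
    # Recommend the most-visited move from the root position.
    return max(moves(stones), key=lambda m: stats.get((stones, m), [0, 0])[0])
```

With 4 stones left, taking 1 leaves the opponent a losing position (a multiple of 3), and the search converges on that move purely from simulated playouts, with no Nim-specific knowledge coded in.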

In 2007, Sylvain and David augmented the Monte Carlo Tree Search techniques by exploring two types of knowledge incorporation: (i) online, where the decision for the next move is taken from the current position, using compute resources at the time when the next move is needed, and (ii) offline, where the learning process happens entirely before the game starts, and is summarized into a model that can be applied to all possible positions of a game (even though not all possible positions have been seen during the learning process). This ultimately led to the computer program MoGo, which showed an improvement in performance over previous Go algorithms.


For the online part, they adapted the simple idea that some actions don’t necessarily depend on each other. For example, if you need to book a vacation, the choice of the hotel, flight and car rental is obviously dependent on the choice of your destination. However, once given a destination, these things can be chosen (mostly) independently of each other. The same idea can be applied to Go, where some moves can be estimated partially independently of each other to get a very quick, albeit imprecise, estimate. Of course, when time is available, the exact dependencies are also analysed.

For offline knowledge incorporation, they explored the impact of learning an approximation of the position value by having the computer play against itself using reinforcement learning, and adding that knowledge into the tree search algorithm. They also looked at how expert play patterns, based on human knowledge of the game, can be used in a similar way. That offline knowledge was used in two places: first, it helped focus the program on moves that looked similar to good moves it had learned offline; second, it helped simulate more realistic games when the program tried to estimate a given position's value.

These improvements led to good success on the smaller version of the game of Go (9x9), even beating one professional player in an exhibition game, and also reaching a stronger amateur level on the full game (19x19). And in the years since 2007, we’ve seen many rapid advances (almost on a monthly basis) from researchers all over the world that have allowed the development of algorithms culminating in AlphaGo (which itself introduced many innovations).

Importantly, these algorithms and techniques are not limited to games, but enable improvements in many domains. The contributions introduced by David and Sylvain in their collaboration 10 years ago were an important piece of many of the improvements and advancements in machine learning that benefit our lives daily, and we offer our sincere congratulations to both authors on this well-deserved award.


* As a side note, that’s why machine learning as a whole is such a powerful tool: replacing expert knowledge with algorithms that can more fully explore potential outcomes.

Experimenting with machine learning in media

From the Gutenberg printing press in 1440 to virtual reality today, advances in technology have made it possible to discover new audiences and new ways of expression. And there’s more to come.

Machine learning is the latest technology to change how news, entertainment, lifestyle and sports content is created, distributed and monetized. YouTube, for example, has used machine learning to automatically caption more than one billion videos to make them more accessible to the 300 million+ people who are deaf or hard of hearing.

While many media executives are increasingly aware of machine learning, it's not always apparent which problems are best suited for machine learning and which solutions will have the greatest impact.

Machine learning can help transform your business with new user experiences and better monetization of your content, while reducing your operational costs.

Executives, here are three things to keep in mind as you consider and experiment with machine learning to transform your digital business:

  1. The time to experiment with machine learning is right now. The barriers to using machine learning have never been lower. In the same way companies started thinking about investing in mobile 10 years ago, the time to start exploring machine learning is right now. Solutions like Google Cloud Machine Learning Engine have made powerful machine learning infrastructure available to all without the need for investment in dedicated hardware. Companies can start experimenting today with Google Cloud Machine Learning APIs at no charge—and even developers with no machine learning expertise can do it. For example, in less than a day, Time Inc. used a combination of Cloud Machine Learning APIs to prototype a personalized date night assistant that integrated fashion, lifestyle and events recommendations powered by its vast corpus of editorial content.

  2. Bring together key stakeholders from diverse teams to identify the top problems to solve before you start. Machine learning is not the answer to all of your business woes, but a toolkit that can help solve specific, data-intensive problems at scale. With limited time and people to dedicate to machine learning applications, start by bringing together the right decision makers across your business, product and engineering teams to identify the top problems to solve. Once the top challenges are identified, teams need to work closely with their engineering leads to determine technical feasibility and prioritize where machine learning could have the highest impact. Key questions that will help prioritize efforts are: Can current technology reasonably solve the problem? What does success look like? What training data is needed, and is that data currently available, or does it need to be generated? This was the approach taken during a recent Machine Learning for Media hackathon hosted by Google and the NYC Media Lab, and it paid off with clearer design objectives and better prototypes. For example, the Associated Press saw an opportunity to quickly generate sports highlights from analysis of video footage, so they created an automated, real-time sports highlights tool for editors using the Cloud Video Intelligence API.

  3. Machine learning has a vibrant community that can help you get started. Companies can kickstart their machine learning endeavors by plugging into the vibrant and growing machine learning community. TensorFlow, an open source machine learning framework, offers resources, meetups, and more. And if your company needs more hands-on assistance, Google offers a suite of services through the Advanced Solutions Lab to work side-by-side with companies to build bespoke machine learning solutions. There are also partners with deep technical expertise in machine learning that can help. For example, Quantiphi, a machine learning specialist, has been working closely with media companies to extract meaningful insights from their video content using a hybrid of the Cloud Video Intelligence API and custom models created using TensorFlow. However you decide to integrate machine learning technologies into your business, there's a growing ecosystem of solutions and subject matter experts available to help.

We hope this provided some insight into ways media companies can leverage machine learning—and what executives can do to bring machine learning to their organizations. We look forward to seeing the full potential of machine learning unfold.

Source: Google Cloud


Experimenting with machine learning in media

From the Gutenberg printing press in 1440 to virtual reality today, advances in technology have made it possible to discover new audiences and new ways of expressing. And there’s more to come.

Machine learning is the latest technology to change how news, entertainment, lifestyle and sports content is created, distributed and monetized. YouTube, for example, has used machine learning to automatically caption more than one billion videos to make them more accessible to the 300 million+ people who are deaf or hard of hearing.

While many media executives are increasingly aware of machine learning, it's not always apparent which problems are most suited for machine learning and whose solutions will result in maximum impact.

Machine learning can help transform your business with new user experiences, better monetization of your content and reduce your operational cost.

Executives, here are three things to keep in mind as you consider and experiment with machine learning to transform your  digital business:

  1. The time to experiment with machine learning is right now. The barriers to using machine learning have never been lower. In the same way companies started thinking about investing in mobile 10 years ago, the time to start exploring machine learning is right now. Solutions like Google Cloud Machine Learning Engine have made powerful machine learning infrastructure available to all without the need for investment in dedicated hardware. Companies can start experimenting today with Google Cloud Machine Learning APIs at no charge—and even developers with no machine learning expertise can do it. For example, in less than a day, Time Inc. used a combination of Cloud Machine Learning APIs to prototype a personalized date night assistant that integrated fashion, lifestyle and events recommendations powered by its vast corpus of editorial content.

  2. Bring together key stakeholders from diverse teams to identify the top problems to solve before you start. Machine learning is not the answer to all of your business woes, but a toolkit that can help solve specific, data-intensive problems at scale. With limited time and people to dedicate to machine learning applications, start by bringing together the right decision makers across your business, product and engineering teams to identify the top problems to solve. Once the top challenges are identified, teams need to work closely with their engineering leads to determine technical feasibility and prioritize where machine learning could have the highest impact. Key questions that will help prioritize efforts are: Can current technology reasonably solve the problem? What does success look like? What training data is needed, and is that data currently available, or does it need to be generated? This was the approach taken during a recent Machine Learning for Media hackathon hosted by Google and the NYC Media Lab, and it paid off with clearer design objectives and better prototypes. For example, the Associated Press saw an opportunity to quickly generate sports highlights from analysis of video footage, so they created an automated, real-time sports highlights tool for editors using the Cloud Video Intelligence API.

  3. Machine learning has a vibrant community that can help you get started. Companies can kickstart their machine learning endeavors by plugging into the vibrant and growing machine learning community. TensorFlow, an open source machine learning framework, offers resources, meetups and more. And if your company needs more hands-on assistance, Google offers a suite of services through the Advanced Solutions Lab to work side-by-side with companies to build bespoke machine learning solutions. There are also partners with deep technical expertise in machine learning that can help. For example, Quantiphi, a machine learning specialist, has been working closely with media companies to extract meaningful insights from their video content using a hybrid of the Cloud Video Intelligence API and custom models created using TensorFlow. However you decide to integrate machine learning technologies into your business, there's a growing ecosystem of solutions and subject matter experts available to help.

We hope this provided some insight into ways media companies can leverage machine learning—and what executives can do to bring machine learning to their organizations. We look forward to seeing the full potential of machine learning unfold.

Source: Google Cloud


AIY Projects update: new maker projects, new partners, new kits

Posted by Billy Rutledge, Director, AIY Projects

Makers are hands-on when it comes to making change. We're explorers, hackers and problem solvers who build devices, ecosystems and art (sometimes a combination of the three) on the basis of our own (often unconventional) ideas. So when my team first set out to empower makers of all types and ages with the AI technology we've honed at Google, we knew whatever we built had to be open and accessible. We steered clear of the limitations that come with platform and software stack requirements, high cost and complex setup, and fixed our focus on the curiosity and inventiveness that inspire makers around the world.

When we launched our Voice Kit in May with help from our partner Raspberry Pi, it sold out globally in just a few hours. We got the message loud and clear: there is genuine demand among do-it-yourselfers for artificial intelligence that makes human-to-machine interaction more like natural human interaction.

Last week we announced the Speech Commands Dataset, a collaboration between the TensorFlow and AIY teams. The dataset contains 65,000 one-second-long utterances of 30 short words from thousands of different contributors via the AIY website, and it allows you to build simple voice interfaces for applications. We're currently in the process of integrating the dataset with the next release of the Voice Kit, so makers can build devices that respond to simple voice commands without the press of a button or an internet connection.
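The dataset's clips follow a simple convention: one-second, 16 kHz, 16-bit mono WAV files, sorted into one folder per word. As a minimal sketch of working with that format (the file name `demo_clip.wav` and the tone generator are illustrative stand-ins, not part of the dataset), here is how you might write and sanity-check a clip using only the Python standard library:

```python
import math
import struct
import wave

SAMPLE_RATE = 16000  # Speech Commands clips are one-second, 16 kHz, 16-bit mono WAVs


def write_tone(path, freq=440.0, seconds=1.0):
    """Generate a sine tone and save it in the same format as a dataset clip."""
    n_frames = int(SAMPLE_RATE * seconds)
    frames = b"".join(
        struct.pack("<h", int(32767 * 0.5 * math.sin(2 * math.pi * freq * i / SAMPLE_RATE)))
        for i in range(n_frames)
    )
    with wave.open(path, "wb") as w:
        w.setnchannels(1)            # mono
        w.setsampwidth(2)            # 16-bit samples
        w.setframerate(SAMPLE_RATE)  # 16 kHz
        w.writeframes(frames)


def clip_duration(path):
    """Return a clip's length in seconds, a useful sanity check before training."""
    with wave.open(path, "rb") as w:
        return w.getnframes() / w.getframerate()


write_tone("demo_clip.wav")
print(clip_duration("demo_clip.wav"))  # 1.0
```

Checks like this are handy when mixing your own recordings into the dataset, since mismatched sample rates or durations are a common source of silent training bugs.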

Today, you can pre-order your Voice Kit, which will be available for purchase in stores and online through Micro Center.

Or you may have to resort to the hack that maker Shivasiddarth created when the Voice Kit with MagPi #57 sold out in May, and then again (within 17 minutes) earlier this month.

Cool ways that makers are already using the Voice Kit

Martin Mander created a retro-inspired intercom that he calls 1986 Google Pi Intercom. He describes it as "a wall-mounted Google voice assistant using a Raspberry PI 3 and the Google AIY (Artificial Intelligence Yourself) [voice] kit." He used a mid-80s intercom that he bought on sale for £4. It cleaned up well!

Get the full story from Martin and see what SlashGear had to say about the project.

(This one's for Doctor Who fans) Tom Minnich created a Dalek-voiced assistant.

He offers a tutorial on how you can modify the Voice Kit to do something similar — perhaps create a Drogon-voiced assistant?

Victor Van Hee used the Voice Kit to create a voice-activated internet streaming radio that can play other types of audio files as well. He provides instructions, so you can do the same.

The Voice Kit is currently available in the U.S. We'll be expanding globally by the end of this year. Stay tuned here, where we'll share the latest updates. The strong demand for the Voice Kit drives us to keep the momentum going on AIY Projects.

Inspiring makers with kits that understand human speech, vision and movement

What we build next will include vision and motion detection and will go hand in hand with our existing Voice Kit. AIY Project kits will soon offer makers the "eyes," "ears," "voice" and sense of "balance" to allow simple yet powerful device interfaces.

We'd love to bake your input into our next releases. Go to hackster.io or leave a comment to start a conversation with us. Show us and the maker community what you're working on by using the hashtag #AIYprojects on social media.