Tag Archives: machine learning

Experimenting with machine learning in media

From the Gutenberg printing press in 1440 to virtual reality today, advances in technology have made it possible to discover new audiences and new forms of expression. And there's more to come.

Machine learning is the latest technology to change how news, entertainment, lifestyle and sports content is created, distributed and monetized. YouTube, for example, has used machine learning to automatically caption more than one billion videos to make them more accessible to the 300 million+ people who are deaf or hard of hearing.

While many media executives are increasingly aware of machine learning, it's not always apparent which problems are best suited to machine learning and which solutions will deliver the greatest impact.

Machine learning can help transform your business with new user experiences, better monetization of your content and lower operational costs.

Executives, here are three things to keep in mind as you consider and experiment with machine learning to transform your digital business:

  1. The time to experiment with machine learning is right now. The barriers to using machine learning have never been lower. In the same way companies started thinking about investing in mobile 10 years ago, the time to start exploring machine learning is right now. Solutions like Google Cloud Machine Learning Engine have made powerful machine learning infrastructure available to all without the need for investment in dedicated hardware. Companies can start experimenting today with Google Cloud Machine Learning APIs at no charge—and even developers with no machine learning expertise can do it. For example, in less than a day, Time Inc. used a combination of Cloud Machine Learning APIs to prototype a personalized date night assistant that integrated fashion, lifestyle and events recommendations powered by its vast corpus of editorial content.

  2. Bring together key stakeholders from diverse teams to identify the top problems to solve before you start. Machine learning is not the answer to all of your business woes, but a toolkit that can help solve specific, data-intensive problems at scale. With limited time and people to dedicate to machine learning applications, start by bringing together the right decision makers across your business, product and engineering teams to identify the top problems to solve. Once the top challenges are identified, teams need to work closely with their engineering leads to determine technical feasibility and prioritize where machine learning could have the highest impact. Key questions that will help prioritize efforts are: Can current technology reasonably solve the problem? What does success look like? What training data is needed, and is that data currently available or does it need to be generated? This was the approach taken during a recent Machine Learning for Media hackathon hosted by Google and the NYC Media Lab, and it paid off with clearer design objectives and better prototypes. For example, the Associated Press saw an opportunity to quickly generate sports highlights from analysis of video footage, so it created an automated, real-time sports highlights tool for editors using the Cloud Video Intelligence API (a minimal code sketch of calling this API follows this list).

  3. Machine learning has a vibrant community that can help you get started. Companies can kickstart their machine learning endeavors by plugging into the vibrant and growing machine learning community. TensorFlow, an open source machine learning framework, offers resources, meetups, and more. And if your company needs more hands-on assistance, Google offers a suite of services through the Advanced Solutions Lab to work side-by-side with companies to build bespoke machine learning solutions. There are also partners with deep technical expertise in machine learning that can help. For example, Quantiphi, a machine learning specialist, has been working closely with media companies to extract meaningful insights from their video content using a hybrid of the Cloud Video Intelligence API and custom models created using TensorFlow. However you decide to integrate machine learning technologies into your business, there's a growing ecosystem of solutions and subject matter experts that are available to help.
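To make item 2 concrete, here is a minimal sketch of the kind of building block behind a highlights tool like the Associated Press example: requesting label annotations for a video with the Cloud Video Intelligence API. It assumes a recent version of the google-cloud-videointelligence Python client and a hypothetical clip at gs://my-bucket/match.mp4; treat it as a starting point, not the AP's actual pipeline.

```python
from google.cloud import videointelligence

# Hedged sketch: annotate a (hypothetical) sports clip and print the labeled
# segments an editor could review when assembling highlights.
client = videointelligence.VideoIntelligenceServiceClient()
operation = client.annotate_video(
    request={
        "input_uri": "gs://my-bucket/match.mp4",  # hypothetical Cloud Storage path
        "features": [videointelligence.Feature.LABEL_DETECTION],
    }
)
result = operation.result(timeout=300)  # long-running operation

for label in result.annotation_results[0].segment_label_annotations:
    print(label.entity.description)
    for segment in label.segments:
        start = segment.segment.start_time_offset.total_seconds()
        end = segment.segment.end_time_offset.total_seconds()
        print(f"  {start:.1f}s to {end:.1f}s (confidence {segment.confidence:.2f})")
```

The same pattern applies to the other Cloud Machine Learning APIs mentioned above: send content, get structured annotations back, and build editorial tooling on top of the results.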

We hope this provided some insight into ways media companies can leverage machine learning—and what executives can do to bring machine learning to their organizations. We look forward to seeing the full potential of machine learning unfold.

Source: Google Cloud


AIY Projects update: new maker projects, new partners, new kits

Posted by Billy Rutledge, Director, AIY Projects

Makers are hands-on when it comes to making change. We're explorers, hackers and problem solvers who build devices, ecosystems, art (sometimes a combination of the three) on the basis of our own (often unconventional) ideas. So when my team first set out to empower makers of all types and ages with the AI technology we've honed at Google, we knew whatever we built had to be open and accessible. We steered clear of limitations that come from platform and software stack requirements, high cost and complex setup, and fixed our focus on the curiosity and inventiveness that inspire makers around the world.

When we launched our Voice Kit with help from our partner Raspberry Pi in May and sold out globally in just a few hours, we got the message loud and clear. There is a genuine demand among do-it-yourselfers for artificial intelligence that makes human-to-machine interaction more like natural human interaction.

Last week we announced the Speech Commands Dataset, a collaboration between the TensorFlow and AIY teams. The dataset has 65,000 one-second-long utterances of 30 short words, contributed by thousands of different people through the AIY website, and allows you to build simple voice interfaces for applications. We're currently in the process of integrating the dataset with the next release of the Voice Kit, so that makers can build devices that respond to simple voice commands without the press of a button or an internet connection.
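If you want to explore the dataset before that release lands, here is a minimal sketch, assuming the speech_commands dataset packaged in TensorFlow Datasets; the dataset name, splits and label set there may differ slightly from the raw release described above.

```python
import tensorflow_datasets as tfds

# Hedged sketch: load the Speech Commands clips through TensorFlow Datasets and
# inspect one labeled one-second utterance. Dataset name and splits are assumptions.
ds, info = tfds.load("speech_commands", split="train", with_info=True, as_supervised=True)
print(info.features["label"].names)  # the short command words, e.g. "yes", "no", "up"

for audio, label in ds.take(1):
    print(audio.shape, info.features["label"].int2str(int(label)))
```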

Today, you can pre-order your Voice Kit, which will be available for purchase in stores and online through Micro Center.

Or you may have to resort to the hack that maker Shivasiddarth created when the Voice Kit with MagPi #57 sold out in May, and then again (within 17 minutes) earlier this month.

Cool ways that makers are already using the Voice Kit

Martin Mander created a retro-inspired intercom that he calls 1986 Google Pi Intercom. He describes it as "a wall-mounted Google voice assistant using a Raspberry Pi 3 and the Google AIY (Artificial Intelligence Yourself) [voice] kit." He used a mid-80s intercom that he bought on sale for £4. It cleaned up well!

Get the full story from Martin and see what Slashgear had to say about the project.

(This one's for Doctor Who fans) Tom Minnich created a Dalek-voiced assistant.

He offers a tutorial on how you can modify the Voice Kit to do something similar — perhaps create a Drogon-voiced assistant?

Victor Van Hee used the Voice Kit to create a voice-activated internet streaming radio that can play other types of audio files as well. He provides instructions, so you can do the same.

The Voice Kit is currently available in the U.S. We'll be expanding globally by the end of this year. Stay tuned here, where we'll share the latest updates. The strong demand for the Voice Kit drives us to keep the momentum going on AIY Projects.

Inspiring makers with kits that understand human speech, vision and movement

What we build next will include vision and motion detection and will go hand in hand with our existing Voice Kit. AIY Project kits will soon offer makers the "eyes," "ears," "voice" and sense of "balance" to allow simple yet powerful device interfaces.

We'd love to bake your input into our next releases. Go to hackster.io or leave a comment to start up a conversation with us. Show us and the maker community what you're working on by using the hashtag #AIYprojects on social media.

AIY Voice Kit: Inspiring the maker community

Recently, we launched AIY Voice Kit, a do-it-yourself voice recognition kit for Raspberry Pi-based maker projects. Our initial offering sold out globally in just a few hours, so today, we’re happy to announce that more AIY Voice Kits will be available for purchase in stores and online in the coming weeks. You can pre-order your kit today with Micro Center.

The Voice Kit includes the same VoiceHAT (Hardware Accessory on Top), mic board, speaker, components, connectors and cardboard form for easy assembly that we first made available in the initial offering of Voice Kit with MagPi #57 in May. (Creative makers have since responded with their own recipes while waiting for more inventory.)  

The Google Assistant SDK is configured by default to bring hotword detection, voice control, natural language understanding, Google’s smarts and more to your Voice Kit. You can extend the project further with local vocabularies using TensorFlow, Google’s open source machine learning framework for custom voice user interfaces.  

Our goal with AIY Projects has always been to make artificial intelligence open and accessible for makers of all ages. Makers often strive to solve real world problems in creative ways, and we're already seeing makers do some cool things with their Voice Kits. Here are a few examples:

Cool things makers are doing with Voice Kit 

Martin Mander created a retro-inspired intercom that he calls 1986 Google Pi Intercom. He describes it as “a wall-mounted Google voice assistant using a Raspberry Pi 3 and the AIY Voice Kit.” He used a mid-80s intercom that he bought on sale for £4. It cleaned up well!


Get the full story from Martin and see what Slashgear had to say about the project.

(This one’s for Doctor Who fans) Tom Minnich created a Dalek-voiced assistant.


He offers a tutorial on how you can modify the Voice Kit to do something similar — perhaps create a Drogon-voiced assistant?

Victor Van Hee used the Voice Kit to create a voice-activated internet streaming radio that can play other types of audio files as well. He provides instructions, so you can do the same.

The Voice Kit is currently available in the U.S. We’ll be expanding globally by the end of this year. Stay tuned here, where we’ll share the latest updates.

The positive reception to Voice Kit has encouraged us to keep the momentum going with more AIY Projects. We’ll soon bring makers the “eyes,” “ears,” “voice” and sense of “balance” to allow simple, powerful device interfaces.

Your input is critical to helping us plan our next releases, so let us know how AI can improve your projects, and solve real problems. Join the conversation at hackster.io, and share what you’re working on using the #AIYprojects hashtag. We can’t wait to see what you make.

Exploring and Visualizing an Open Global Dataset



Machine learning systems are increasingly influencing many aspects of everyday life, and are used by both the hardware and software products that serve people globally. As such, researchers and designers seeking to create products that are useful and accessible for everyone often face the challenge of finding data sets that reflect the variety and backgrounds of users around the world. In order to train these machine learning systems, open, global — and growing — datasets are needed.

Over the last six months, we’ve seen such a dataset emerge from users of Quick, Draw!, Google’s latest approach to helping wide, international audiences understand how neural networks work. A group of Googlers designed Quick, Draw! as a way for anyone to interact with a machine learning system in a fun way, drawing everyday objects like trees and mugs. The system then tries to guess what the drawing depicts within 20 seconds. While the goal of Quick, Draw! was simply to create a fun game that runs on machine learning, it has resulted in 800 million drawings from twenty million people in 100 nations, from Brazil to Japan to the U.S. to South Africa.

And now we are releasing an open dataset based on these drawings so that people around the world can contribute to, analyze, and inform product design with this data. The dataset currently includes 50 million drawings Quick, Draw! players have generated (we will continue to release more of the 800 million drawings over time).
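As a rough illustration of what is in the release, here is a minimal sketch for reading one per-category file, assuming the simplified ndjson format (one JSON object per line with fields such as "word", "countrycode", "recognized" and "drawing"); check the dataset documentation for the exact file names and schema.

```python
import json
from collections import Counter

# Hedged sketch: count how many "cat" drawings come from each country, assuming a
# simplified ndjson file where each line holds a JSON object with a "countrycode"
# field and a "drawing" field containing [x_points, y_points] stroke pairs.
countries = Counter()
with open("cat.ndjson") as f:  # hypothetical local copy of one category file
    for line in f:
        countries[json.loads(line)["countrycode"]] += 1

print(countries.most_common(10))
```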

It’s a considerable amount of data; and it’s also a fascinating lens into how to engage a wide variety of people to participate in (1) training machine learning systems, no matter what their technical background; and (2) the creation of open data sets that reflect a wide spectrum of cultures and points of view.
Seeing national — and global — patterns in one glance
To understand visual patterns within the dataset quickly and efficiently, we worked with artist Kyle McDonald to overlay thousands of drawings from around the world. This helped us create composite images and identify trends in each nation, as well as across all nations. We made animations of 1,000 layered international drawings of cats and chairs, below, to share how we searched for visual trends with this data:

Cats, made from 1,000 drawings from around the world:
Chairs, made from 1,000 drawings from around the world:
Doodles of naturally recurring objects, like cats (or trees, rainbows, or skulls) often look alike across cultures:
However, for objects that might be familiar to some cultures, but not others, we saw notable differences. Sandwiches took defined forms or were a jumbled set of lines; mug handles pointed in opposite directions; and chairs were drawn facing forward or sideways, depending on the nation or region of the world:
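One simple way to reproduce the overlay idea is to plot many drawings of the same object on a single canvas at very low opacity, so strokes that many people share reinforce each other into a composite. A minimal matplotlib sketch, again assuming the simplified ndjson stroke format:

```python
import json
import matplotlib.pyplot as plt

# Hedged sketch: overlay 1,000 cat drawings at low opacity to approximate the
# composite images described above.
fig, ax = plt.subplots(figsize=(5, 5))
with open("cat.ndjson") as f:  # hypothetical local copy of one category file
    for i, line in enumerate(f):
        if i >= 1000:
            break
        for xs, ys in json.loads(line)["drawing"]:  # each stroke is [x_points, y_points]
            ax.plot(xs, ys, color="black", alpha=0.01, linewidth=1)

ax.invert_yaxis()  # drawing coordinates grow downward
ax.axis("off")
plt.savefig("cats_composite.png", dpi=200)
```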
One size doesn’t fit all
These composite drawings, we realized, could reveal how perspectives and preferences differ between audiences from different regions, from the type of bread used in sandwiches to the shape of a coffee cup, to the aesthetic of how to depict objects so they are visually appealing. For example, a more straightforward, head-on view was more consistent in some nations; side angles in others.

Overlaying the images also revealed how to improve how we train neural networks when we lack a variety of data — even within a large, open, and international data set. For example, when we analyzed 115,000+ drawings of shoes in the Quick, Draw! dataset, we discovered that a single style of shoe, which resembles a sneaker, was overwhelmingly represented. Because it was so frequently drawn, the neural network learned to recognize only this style as a “shoe.”

But just as in the physical world, in the realm of training data, one size does not fit all. We asked, how can we consistently and efficiently analyze datasets for clues that could point toward latent bias? And what would happen if a team built a classifier based on a non-varied set of data?
Diagnosing data for inclusion
With the open source tool Facets, released last month as part of Google’s PAIR initiative, one can see patterns across a large dataset quickly. The goal is to efficiently, and visually, diagnose how representative large datasets, like the Quick, Draw! dataset, may be.

Here’s a screenshot from the Quick, Draw! dataset within the Facets tool. The tool helped us position thousands of drawings by "faceting" them in multiple dimensions by their feature values, such as country (up to 100 countries). You, too, can filter for features such as “random faces” in a 10-country view, which can then be expanded to 100 countries. At a glance, you can see proportions of country representations. You can also zoom in and see details of each individual drawing, allowing you to dive deeper into single data points. This is especially helpful when working with a large visual data set like Quick, Draw!, allowing researchers to explore for subtle differences or anomalies, or to begin flagging small-scale visual trends that might emerge later as patterns within the larger data set.
Here’s the same Quick, Draw! data for “random faces,” faceted for 94 countries and seen from another view. Within the few seconds it takes Facets to load the drawings in this new visualization, it’s clear that the data is overwhelmingly representative of the United States and European countries. This is logical given that the Quick, Draw! game is currently only available in English. We plan to add more languages over time. However, the visualization shows us that Brazil and Thailand seem to be non-English-speaking nations that are relatively well represented within the data. This suggested to us that designers could potentially research what elements of the interface design may have worked well in these countries. Then, we could use that information to improve Quick, Draw! in its next iteration for other global, non-English-speaking audiences. We’re also using the faceted data to help us figure out how to prioritize local languages for future translations.
Another outcome of using Facets to diagnose the Quick, Draw! data for inclusion was to identify concrete ways that anyone can improve the variety of data, as well as check for potential biases. Improvements could include:
  • Changing protocols for human rating of data or content generation, so that the data is more accurately representative of local or global populations
  • Analyzing subgroups of data and identifying the database equivalent of "intersectionality" surfaced within visual patterns
  • Augmenting and reweighting data so that it is more inclusive (one simple re-weighting recipe is sketched below)
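For the last point, one simple recipe is to give each example a sample weight inversely proportional to how common its group is, so over-represented countries do not dominate training. A minimal sketch with a toy stand-in for the real metadata (column names are assumptions):

```python
import pandas as pd

# Hedged sketch: "balanced" per-country sample weights, so each country contributes
# roughly equally during training. The toy DataFrame stands in for real metadata.
df = pd.DataFrame({"countrycode": ["US", "US", "US", "US", "BR", "TH"]})
counts = df["countrycode"].value_counts()
df["sample_weight"] = len(df) / (len(counts) * df["countrycode"].map(counts))
print(df)
```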
By releasing this dataset, and tools like Facets, we hope to facilitate the exploration of more inclusive approaches to machine learning, and to turn those observations into opportunities for innovation. We’re just beginning to draw insights from both Quick, Draw! and Facets. And we invite you to draw more with us, too.

Acknowledgements
Jonas Jongejan, Henry Rowley, Takashi Kawashima, Jongmin Kim, and Nick Fox-Gieg built Quick, Draw! in collaboration with Google Creative Lab and Google’s Data Arts Team. The video about fairness in machine learning was created by Teo Soares, Alexander Chen, Bridget Prophet, Lisa Steinman, and JR Schmidt from Google Creative Lab. James Wexler, Jimbo Wilson, and Mahima Pushkarna, of PAIR, designed Facets, a project led by Martin Wattenberg and Fernanda Viégas, Senior Staff Research Scientists on the Google Brain team, and UX Researcher Jess Holbrook. Ian Johnson from the Google Cloud team contributed to the visualizations of overlaid drawings.

Google at KDD’17: Graph Mining and Beyond



The 23rd ACM conference on Knowledge Discovery and Data Mining (KDD’17), a main venue for academic and industry research in data science, information retrieval, data mining and machine learning, was held last week in Halifax, Canada. Google has historically been an active participant in KDD, and this year was no exception, with Googlers contributing numerous papers and participating in workshops.

In addition to our overall participation, we are happy to congratulate fellow Googler Bryan Perozzi for receiving the SIGKDD 2017 Doctoral Dissertation Award, which serves to recognize excellent research by doctoral candidates in the field of data mining and knowledge discovery. This award was given in recognition of his thesis on the topic of machine learning on graphs performed at Stony Brook University, under the advisorship of Steven Skiena. Part of his thesis was developed during his internships at Google. The thesis dealt with using a restricted set of local graph primitives (such as ego-networks and truncated random walks) to effectively exploit the information around each vertex for classification, clustering, and anomaly detection. Most notably, the work introduced the random-walk paradigm for graph embedding with neural networks in DeepWalk.

DeepWalk: Online Learning of Social Representations, originally presented at KDD'14, outlines a method that uses local information from truncated random walks to learn latent representations of nodes in a graph (e.g., users in a social network). The core idea was to treat each segment of a random walk as a sentence “in the language of the graph.” These segments could then be used as input for neural network models to learn representations of the graph’s nodes, using sequence modeling methods like word2vec (which had just been developed at the time). This research continues at Google, most recently with Learning Edge Representations via Low-Rank Asymmetric Projections.
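The core recipe is simple enough to sketch: generate truncated random walks from each node, treat each walk as a sentence, and train a skip-gram model on the resulting corpus. The following illustration assumes networkx and a recent version of gensim; the hyperparameters are illustrative, not the paper's.

```python
import random
import networkx as nx
from gensim.models import Word2Vec

# Hedged sketch of the DeepWalk idea: random walks become "sentences" for a
# word2vec-style skip-gram model, whose word vectors become node embeddings.
def random_walks(graph, num_walks=10, walk_length=40):
    walks = []
    nodes = list(graph.nodes())
    for _ in range(num_walks):
        random.shuffle(nodes)
        for start in nodes:
            walk = [start]
            while len(walk) < walk_length:
                neighbors = list(graph.neighbors(walk[-1]))
                if not neighbors:
                    break
                walk.append(random.choice(neighbors))
            walks.append([str(node) for node in walk])
    return walks

graph = nx.karate_club_graph()  # small example graph
model = Word2Vec(random_walks(graph), vector_size=64, window=5, sg=1, min_count=0)
print(model.wv["0"][:5])  # first few embedding dimensions for node 0
```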

The full list of Google contributions at KDD’17 is listed below (Googlers highlighted in blue).

Organizing Committee
Panel Chair: Andrew Tomkins
Research Track Program Chair: Ravi Kumar
Applied Data Science Track Program Chair: Roberto J. Bayardo
Research Track Program Committee: Sergei Vassilvitskii, Alex Beutel, Abhimanyu Das, Nan Du, Alessandro Epasto, Alex Fabrikant, Silvio Lattanzi, Kristen Lefevre, Bryan Perozzi, Karthik Raman, Steffen Rendle, Xiao Yu
Applied Data Science Program Track Committee: Edith Cohen, Ariel Fuxman, D. Sculley, Isabelle Stanton, Martin Zinkevich, Amr Ahmed, Azin Ashkan, Michael Bendersky, James Cook, Nan Du, Balaji Gopalan, Samuel Huston, Konstantinos Kollias, James Kunz, Liang Tang, Morteza Zadimoghaddam

Awards
Doctoral Dissertation Award: Bryan Perozzi, for Local Modeling of Attributed Graphs: Algorithms and Applications.

Doctoral Dissertation Runner-up Award: Alex Beutel, for User Behavior Modeling with Large-Scale Graph Analysis.

Papers
Ego-Splitting Framework: from Non-Overlapping to Overlapping Clusters
Alessandro Epasto, Silvio Lattanzi, Renato Paes Leme

HyperLogLog Hyperextended: Sketches for Concave Sublinear Frequency Statistics
Edith Cohen

Google Vizier: A Service for Black-Box Optimization
Daniel Golovin, Benjamin Solnik, Subhodeep Moitra, Greg Kochanski, John Karro, D. Sculley

Quick Access: Building a Smart Experience for Google Drive
Sandeep Tata, Alexandrin Popescul, Marc Najork, Mike Colagrosso, Julian Gibbons, Alan Green, Alexandre Mah, Michael Smith, Divanshu Garg, Cayden Meyer, Reuben Kan

TFX: A TensorFlow-Based Production-Scale Machine Learning Platform
Denis Baylor, Eric Breck, Heng-Tze Cheng, Noah Fiedel, Chuan Yu Foo, Zakaria Haque, Salem Haykal, Mustafa Ispir, Vihan Jain, Levent Koc, Chiu Yuen Koo, Lukasz Lew, Clemens Mewald, Akshay Modi, Neoklis Polyzotis, Sukriti Ramesh, Sudip Roy, Steven Whang, Martin Wicke, Jarek Wilkiewicz, Xin Zhang, Martin Zinkevich

Construction of Directed 2K Graphs
Balint Tillman, Athina Markopoulou, Carter T. Butts, Minas Gjoka

A Practical Algorithm for Solving the Incoherence Problem of Topic Models In Industrial Applications
Amr Ahmed, James Long, Dan Silva, Yuan Wang

Train and Distribute: Managing Simplicity vs. Flexibility in High-Level Machine Learning Frameworks
Heng-Tze Cheng, Lichan Hong, Mustafa Ispir, Clemens Mewald, Zakaria Haque, Illia Polosukhin, Georgios Roumpos, D. Sculley, Jamie Smith, David Soergel, Yuan Tang, Philip Tucker, Martin Wicke, Cassandra Xia, Jianwei Xie

Learning to Count Mosquitoes for the Sterile Insect Technique
Yaniv Ovadia, Yoni Halpern, Dilip Krishnan, Josh Livni, Daniel Newburger, Ryan Poplin, Tiantian Zha, D. Sculley

Workshops
13th International Workshop on Mining and Learning with Graphs
Keynote Speaker: Vahab Mirrokni - Distributed Graph Mining: Theory and Practice
Contributed talks include:
HARP: Hierarchical Representation Learning for Networks
Haochen Chen, Bryan Perozzi, Yifan Hu and Steven Skiena

Fairness, Accountability, and Transparency in Machine Learning
Contributed talks include:
Fair Clustering Through Fairlets
Flavio Chierichetti, Ravi Kumar, Silvio Lattanzi, Sergei Vassilvitskii
Data Decisions and Theoretical Implications when Adversarially Learning Fair Representations
Alex Beutel, Jilin Chen, Zhe Zhao, Ed H. Chi

Tutorial
TensorFlow
Rajat Monga, Martin Wicke, Daniel ‘Wolff’ Dobson, Joshua Gordon

Harness the Power of Machine Learning in Your Browser with Deeplearn.js



Machine learning (ML) has become an increasingly powerful tool, one that can be applied to a wide variety of areas spanning object recognition, language translation, health and more. However, the development of ML systems is often restricted to those with computational resources and the technical expertise to work with commonly available ML libraries.

With PAIR — an initiative to study and redesign human interactions with ML — we want to open machine learning up to as many people as possible. In pursuit of that goal, we are excited to announce deeplearn.js 0.1.0, an open source WebGL-accelerated JavaScript library for machine learning that runs entirely in your browser, with no installations and no backend.
There are many reasons to bring machine learning into the browser. A client-side ML library can be a platform for interactive explanations, for rapid prototyping and visualization, and even for offline computation. And if nothing else, the browser is one of the world's most popular programming platforms.

While web machine learning libraries have existed for years (e.g., Andrej Karpathy's convnetjs), they have been limited by the speed of JavaScript, or have been restricted to inference rather than training (e.g., TensorFire). By contrast, deeplearn.js offers a significant speedup by exploiting WebGL to perform computations on the GPU, along with the ability to do full backpropagation.

The API mimics the structure of TensorFlow and NumPy, with a delayed execution model for training (like TensorFlow), and an immediate execution model for inference (like NumPy). We have also implemented versions of some of the most commonly-used TensorFlow operations. With the release of deeplearn.js, we will be providing tools to export weights from TensorFlow checkpoints, which will allow authors to import them into web pages for deeplearn.js inference.

You can explore the potential of this library by training a convolutional neural network to recognize photos and handwritten digits — all in your browser without writing a single line of code.
We're releasing a series of demos that show deeplearn.js in action. Play with an image classifier that uses your webcam in real-time and watch the network’s internal representations of what it sees. Or generate abstract art videos at a smooth 60 frames per second. The deeplearn.js homepage contains these and other demos.

Our vision is that this library will significantly increase visibility and engagement with machine learning, giving developers access to powerful tools while simultaneously providing the everyday user with a way to interact with them. We’re looking forward to collaborating with the open source community to drive this vision forward.

Apply to Google Developers Launchpad Studio for AI & ML focused startups

The mission of Google Developers Launchpad is to enable startups from around the world to build great companies. In the last 4 years, we’ve learned a lot while supporting early and late-stage founders. From working with dynamic startups---such as teams applying Artificial Intelligence technology to solving transportation problems in Israel, improving tele-medicine in Brazil, and optimizing online retail in India---we’ve learned that these startups require specialized services to help them scale.

So today, we’re launching a new initiative - Google Developers Launchpad Studio - a full-service studio that provides tailored technical and product support to Artificial Intelligence & Machine Learning startups, all in one place.

Whether you’re a 3-person team or an established post-Series B startup applying AI/ML to your product offering, we want to start connecting with you.

Applications to join Launchpad Studio are now open and you can apply here.

The global headquarters of Launchpad Studio will be based in San Francisco at Launchpad Space, with events and activities taking place in Tel Aviv and New York. We plan to expand our activities and events to Toronto, London, Bangalore, and Singapore soon.

As a member of the Studio program, you’ll find services tailored to your startup’s unique needs and challenges, such as:
  • Applied AI integration toolkits: Datasets, testing environments, rapid prototyping, simulation tools, and architecture troubleshooting.
  • Product validation support: Industry-specific proof of concept and pilots, as well as use case workshops with Fortune 500 industry practitioners and other experts.
  • Access to AI experts: Best practice advice from our global community of AI thought leaders, which includes Peter Norvig, Dan Ariely, Yossi Matias, Chris DiBona and more.
  • Access to AI practitioners and investors: Interaction with some of the best AI and ML engineers, product managers, industry leaders and VCs from Google, Silicon Valley, and other international locations.
We’re looking forward to working closely with you in the AI & Machine Learning space, soon!  
“Innovation is open to everyone, worldwide. With this global program we now have an important opportunity to support entrepreneurs everywhere in the world who are aiming to use AI to solve the biggest challenges.” Yossi Matias, VP of Engineering, Google

Posted by Roy Glasberg, Global Lead, Google Developers Launchpad

An Update to Open Images – Now with Bounding-Boxes



Last year we introduced Open Images, a collaborative release of ~9 million images annotated with labels spanning over 6000 object categories, designed to be a useful dataset for machine learning research. The initial release featured image-level labels automatically produced by a computer vision model similar to Google Cloud Vision API, for all 9M images in the training set, and a validation set of 167K images with 1.2M human-verified image-level labels.

Today, we introduce an update to Open Images, which adds ~2M bounding-boxes to the existing dataset, along with several million additional image-level labels. Details include:
  • 1.2M bounding-boxes around objects for 600 categories on the training set. These have been produced semi-automatically by an enhanced version of the technique outlined in [1], and are all human-verified.
  • Complete bounding-box annotation for all object instances of the 600 categories on the validation set, all manually drawn (830K boxes). The bounding-box annotations in the training and validation sets will enable research on object detection on this dataset. The 600 categories offer a broader range than those in the ILSVRC and COCO detection challenges, and include new objects such as fedora hat and snowman.
  • 4.3M human-verified image-level labels on the training set (over all categories). This will enable large-scale experiments on object classification, based on a clean training set with reliable labels.
Annotated images from the Open Images dataset. Left: FAMILY MAKING A SNOWMAN by mwvchamber. Right: STANZA STUDENTI.S.S. ANNUNZIATA by ersupalermo. Both images used under CC BY 2.0 license. See more examples here.
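As a rough illustration of how one might start working with the new boxes, here is a minimal pandas sketch. The file name and column names (ImageID, LabelName, XMin, XMax, YMin, YMax, with coordinates normalized to [0, 1]) are assumptions to verify against the release's documentation.

```python
import pandas as pd

# Hedged sketch: inspect the bounding-box CSV and convert one normalized box to
# pixel coordinates. File and column names are assumptions, not the official schema.
boxes = pd.read_csv("annotations-human-bbox.csv")
print(boxes.groupby("LabelName").size().sort_values(ascending=False).head(10))

row = boxes.iloc[0]
width, height = 1024, 768  # example image size; real sizes come from the image files
print(row["ImageID"],
      int(row["XMin"] * width), int(row["XMax"] * width),
      int(row["YMin"] * height), int(row["YMax"] * height))
```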
We hope that this update to Open Images will stimulate the broader research community to experiment with object classification and detection models, and facilitate the development and evaluation of new techniques.

References
[1] We don't need no bounding-boxes: Training object class detectors using only human verification, Papadopoulos, Uijlings, Keller, and Ferrari, CVPR 2016

Ask a question, get an answer in Google Analytics

What if getting answers about your key business metrics was as easy as asking a question in plain English? What if you could simply say, "How many new users did we have from organic search on mobile last week?" ― and get an answer right away?

Today, Google Analytics is taking a step toward that future.  Know what data you need and want it quickly? Just ask Google Analytics and get your answer.
This feature, which uses the same natural language processing technology available across Google products like Android and Search, is rolling out now and will become available in English to all Google Analytics users over the next few weeks.
The ability to ask questions is part of Analytics Intelligence, a set of features in Google Analytics that use machine learning to help you better understand and act on your analytics data. Analytics Intelligence also includes existing machine learning capabilities like automated insights (now available on both web and the mobile app), smart lists, smart goals, and session quality.

How it Works
We've talked to web analysts who say they spend half their time answering basic analytics questions for other people in their organization. In fact, a recent report from Forrester found that 57% of marketers find it difficult to give their stakeholders in different functions access to their data and insights. Asking questions in Analytics Intelligence can help everyone get their answers directly in the product ― so team members get what they need faster, and analysts can spend their valuable time on deeper research and discovery.
Try it! This short video will give you a feel for how it works:
“Analytics Intelligence enables those users who aren’t too familiar with Google Analytics to access and make use of the data within their business’ account. Democratising data in this way can only be a good thing for everyone involved in Google Analytics!”
Joe Whitehead, Analytics Consultant, Merkle | Periscopix


Beyond answering your questions, Analytics Intelligence also surfaces new opportunities for you through automated insights, now available in the web interface as well as in the mobile app. These insights can show spikes or drops in metrics like revenue or session duration, tipping you off to issues that you may need to investigate further. Insights may also present opportunities to improve key metrics by following specific recommendations. For example, a chance to improve bounce rate by reducing a page's load time, or the potential to boost conversion rate by adding a new keyword to your AdWords campaign.

To ask questions and get automated insights from Analytics Intelligence in our web interface, click the Intelligence button to open a side panel. In the Google Analytics mobile app for Android and iOS, tap the Intelligence icon in the upper right-hand corner of most screens. Check out this article to learn more about the types of questions you can ask today.

Help us Learn
Our Intelligence system gets even smarter over time as it learns which questions and insights users are interested in. In that spirit, we need your help: After you ask questions or look at insights, please leave feedback at the bottom of the card.

Your answers will help us train Analytics Intelligence to be more useful.

Our goal is to help you get more insights to more people, faster. That way everyone can get to the good stuff: creating amazing experiences that make customers happier and help you grow your business.
Happy Analyzing!