Tag Archives: machine learning

Harness the Power of Machine Learning in Your Browser with Deeplearn.js



Machine learning (ML) has become an increasingly powerful tool, one that can be applied to a wide variety of areas spanning object recognition, language translation, health and more. However, the development of ML systems is often restricted to those with computational resources and the technical expertise to work with commonly available ML libraries.

With PAIR — an initiative to study and redesign human interactions with ML — we want to open machine learning up to as many people as possible. In pursuit of that goal, we are excited to announce deeplearn.js 0.1.0, an open source WebGL-accelerated JavaScript library for machine learning that runs entirely in your browser, with no installations and no backend.
There are many reasons to bring machine learning into the browser. A client-side ML library can be a platform for interactive explanations, for rapid prototyping and visualization, and even for offline computation. And if nothing else, the browser is one of the world's most popular programming platforms.

While web machine learning libraries have existed for years (e.g., Andrej Karpathy's convnetjs), they have been limited by the speed of JavaScript, or have been restricted to inference rather than training (e.g., TensorFire). By contrast, deeplearn.js offers a significant speedup by exploiting WebGL to perform computations on the GPU, along with the ability to do full backpropagation.

The API mimics the structure of TensorFlow and NumPy, with a delayed execution model for training (like TensorFlow), and an immediate execution model for inference (like NumPy). We have also implemented versions of some of the most commonly-used TensorFlow operations. With the release of deeplearn.js, we will be providing tools to export weights from TensorFlow checkpoints, which will allow authors to import them into web pages for deeplearn.js inference.
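To make the immediate (NumPy-like) execution model concrete, here is a minimal sketch of what inference-style usage could look like. It is written from memory of the 0.1.0 release, so treat the class and method names (NDArrayMathGPU, Array1D.new, math.scope, and so on) as assumptions to check against the deeplearn.js documentation rather than as a definitive reference.

```typescript
// Minimal sketch of deeplearn.js-style immediate-mode math (API names assumed).
import {Array1D, NDArrayMathGPU, Scalar} from 'deeplearn';

const math = new NDArrayMathGPU();

// math.scope() tracks intermediate GPU textures so they can be freed when the
// scope ends.
math.scope((keep, track) => {
  const a = track(Array1D.new([1, 2, 3]));  // a small vector uploaded to the GPU
  const b = track(Scalar.new(2));           // a scalar

  // Compute b * a + a; the arithmetic runs on the GPU via WebGL shaders.
  const result = math.add(math.scalarTimesArray(b, a), a);

  // getValues() downloads the result back to a Float32Array on the CPU.
  console.log(result.getValues());          // -> Float32Array [3, 6, 9]
});
```

Training, by contrast, goes through the delayed Graph/Session API, where you first declare placeholders, variables, and operations and then run them in a session, much as in TensorFlow's graph mode.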

You can explore the potential of this library by training a convolutional neural network to recognize photos and handwritten digits — all in your browser without writing a single line of code.
We're releasing a series of demos that show deeplearn.js in action. Play with an image classifier that uses your webcam in real-time and watch the network’s internal representations of what it sees. Or generate abstract art videos at a smooth 60 frames per second. The deeplearn.js homepage contains these and other demos.

Our vision is that this library will significantly increase visibility and engagement with machine learning, giving developers access to powerful tools while simultaneously providing the everyday user with a way to interact with them. We’re looking forward to collaborating with the open source community to drive this vision forward.

Apply to Google Developers Launchpad Studio for AI & ML focused startups

The mission of Google Developers Launchpad is to enable startups from around the world to build great companies. In the last 4 years, we’ve learned a lot while supporting early and late-stage founders. From working with dynamic startups (such as teams applying Artificial Intelligence technology to solve transportation problems in Israel, improve tele-medicine in Brazil, and optimize online retail in India), we’ve learned that these startups require specialized services to help them scale.

So today, we’re launching a new initiative, Google Developers Launchpad Studio: a full-service studio that provides tailored technical and product support to Artificial Intelligence & Machine Learning startups, all in one place.

Whether you’re a 3-person team or an established post-Series B startup applying AI/ML to your product offering, we want to start connecting with you.

Applications to join Launchpad Studio are now open and you can apply here.

The global headquarters of Launchpad Studio will be based in San Francisco at Launchpad Space, with events and activities taking place in Tel Aviv and New York. We plan to expand our activities and events to Toronto, London, Bangalore, and Singapore soon.

As a member of the Studio program, you’ll find services tailored to your startup’s unique needs and challenges, such as:
  • Applied AI integration toolkits: Datasets, testing environments, rapid prototyping, simulation tools, and architecture troubleshooting.
  • Product validation support: Industry-specific proof of concept and pilots, as well as use case workshops with Fortune 500 industry practitioners and other experts.
  • Access to AI experts: Best practice advice from our global community of AI thought leaders, which includes Peter Norvig, Dan Ariely, Yossi Matias, Chris DiBona, and more.
  • Access to AI practitioners and investors: Interaction with some of the best AI and ML engineers, product managers, industry leaders and VCs from Google, Silicon Valley, and other international locations.
We’re looking forward to working closely with you in the AI & Machine Learning space soon!
“Innovation is open to everyone, worldwide. With this global program we now have an important opportunity to support entrepreneurs everywhere in the world who are aiming to use AI to solve the biggest challenges.” – Yossi Matias, VP of Engineering, Google

Posted by Roy Glasberg, Global Lead, Google Developers Launchpad


An Update to Open Images – Now with Bounding-Boxes



Last year we introduced Open Images, a collaborative release of ~9 million images annotated with labels spanning over 6000 object categories, designed to be a useful dataset for machine learning research. The initial release featured image-level labels automatically produced by a computer vision model similar to Google Cloud Vision API, for all 9M images in the training set, and a validation set of 167K images with 1.2M human-verified image-level labels.

Today, we introduce an update to Open Images, which adds a total of ~2M bounding-boxes to the existing dataset, along with several million additional image-level labels. Details include:
  • 1.2M bounding-boxes around objects for 600 categories on the training set. These have been produced semi-automatically by an enhanced version of the technique outlined in [1], and are all human-verified.
  • Complete bounding-box annotation for all object instances of the 600 categories on the validation set, all manually drawn (830K boxes). The bounding-box annotations in the training and validation sets will enable research on object detection on this dataset. The 600 categories offer a broader range than those in the ILSVRC and COCO detection challenges, and include new objects such as fedora hat and snowman.
  • 4.3M human-verified image-level labels on the training set (over all categories). This will enable large-scale experiments on object classification, based on a clean training set with reliable labels.
Annotated images from the Open Images dataset. Left: FAMILY MAKING A SNOWMAN by mwvchamber. Right: STANZA STUDENTI.S.S. ANNUNZIATA by ersupalermo. Both images used under CC BY 2.0 license. See more examples here.
We hope that this update to Open Images will stimulate the broader research community to experiment with object classification and detection models, and facilitate the development and evaluation of new techniques.

References
[1] We don't need no bounding-boxes: Training object class detectors using only human verification, Papadopoulos, Uijlings, Keller, and Ferrari, CVPR 2016

Ask a question, get an answer in Google Analytics

What if getting answers about your key business metrics was as easy as asking a question in plain English? What if you could simply say, "How many new users did we have from organic search on mobile last week?" ― and get an answer right away?

Today, Google Analytics is taking a step toward that future.  Know what data you need and want it quickly? Just ask Google Analytics and get your answer.
This feature, which uses the same natural language processing technology available across Google products like Android and Search, is rolling out now and will become available in English to all Google Analytics users over the next few weeks.
The ability to ask questions is part of Analytics Intelligence, a set of features in Google Analytics that use machine learning to help you better understand and act on your analytics data. Analytics Intelligence also includes existing machine learning capabilities like automated insights (now available on both web and the mobile app), smart lists, smart goals, and session quality.

How it Works
We've talked to web analysts who say they spend half their time answering basic analytics questions for other people in their organization. In fact, a recent report from Forrester found that 57% of marketers find it difficult to give their stakeholders in different functions access to their data and insights. Asking questions in Analytics Intelligence can help everyone get their answers directly in the product ― so team members get what they need faster, and analysts can spend their valuable time on deeper research and discovery.
Try it! This short video will give you a feel for how it works:
“Analytics Intelligence enables those users who aren’t too familiar with Google Analytics to access and make use of the data within their business’ account. Democratising data in this way can only be a good thing for everyone involved in Google Analytics!”
Joe Whitehead, Analytics Consultant, Merkle | Periscopix


Beyond answering your questions, Analytics Intelligence also surfaces new opportunities for you through automated insights, now available in the web interface as well as in the mobile app. These insights can show spikes or drops in metrics like revenue or session duration, tipping you off to issues that you may need to investigate further. Insights may also present opportunities to improve key metrics by following specific recommendations: for example, improving bounce rate by reducing a page's load time, or boosting conversion rate by adding a new keyword to your AdWords campaign.

To ask questions and get automated insights from Analytics Intelligence in our web interface, click the Intelligence button to open a side panel. In the Google Analytics mobile app for Android and iOS, tap the Intelligence icon in the upper right-hand corner of most screens. Check out this article to learn more about the types of questions you can ask today.

Help us Learn
Our Intelligence system gets even smarter over time as it learns which questions and insights users are interested in. In that spirit, we need your help: After you ask questions or look at insights, please leave feedback at the bottom of the card.

Your answers will help us train Analytics Intelligence to be more useful.

Our goal is to help you get more insights to more people, faster. That way everyone can get to the good stuff: creating amazing experiences that make customers happier and help you grow your business.
Happy Analyzing!

Facets: An Open Source Visualization Tool for Machine Learning Training Data

Cross-posted on the Google Research Blog

Getting the best results out of a machine learning (ML) model requires that you truly understand your data. However, ML datasets can contain hundreds of millions of data points, each consisting of hundreds (or even thousands) of features, making it nearly impossible to understand an entire dataset in an intuitive fashion. Visualization can help unlock nuances and insights in large datasets. A picture may be worth a thousand words, but an interactive visualization can be worth even more.

Working with the PAIR initiative, we’ve released Facets, an open source visualization tool to aid in understanding and analyzing ML datasets. Facets consists of two visualizations that allow users to see a holistic picture of their data at different granularities. Get a sense of the shape of each feature of the data using Facets Overview, or explore a set of individual observations using Facets Dive. These visualizations allow you to debug your data, which, in machine learning, is as important as debugging your model. They can easily be used inside of Jupyter notebooks or embedded into webpages. In addition to the open source code, we've also created a Facets demo website. This website allows anyone to visualize their own datasets directly in the browser without the need for any software installation or setup, and without the data ever leaving your computer.

Facets Overview

Facets Overview automatically gives users a quick understanding of the distribution of values across the features of their datasets. Multiple datasets, such as a training set and a test set, can be compared on the same visualization. Common data issues that can hamper machine learning are pushed to the forefront, such as unexpected feature values, features with high percentages of missing values, features with unbalanced distributions, and feature distribution skew between datasets.
Facets Overview visualization of the six numeric features of the UCI Census datasets[1]. The features are sorted by non-uniformity, with the feature with the most non-uniform distribution at the top. Numbers in red indicate possible trouble spots, in this case numeric features with a high percentage of values set to 0. The histograms at right allow you to compare the distributions between the training data (blue) and test data (orange).

Facets Overview visualization showing two of the nine categorical features of the UCI Census datasets[1]. The features are sorted by distribution distance, with the feature with the biggest skew between the training (blue) and test (orange) datasets at the top. Notice in the “Target” feature that the label values differ between the training and test datasets, due to a trailing period in the test set (“<=50K” vs “<=50K.”). This can be seen in the chart for the feature and also in the entries in the “top” column of the table. This label mismatch would cause a model trained and tested on this data to not be evaluated correctly.
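To make the kind of per-feature summary Overview computes more concrete, here is a small, generic sketch (not Facets code, and not its actual statistics pipeline) that derives a few of the quantities mentioned above for one numeric feature in two splits: the fraction of missing values, the fraction of zeros, and the mean, which can then be compared between training and test data.

```typescript
// Generic sketch (not the Facets implementation): per-feature statistics of the
// kind Facets Overview visualizes, for one numeric feature in two dataset splits.
interface FeatureStats {
  count: number;            // total number of rows
  missingFraction: number;  // fraction of rows with no value
  zeroFraction: number;     // fraction of present values equal to 0
  mean: number;             // mean of present values
}

function summarize(values: Array<number | null>): FeatureStats {
  const present = values.filter((v): v is number => v !== null && !Number.isNaN(v));
  const missing = values.length - present.length;
  const zeros = present.filter((v) => v === 0).length;
  const mean = present.reduce((s, v) => s + v, 0) / Math.max(present.length, 1);
  return {
    count: values.length,
    missingFraction: missing / Math.max(values.length, 1),
    zeroFraction: zeros / Math.max(present.length, 1),
    mean,
  };
}

// Comparing the same feature across splits makes skew and trouble spots
// (e.g. a high zero fraction) easy to see, which is what Overview charts.
const trainHoursPerWeek = summarize([40, 50, null, 38, 0, 45]);
const testHoursPerWeek = summarize([0, 0, 41, 35, 60, null]);
console.log({trainHoursPerWeek, testHoursPerWeek});
```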

Facets Dive

Facets Dive provides an easy-to-customize, intuitive interface for exploring the relationship between the data points across the different features of a dataset. With Facets Dive, you control the position, color and visual representation of each data point based on its feature values. If the data points have images associated with them, the images can be used as the visual representations.
Facets Dive visualization showing all 16281 data points in the UCI Census test dataset[1]. The animation shows a user coloring the data points by one feature (“Relationship”), faceting in one dimension by a continuous feature (“Age”) and then faceting in another dimension by a discrete feature (“Marital Status”).
Facets Dive visualization of a large number of face drawings from the “Quick, Draw!” Dataset, showing the relationship between the number of strokes and points in the drawings and the ability for the “Quick, Draw!” classifier to correctly categorize them as faces.

Fun Fact: In large datasets, such as the CIFAR-10 dataset[2], a small human labelling error can easily go unnoticed. We inspected the CIFAR-10 dataset with Dive and were able to catch a frog-cat – an image of a frog that had been incorrectly labelled as a cat!
Exploration of the CIFAR-10 dataset using Facets Dive. Here we facet the ground truth labels by row and the predicted labels by column. This produces a confusion matrix view, allowing us to drill into particular kinds of misclassifications. In this particular case, the ML model incorrectly labels a small percentage of true cats as frogs. By placing the real images in the confusion matrix, we find that one of these "true cats" that the model predicted to be a frog is, on visual inspection, actually a frog. With Facets Dive, we can determine that this was not a true misclassification by the model, but incorrectly labeled data in the dataset.
Can you spot the frog-cat?
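For readers who want to recreate this kind of view outside Facets, here is a small, generic sketch (not Facets code) of the faceting idea behind it: grouping examples by their (ground-truth, predicted) label pair yields exactly the cells of a confusion matrix, and keeping the example ids per cell is what lets you drill into individual cases such as the frog-cat.

```typescript
// Generic sketch (not the Facets implementation): bucket examples by their
// (ground-truth, predicted) label pair, i.e. a confusion-matrix view that
// remembers which examples fell into each cell so they can be inspected.
interface Example {
  id: number;
  trueLabel: string;
  predictedLabel: string;
}

function confusionCells(examples: Example[]): Map<string, number[]> {
  const cells = new Map<string, number[]>();
  for (const ex of examples) {
    const key = `${ex.trueLabel} -> ${ex.predictedLabel}`;
    const bucket = cells.get(key) ?? [];
    bucket.push(ex.id);  // keep ids so off-diagonal cells can be drilled into
    cells.set(key, bucket);
  }
  return cells;
}

const examples: Example[] = [
  {id: 0, trueLabel: 'cat', predictedLabel: 'cat'},
  {id: 1, trueLabel: 'cat', predictedLabel: 'frog'},  // a candidate "frog-cat"
  {id: 2, trueLabel: 'frog', predictedLabel: 'frog'},
];
console.log(confusionCells(examples));  // the 'cat -> frog' cell holds id 1
```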
We’ve gotten great value out of Facets inside of Google and are excited to share the visualizations with the world. We hope they can help you discover new and interesting things about your data that lead you to create more powerful and accurate machine learning models. And since they are open source, you can customize the visualizations for your specific needs or contribute to the project to help us all better understand our data. If you have feedback about your experience with Facets, please let us know what you think.

By James Wexler, Senior Software Engineer, Google Big Picture Team

Acknowledgments

This work is a collaboration between Mahima Pushkarna, James Wexler and Jimbo Wilson, with input from the entire Big Picture team. We would also like to thank Justine Tunney for providing us with the build tooling.

References

[1] Lichman, M. (2013). UCI Machine Learning Repository [http://archive.ics.uci.edu/ml/datasets/Census+Income]. Irvine, CA: University of California, School of Information and Computer Science

[2] Learning Multiple Layers of Features from Tiny Images, Alex Krizhevsky (2009).


Using Deep Learning to Create Professional-Level Photographs



Machine learning (ML) excels in many areas with well-defined goals. Tasks where there exists a right or wrong answer help with the training process and allow the algorithm to achieve its desired goal, whether it be correctly identifying objects in images or providing a suitable translation from one language to another. However, there are areas where objective evaluations are not available. For example, whether a photograph is beautiful is measured by its aesthetic value, which is a highly subjective concept.
A professional(?) photograph of Jasper National Park, Canada.
To explore how ML can learn subjective concepts, we introduce an experimental deep-learning system for artistic content creation. It mimics the workflow of a professional photographer, roaming landscape panoramas from Google Street View and searching for the best composition, then carrying out various postprocessing operations to create an aesthetically pleasing image. Our virtual photographer “travelled” ~40,000 panoramas in areas such as the Alps, Banff and Jasper National Parks, Big Sur in California, and Yellowstone National Park, and returned with creations that are quite impressive, some even approaching professional quality, as judged by professional photographers.

Training the Model
While aesthetics can be modelled using datasets like AVA, using them naively to enhance photos may miss important aspects of aesthetics and, for example, produce over-saturated images. Using supervised learning to learn multiple aspects of aesthetics properly, however, may require a labelled dataset that is intractable to collect.

Our approach relies only on a collection of professional-quality photos, without before/after image pairs or any additional labels. It breaks down aesthetics into multiple aspects automatically, each of which is learned individually with negative examples generated by a coupled image operation. By keeping these image operations semi-“orthogonal”, we can enhance a photo’s composition, saturation/HDR level, and dramatic lighting with fast and separable optimizations:
A panorama (a) is cropped into (b), with saturation and HDR strength enhanced in (c), and with dramatic mask applied in (d). Each step is guided by one learned aspect of aesthetics.
A traditional image filter was used to generate negative training examples for saturation, HDR detail and composition. We also introduce a special operation named dramatic mask, which was created jointly while learning the concept of dramatic lighting. The negative examples were generated by applying a combination of image filters that modify brightness randomly on professional photos, degrading their appearance. For the training we use a generative adversarial network (GAN), where a generative model creates a mask to fix lighting for negative examples, while a discriminative model tries to distinguish enhanced results from the real professional ones. Unlike shape-fixed filters such as vignette, dramatic mask adds content-aware brightness adjustment to a photo. The competitive nature of GAN training leads to good variations of such suggestions. You can read more about the training details in our paper.
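For readers who prefer the adversarial setup in symbols: a standard GAN objective adapted to this description might look as follows, where $x$ is a professional photo, $\tilde{x}$ a brightness-degraded negative example, $G$ the generator that produces a dramatic mask, and $D$ the discriminator. Applying the mask elementwise ($\odot$) is an illustrative assumption; the paper's exact loss may differ.

$$
\min_{G}\,\max_{D}\;\;
\mathbb{E}_{x \sim p_{\mathrm{pro}}}\big[\log D(x)\big]
\;+\;
\mathbb{E}_{\tilde{x} \sim p_{\mathrm{degraded}}}\big[\log\big(1 - D\big(\tilde{x} \odot G(\tilde{x})\big)\big)\big]
$$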

Results
Some creations of our system from Google Street View are shown below. As you can see, the application of the trained aesthetic filters creates some dramatic results (including the image we started this post with!):
Jasper National Park, Canada.
Interlaken, Switzerland.
Park Parco delle Orobie Bergamasche, Italy.
Jasper National Park, Canada.
Professional Evaluation
To judge how successful our algorithm was, we designed a “Turing-test”-like experiment: we mixed our creations with other photos of varying quality and showed them to several professional photographers. They were instructed to assign each photo a quality score, with the following meaning:
  • 1: Point-and-shoot without consideration for composition, lighting etc.
  • 2: Good photos from general population without a background in photography. Nothing artistic stands out.
  • 3: Semi-pro. Great photos showing clear artistic aspects. The photographer is on the right track to becoming a professional.
  • 4: Pro.
In the following chart, each curve shows scores from professional photographers for images within a certain predicted score range. For our creations with a high predicted score, about 40% of the ratings they received are at the “semi-pro” to “pro” levels.
Scores received from professional photographers for photos with different predicted scores.
Future Work
The Street View panoramas served as a test bed for our project. Someday this technique might even help you take better photos in the real world. We have compiled a showcase of photos our system created to our satisfaction. If you see a photo you like, you can click on it to bring up a nearby Street View panorama. Would you make the same decision if you were there holding the camera at that moment?

Acknowledgements
This work was done by Hui Fang and Meng Zhang from Machine Perception at Google Research. We would like to thank Vahid Kazemi for his earlier work in predicting AVA scores using Inception network, and Sagarika Chalasani, Nick Beato, Bryan Klingner and Rupert Breheny for their help in processing Google Street View panoramas. We would like to thank Peyman Milanfar, Tomas Izo, Christian Szegedy, Jon Barron and Sergey Ioffe for their helpful reviews and comments. Huge thanks to our anonymous professional photographers!

Introducing Gradient Ventures

AI-powered technology holds a lot of promise—from improving patient health to making data centers more efficient. But while we’ve seen some amazing applications of AI so far, we know there are many more out there that haven’t even been imagined yet. And sometimes, these new ideas need support to flourish.

That’s why we’re announcing Gradient Ventures, a new venture fund from Google with technical mentorship for early-stage startups focused on artificial intelligence. Through Gradient, we’ll provide portfolio companies with capital, resources, and dedicated access to experts and bootcamps in AI. We’ll take a minority stake in the startups in which we invest.

Many members of our team are engineers, so we’re familiar with the journey from big idea to product launch. The goal is to help our portfolio companies overcome engineering challenges to create products that will apply artificial intelligence to today’s challenges and those we’ll face in the future.

Our portfolio is already growing, and our first companies are making progress, including Algorithmia, a marketplace for algorithms and functions, and Cogniac, a suite of tools used to create and manage visual models.

Through AI, yesterday's science fiction is becoming today's nonfiction. There's everything to reimagine as we usher in this new era of technology—and we're excited to work with entrepreneurs to start building it.

The Google Brain Residency Program — One Year Later



“Coming from a background in statistics, physics, and chemistry, the Google Brain Residency was my first exposure to both deep learning and serious programming. I enjoyed the autonomy that I was given to research diverse topics of my choosing: deep learning for computer vision and language, reinforcement learning, and theory. I originally intended to pursue a statistics PhD but my experience here spurred me to enroll in the Stanford CS program starting this fall!”
- Melody Guan, 2016 Google Brain Residency Alumna

This month marks the end of an incredibly successful year for our first class of the Google Brain Residency Program. This one-year program was created as an opportunity for individuals from diverse educational backgrounds and experiences to dive into research in machine learning and deep learning. Over the past year, the Residents familiarized themselves with the literature, designed and implemented experiments at Google scale, and engaged in cutting edge research in a wide variety of subjects ranging from theory to robotics to music generation.

To date, the inaugural class of Residents have published over 30 papers at leading machine learning publication venues such as ICLR (15), ICML (11), CVPR (3), EMNLP (2), RSS, GECCO, ISMIR, ISMB and Cosyne. An additional 18 papers are currently under review at NIPS, ICCV, BMVC and Nature Methods. Two of the above papers were published in Distill, exploring how deconvolution causes checkerboard artifacts and presenting ways of visualizing a generative model of handwriting.
A Distill article by residents interactively explores how a neural network generates handwriting.
A system that explores how robots can learn to imitate human motion from observation. For more details, see “Time-Contrastive Networks: Self-Supervised Learning from Multi-View Observation” (co-authored by Resident Corey Lynch, along with P. Sermanet, J. Hsu, and S. Levine; accepted to a CVPR 2017 workshop).
A model that uses reinforcement learning to train distributed deep learning networks at large scale by optimizing the assignment of computations to hardware devices. For more details, see “Device Placement Optimization with Reinforcement Learning” (co-authored by Residents Azalia Mirhoseini and Hieu Pham, along with Q. Le, B. Steiner, R. Larsen, Y. Zhou, N. Kumar, M. Norouzi, S. Bengio, and J. Dean; submitted to ICML 2017).
An approach to automate the process of discovering optimization methods, with a focus on deep learning architectures. The final version of the paper “Neural Optimizer Search with Reinforcement Learning” (co-authored by Residents Irwan Bello and Barret Zoph, along with V. Vasudevan and Q. Le; submitted to ICML 2017) is coming soon.
Residents have also made significant contributions to the open source community with general-purpose sequence-to-sequence models (used for example in translation), music synthesis, mimicking human sketching, subsampling a sequence for model training, an efficient “attention” mechanism for models, and time series analysis (particularly for neuroscience).

The end of the program year marks our Residents embarking on the next stages in their careers. Many are continuing their research careers on the Google Brain team as full-time employees. Others have chosen to enter top machine learning Ph.D. programs at schools such as Stanford University, UC Berkeley, Cornell University, Oxford University, NYU, the University of Toronto, and CMU. We could not be more proud to see where their hard work and experiences will take them next!

As we “graduate” our first class, this week we welcome our next class of 35 incredibly talented Residents, who join us from a wide range of experiences and educational backgrounds. We can’t wait to see how they will build on the successes of our first class and continue to push the team in new and exciting directions. We look forward to another exciting year of research and innovation!

Applications to the 2018 Residency program will open in September 2017. To learn more about the program, visit g.co/brainresidency.