Tag Archives: conferences

Google at NAACL



This week, New Orleans, LA hosted the conference of the North American Chapter of the Association for Computational Linguistics (NAACL), a venue for the latest research on computational approaches to understanding natural language. Google once again had a strong presence, presenting research on a diverse set of topics, including dialog, summarization, machine translation, and linguistic analysis. In addition to contributing publications, Googlers served as committee members, workshop organizers, and panelists, and delivered one of the conference keynotes. We also provided telepresence robots, which enabled researchers who couldn't attend in person to present their work remotely at the Widening Natural Language Processing Workshop (WiNLP).
Googler Margaret Mitchell and a researcher using our telepresence robots to remotely present their work at the WiNLP workshop.
This year NAACL also introduced a new Test of Time Award recognizing influential papers published between 2002 and 2012. We are happy and honored that all three papers receiving the award (listed below with a short summary) were co-authored by researchers who are now at Google (in blue):

BLEU: a Method for Automatic Evaluation of Machine Translation (2002)
Kishore Papineni, Salim Roukos, Todd Ward, Wei-Jing Zhu
Before the introduction of the BLEU metric, comparing Machine Translation (MT) models required expensive human evaluation. While human evaluation is still the gold standard, the strong correlation of BLEU with human judgment has permitted much faster experiment cycles. BLEU has been a reliable measure of progress, persisting through multiple paradigm shifts in MT.
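The core of BLEU is a clipped n-gram precision combined with a brevity penalty. The sketch below is a toy sentence-level version, assuming a single reference and uniform weights over 1- to 4-grams; real evaluations aggregate counts over a whole corpus and apply smoothing (e.g., via the sacreBLEU package), so treat this purely as illustration.

```python
# Toy sentence-level BLEU: clipped n-gram precision + brevity penalty.
import math
from collections import Counter

def ngrams(tokens, n):
    """Multiset of n-grams in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, reference, max_n=4):
    precisions = []
    for n in range(1, max_n + 1):
        cand, ref = ngrams(candidate, n), ngrams(reference, n)
        clipped = sum((cand & ref).values())  # matches, clipped by reference counts
        precisions.append(clipped / max(sum(cand.values()), 1))
    if min(precisions) == 0:
        return 0.0  # geometric mean is zero if any precision is zero
    # Brevity penalty: discourage candidates shorter than the reference.
    bp = min(1.0, math.exp(1 - len(reference) / len(candidate)))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)

candidate = "the cat is on a mat".split()
reference = "the cat is on the mat".split()
score = bleu(candidate, reference)  # between 0 and 1; 1.0 for an exact match
```

The clipping step (the `cand & ref` multiset intersection) is what prevents a candidate from inflating its score by repeating a high-frequency reference word.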

Discriminative Training Methods for Hidden Markov Models: Theory and Experiments with Perceptron Algorithms (2002)
Michael Collins
The structured perceptron is a generalization of the classical perceptron to structured prediction problems, where the number of possible "labels" for each input is a very large set, and each label has rich internal structure. Canonical examples are speech recognition, machine translation, and syntactic parsing. The structured perceptron was one of the first algorithms proposed for structured prediction, and has been shown to be effective in spite of its simplicity.
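The update rule itself is strikingly simple: decode with the current weights, and on a mistake, add the gold structure's features and subtract the predicted structure's. The sketch below is a toy sequence-labeling version; the feature map, exhaustive decoder, and two-sentence dataset are invented stand-ins, and real systems replace the exhaustive argmax with Viterbi or beam search.

```python
# Toy structured perceptron for sequence labeling (after Collins, 2002).
import itertools
from collections import Counter

def phi(x, y):
    """Joint feature vector: counts of (token, label) pairs."""
    feats = Counter()
    for token, label in zip(x, y):
        feats[(token, label)] += 1
    return feats

def predict(weights, x, labels):
    """Exhaustive argmax over all label sequences (toy-sized inputs only)."""
    best, best_score = None, float("-inf")
    for y in itertools.product(labels, repeat=len(x)):
        score = sum(weights[f] * v for f, v in phi(x, y).items())
        if score > best_score:
            best, best_score = y, score
    return best

def train(data, labels, epochs=5):
    weights = Counter()
    for _ in range(epochs):
        for x, y in data:
            y_hat = predict(weights, x, labels)
            if y_hat != tuple(y):
                # On a mistake: promote gold features, demote predicted ones.
                for f, v in phi(x, y).items():
                    weights[f] += v
                for f, v in phi(x, y_hat).items():
                    weights[f] -= v
    return weights

data = [(["the", "dog"], ["DET", "NOUN"]),
        (["a", "cat"], ["DET", "NOUN"])]
weights = train(data, labels=["DET", "NOUN"])
```

Because the update only ever needs an argmax over structures, any decoder that can find the highest-scoring output (exact or approximate) plugs in unchanged.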

Thumbs up?: Sentiment Classification using Machine Learning Techniques (2002)
Bo Pang, Lillian Lee, Shivakumar Vaithyanathan
This paper is amongst the first works in sentiment analysis and helped define the subfield of sentiment and opinion analysis and review mining. The paper introduced a new way to look at document classification, developed the first solutions to it using supervised machine learning methods, and discussed insights and challenges. This paper also had significant data impact -- the movie review dataset has supported much of the early work in this area and is still one of the commonly used benchmark evaluation datasets.
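One of the paper's supervised baselines, Naive Bayes over unigram presence features, can be sketched in a few lines. The four one-line "reviews" below are invented stand-ins for illustration; the actual dataset contained hundreds of full-length movie reviews per class.

```python
# Toy bag-of-words Naive Bayes sentiment classifier (presence features).
import math
from collections import Counter, defaultdict

def train_nb(docs):
    """docs: list of (tokens, label). Returns log-priors and log-likelihoods."""
    label_counts = Counter(label for _, label in docs)
    word_counts = defaultdict(Counter)
    for tokens, label in docs:
        word_counts[label].update(set(tokens))  # word presence, not frequency
    vocab = {w for counts in word_counts.values() for w in counts}
    log_prior = {l: math.log(n / len(docs)) for l, n in label_counts.items()}
    # Add-one smoothed Bernoulli likelihoods per (word, label).
    log_like = {
        l: {w: math.log((word_counts[l][w] + 1) / (label_counts[l] + 2))
            for w in vocab}
        for l in label_counts
    }
    return log_prior, log_like

def classify(model, tokens):
    log_prior, log_like = model
    scores = {
        l: log_prior[l] + sum(ll.get(w, 0.0) for w in set(tokens))
        for l, ll in log_like.items()
    }
    return max(scores, key=scores.get)

docs = [("a great touching film".split(), "pos"),
        ("wonderful acting great plot".split(), "pos"),
        ("a dull boring film".split(), "neg"),
        ("boring plot terrible acting".split(), "neg")]
model = train_nb(docs)
```

Using presence rather than frequency echoes one of the paper's findings: for sentiment, whether a word occurs at all is often more informative than how often it occurs.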

If you attended NAACL 2018, we hope that you stopped by the booth to check out some demos, meet our researchers and discuss projects and opportunities at Google that go into solving interesting problems for billions of people. You can learn more about Google research presented at NAACL 2018 below (Googlers highlighted in blue), and visit the Google AI Language Team page.

Keynote
Google Assistant or My Assistant? Towards Personalized Situated Conversational Agents
Dilek Hakkani-Tür

Publications
Bootstrapping a Neural Conversational Agent with Dialogue Self-Play, Crowdsourcing and On-Line Reinforcement Learning
Pararth Shah, Dilek Hakkani-Tür, Bing Liu, Gokhan Tür

SHAPED: Shared-Private Encoder-Decoder for Text Style Adaptation
Ye Zhang, Nan Ding, Radu Soricut

Olive Oil is Made of Olives, Baby Oil is Made for Babies: Interpreting Noun Compounds Using Paraphrases in a Neural Model
Vered Shwartz, Chris Waterson

Are All Languages Equally Hard to Language-Model?
Ryan Cotterell, Sebastian J. Mielke, Jason Eisner, Brian Roark

Self-Attention with Relative Position Representations
Peter Shaw, Jakob Uszkoreit, Ashish Vaswani

Dialogue Learning with Human Teaching and Feedback in End-to-End Trainable Task-Oriented Dialogue Systems
Bing Liu, Gokhan Tür, Dilek Hakkani-Tür, Pararth Shah, Larry Heck

Workshops
Subword & Character Level Models in NLP
Organizers: Manaal Faruqui, Hinrich Schütze, Isabel Trancoso, Yulia Tsvetkov, Yadollah Yaghoobzadeh

Storytelling Workshop
Organizers: Margaret Mitchell, Ishan Misra, Ting-Hao 'Kenneth' Huang, Frank Ferraro

Ethics in NLP
Organizers: Michael Strube, Dirk Hovy, Margaret Mitchell, Mark Alfano

NAACL HLT Panels
Careers in Industry
Participants: Philip Resnik (moderator), Jason Baldridge, Laura Chiticariu, Marie Mateer, Dan Roth

Ethics in NLP
Participants: Dirk Hovy (moderator), Margaret Mitchell, Vinodkumar Prabhakaran, Mark Yatskar, Barbara Plank

Source: Google AI Blog


Announcing an updated YouTube-8M, and the 2nd YouTube-8M Large-Scale Video Understanding Challenge and Workshop



Last year, we organized the first YouTube-8M Large-Scale Video Understanding Challenge with Kaggle, in which 742 teams consisting of 946 individuals from 60 countries used the YouTube-8M dataset (2017 edition) to develop classification algorithms which accurately assign video-level labels. The purpose of the competition was to accelerate improvements in large-scale video understanding, representation learning, noisy data modeling, transfer learning and domain adaptation approaches that can help improve the machine-learning models that classify video. In addition to the competition, we hosted an affiliated workshop at CVPR'17, inviting the competition's top performers and other researchers to share their ideas on how to advance the state of the art in video understanding.

As a continuation of these efforts to accelerate video understanding, we are excited to announce another update to the YouTube-8M dataset, a new Kaggle video understanding challenge and an affiliated 2nd Workshop on YouTube-8M Large-Scale Video Understanding, to be held at the 2018 European Conference on Computer Vision (ECCV'18).
An Updated YouTube-8M Dataset (2018 Edition)
Our YouTube-8M (2018 edition) features a major improvement in the quality of annotations, obtained using a machine learning system that combines audio-visual content with title, description and other metadata to provide more accurate ground truth annotations. The updated version contains 6.1 million URLs, labeled with a vocabulary of 3,862 visual entities, with each video annotated with one or more labels and an average of 3 labels per video. We have also updated the starter code, with updated instructions for downloading and training TensorFlow video annotation models on the dataset.

The 2nd YouTube-8M Video Understanding Challenge
The 2nd YouTube-8M Video Understanding Challenge invites participants to build audio-visual content classification models using YouTube-8M as training data, and then to label an unknown subset of test videos. Unlike last year, we impose a hard limit on model size, encouraging participants to advance a single model within a tight budget rather than assembling as many models as possible. Each of the top 5 teams will be awarded $5,000 to support their travel to Munich to attend ECCV'18. For details, please visit the Kaggle competition page.

The 2nd Workshop on YouTube-8M Large-Scale Video Understanding
To be held at ECCV'18, the workshop will consist of invited talks by distinguished researchers, as well as presentations by top-performing challenge participants in order to facilitate the exchange of ideas. We encourage those who wish to attend to submit papers describing their research, experiments, or applications based on the YouTube-8M dataset, including papers summarizing their participation in the challenge above. Please refer to the workshop page for more details.

It is our hope that this update to the dataset, along with the new challenge and workshop, will continue to advance the research in large-scale video understanding. We hope you will join us again!

Acknowledgements
This post reflects the work of many machine perception researchers including Sami Abu-El-Haija, Ke Chen, Nisarg Kothari, Joonseok Lee, Hanhan Li, Paul Natsev, Sobhan Naderi Parizi, Rahul Sukthankar, George Toderici, Balakrishnan Varadarajan, as well as Sohier Dane, Julia Elliott, Wendy Kan and Walter Reade from Kaggle. We are also grateful for the support and advice from our partners at YouTube.

Source: Google AI Blog


Google at ICLR 2018



This week, Vancouver, Canada hosts the 6th International Conference on Learning Representations (ICLR 2018), a conference focused on how one can learn meaningful and useful representations of data for machine learning. ICLR includes conference and workshop tracks, with invited talks along with oral and poster presentations of some of the latest research on deep learning, metric learning, kernel learning, compositional models, non-linear structured prediction, and issues regarding non-convex optimization.

At the forefront of innovation in cutting-edge technology in neural networks and deep learning, Google focuses on both theory and application, developing learning approaches to understand and generalize. As Platinum Sponsor of ICLR 2018, Google will have a strong presence with over 130 researchers attending, contributing to and learning from the broader academic research community by presenting papers and posters, in addition to participating on organizing committees and in workshops.

If you are attending ICLR 2018, we hope you'll stop by our booth and chat with our researchers about the projects and opportunities at Google that go into solving interesting problems for billions of people. You can also learn more about our research being presented at ICLR 2018 in the list below (Googlers highlighted in blue).

Senior Program Chairs include:
Tara Sainath

Steering Committee includes:
Hugo Larochelle

Oral Contributions
Wasserstein Auto-Encoders
Ilya Tolstikhin, Olivier Bousquet, Sylvain Gelly, Bernhard Scholkopf

On the Convergence of Adam and Beyond (Best Paper Award)
Sashank J. Reddi, Satyen Kale, Sanjiv Kumar

Ask the Right Questions: Active Question Reformulation with Reinforcement Learning
Christian Buck, Jannis Bulian, Massimiliano Ciaramita, Wojciech Gajewski, Andrea Gesmundo, Neil Houlsby, Wei Wang

Beyond Word Importance: Contextual Decompositions to Extract Interactions from LSTMs
W. James Murdoch, Peter J. Liu, Bin Yu

Conference Posters
Boosting the Actor with Dual Critic
Bo Dai, Albert Shaw, Niao He, Lihong Li, Le Song

MaskGAN: Better Text Generation via Filling in the _______
William Fedus, Ian Goodfellow, Andrew M. Dai

Scalable Private Learning with PATE
Nicolas Papernot, Shuang Song, Ilya Mironov, Ananth Raghunathan, Kunal Talwar, Ulfar Erlingsson

Deep Gradient Compression: Reducing the Communication Bandwidth for Distributed Training
Yujun Lin, Song Han, Huizi Mao, Yu Wang, William J. Dally

Flipout: Efficient Pseudo-Independent Weight Perturbations on Mini-Batches
Yeming Wen, Paul Vicol, Jimmy Ba, Dustin Tran, Roger Grosse

Latent Constraints: Learning to Generate Conditionally from Unconditional Generative Models
Adam Roberts, Jesse Engel, Matt Hoffman

Multi-Mention Learning for Reading Comprehension with Neural Cascades
Swabha Swayamdipta, Ankur P. Parikh, Tom Kwiatkowski

QANet: Combining Local Convolution with Global Self-Attention for Reading Comprehension
Adams Wei Yu, David Dohan, Thang Luong, Rui Zhao, Kai Chen, Mohammad Norouzi, Quoc V. Le

Sensitivity and Generalization in Neural Networks: An Empirical Study
Roman Novak, Yasaman Bahri, Daniel A. Abolafia, Jeffrey Pennington, Jascha Sohl-Dickstein

Action-dependent Control Variates for Policy Optimization via Stein Identity
Hao Liu, Yihao Feng, Yi Mao, Dengyong Zhou, Jian Peng, Qiang Liu

An Efficient Framework for Learning Sentence Representations
Lajanugen Logeswaran, Honglak Lee

Fidelity-Weighted Learning
Mostafa Dehghani, Arash Mehrjou, Stephan Gouws, Jaap Kamps, Bernhard Schölkopf

Generating Wikipedia by Summarizing Long Sequences
Peter J. Liu, Mohammad Saleh, Etienne Pot, Ben Goodrich, Ryan Sepassi, Lukasz Kaiser, Noam Shazeer

Matrix Capsules with EM Routing
Geoffrey Hinton, Sara Sabour, Nicholas Frosst

Temporal Difference Models: Model-Free Deep RL for Model-Based Control
Sergey Levine, Shixiang Gu, Murtaza Dalal, Vitchyr Pong

Deep Neural Networks as Gaussian Processes
Jaehoon Lee, Yasaman Bahri, Roman Novak, Samuel L. Schoenholz, Jeffrey Pennington, Jascha Sohl-Dickstein

Many Paths to Equilibrium: GANs Do Not Need to Decrease a Divergence at Every Step
William Fedus, Mihaela Rosca, Balaji Lakshminarayanan, Andrew M. Dai, Shakir Mohamed, Ian Goodfellow

Initialization Matters: Orthogonal Predictive State Recurrent Neural Networks
Krzysztof Choromanski, Carlton Downey, Byron Boots

Learning Differentially Private Recurrent Language Models
H. Brendan McMahan, Daniel Ramage, Kunal Talwar, Li Zhang

Learning Latent Permutations with Gumbel-Sinkhorn Networks
Gonzalo Mena, David Belanger, Scott Linderman, Jasper Snoek

Leave no Trace: Learning to Reset for Safe and Autonomous Reinforcement Learning
Benjamin Eysenbach, Shixiang Gu, Julian Ibarz, Sergey Levine

Meta-Learning for Semi-Supervised Few-Shot Classification
Mengye Ren, Eleni Triantafillou, Sachin Ravi, Jake Snell, Kevin Swersky, Josh Tenenbaum, Hugo Larochelle, Richard Zemel

Thermometer Encoding: One Hot Way to Resist Adversarial Examples
Jacob Buckman, Aurko Roy, Colin Raffel, Ian Goodfellow

A Hierarchical Model for Device Placement
Azalia Mirhoseini, Anna Goldie, Hieu Pham, Benoit Steiner, Quoc V. Le, Jeff Dean

Monotonic Chunkwise Attention
Chung-Cheng Chiu, Colin Raffel

Training Confidence-calibrated Classifiers for Detecting Out-of-Distribution Samples
Kimin Lee, Honglak Lee, Kibok Lee, Jinwoo Shin

Trust-PCL: An Off-Policy Trust Region Method for Continuous Control
Ofir Nachum, Mohammad Norouzi, Kelvin Xu, Dale Schuurmans

Ensemble Adversarial Training: Attacks and Defenses
Florian Tramèr, Alexey Kurakin, Nicolas Papernot, Ian Goodfellow, Dan Boneh, Patrick McDaniel

Stochastic Variational Video Prediction
Mohammad Babaeizadeh, Chelsea Finn, Dumitru Erhan, Roy Campbell, Sergey Levine

Depthwise Separable Convolutions for Neural Machine Translation
Lukasz Kaiser, Aidan N. Gomez, Francois Chollet

Don’t Decay the Learning Rate, Increase the Batch Size
Samuel L. Smith, Pieter-Jan Kindermans, Chris Ying, Quoc V. Le

Generative Models of Visually Grounded Imagination
Ramakrishna Vedantam, Ian Fischer, Jonathan Huang, Kevin Murphy

Large Scale Distributed Neural Network Training through Online Distillation
Rohan Anil, Gabriel Pereyra, Alexandre Passos, Robert Ormandi, George E. Dahl, Geoffrey E. Hinton

Learning a Neural Response Metric for Retinal Prosthesis
Nishal P. Shah, Sasidhar Madugula, Alan Litke, Alexander Sher, EJ Chichilnisky, Yoram Singer, Jonathon Shlens

Neumann Optimizer: A Practical Optimization Algorithm for Deep Neural Networks
Shankar Krishnan, Ying Xiao, Rif A. Saurous

A Neural Representation of Sketch Drawings
David Ha, Douglas Eck

Deep Bayesian Bandits Showdown: An Empirical Comparison of Bayesian Deep Networks for Thompson Sampling
Carlos Riquelme, George Tucker, Jasper Snoek

Generalizing Hamiltonian Monte Carlo with Neural Networks
Daniel Levy, Matthew D. Hoffman, Jascha Sohl-Dickstein

Leveraging Grammar and Reinforcement Learning for Neural Program Synthesis
Rudy Bunel, Matthew Hausknecht, Jacob Devlin, Rishabh Singh, Pushmeet Kohli

On the Discrimination-Generalization Tradeoff in GANs
Pengchuan Zhang, Qiang Liu, Dengyong Zhou, Tao Xu, Xiaodong He

A Bayesian Perspective on Generalization and Stochastic Gradient Descent
Samuel L. Smith, Quoc V. Le

Learning how to Explain Neural Networks: PatternNet and PatternAttribution
Pieter-Jan Kindermans, Kristof T. Schütt, Maximilian Alber, Klaus-Robert Müller, Dumitru Erhan, Been Kim, Sven Dähne

Skip RNN: Learning to Skip State Updates in Recurrent Neural Networks
Víctor Campos, Brendan Jou, Xavier Giró-i-Nieto, Jordi Torres, Shih-Fu Chang

Towards Neural Phrase-based Machine Translation
Po-Sen Huang, Chong Wang, Sitao Huang, Dengyong Zhou, Li Deng

Unsupervised Cipher Cracking Using Discrete GANs
Aidan N. Gomez, Sicong Huang, Ivan Zhang, Bryan M. Li, Muhammad Osama, Lukasz Kaiser

Variational Image Compression With A Scale Hyperprior
Johannes Ballé, David Minnen, Saurabh Singh, Sung Jin Hwang, Nick Johnston

Workshop Posters
Local Explanation Methods for Deep Neural Networks Lack Sensitivity to Parameter Values
Julius Adebayo, Justin Gilmer, Ian Goodfellow, Been Kim

Stochastic Gradient Langevin Dynamics that Exploit Neural Network Structure
Zachary Nado, Jasper Snoek, Bowen Xu, Roger Grosse, David Duvenaud, James Martens

Towards Mixed-initiative generation of multi-channel sequential structure
Anna Huang, Sherol Chen, Mark J. Nelson, Douglas Eck

Can Deep Reinforcement Learning Solve Erdos-Selfridge-Spencer Games?
Maithra Raghu, Alex Irpan, Jacob Andreas, Robert Kleinberg, Quoc V. Le, Jon Kleinberg

GILBO: One Metric to Measure Them All
Alexander Alemi, Ian Fischer

HoME: a Household Multimodal Environment
Simon Brodeur, Ethan Perez, Ankesh Anand, Florian Golemo, Luca Celotti, Florian Strub, Jean Rouat, Hugo Larochelle, Aaron Courville

Learning to Learn without Labels
Luke Metz, Niru Maheswaranathan, Brian Cheung, Jascha Sohl-Dickstein

Learning via Social Awareness: Improving Sketch Representations with Facial Feedback
Natasha Jaques, Jesse Engel, David Ha, Fred Bertsch, Rosalind Picard, Douglas Eck

Negative Eigenvalues of the Hessian in Deep Neural Networks
Guillaume Alain, Nicolas Le Roux, Pierre-Antoine Manzagol

Realistic Evaluation of Semi-Supervised Learning Algorithms
Avital Oliver, Augustus Odena, Colin Raffel, Ekin Cubuk, Ian Goodfellow

Winner's Curse? On Pace, Progress, and Empirical Rigor
D. Sculley, Jasper Snoek, Alex Wiltschko, Ali Rahimi

Meta-Learning for Batch Mode Active Learning
Sachin Ravi, Hugo Larochelle

To Prune, or Not to Prune: Exploring the Efficacy of Pruning for Model Compression
Michael Zhu, Suyog Gupta

Adversarial Spheres
Justin Gilmer, Luke Metz, Fartash Faghri, Sam Schoenholz, Maithra Raghu, Martin Wattenberg, Ian Goodfellow

Clustering Meets Implicit Generative Models
Francesco Locatello, Damien Vincent, Ilya Tolstikhin, Gunnar Ratsch, Sylvain Gelly, Bernhard Scholkopf

Decoding Decoders: Finding Optimal Representation Spaces for Unsupervised Similarity Tasks
Vitalii Zhelezniak, Dan Busbridge, April Shen, Samuel L. Smith, Nils Y. Hammerla

Learning Longer-term Dependencies in RNNs with Auxiliary Losses
Trieu Trinh, Quoc Le, Andrew Dai, Thang Luong

Graph Partition Neural Networks for Semi-Supervised Classification
Alexander Gaunt, Danny Tarlow, Marc Brockschmidt, Raquel Urtasun, Renjie Liao, Richard Zemel

Searching for Activation Functions
Prajit Ramachandran, Barret Zoph, Quoc Le

Time-Dependent Representation for Neural Event Sequence Prediction
Yang Li, Nan Du, Samy Bengio

Faster Discovery of Neural Architectures by Searching for Paths in a Large Model
Hieu Pham, Melody Guan, Barret Zoph, Quoc V. Le, Jeff Dean

Intriguing Properties of Adversarial Examples
Ekin Dogus Cubuk, Barret Zoph, Sam Schoenholz, Quoc Le

PPP-Net: Platform-aware Progressive Search for Pareto-optimal Neural Architectures
Jin-Dong Dong, An-Chieh Cheng, Da-Cheng Juan, Wei Wei, Min Sun

The Mirage of Action-Dependent Baselines in Reinforcement Learning
George Tucker, Surya Bhupatiraju, Shixiang Gu, Richard E. Turner, Zoubin Ghahramani, Sergey Levine

Learning to Organize Knowledge with N-Gram Machines
Fan Yang, Jiazhong Nie, William W. Cohen, Ni Lao

Online variance-reducing optimization
Nicolas Le Roux, Reza Babanezhad, Pierre-Antoine Manzagol

Source: Google AI Blog


A Summary of the First Conference on Robot Learning



Whether in the form of autonomous vehicles, home assistants or disaster rescue units, robotic systems of the future will need to be able to operate safely and effectively in human-centric environments. In contrast to their industrial counterparts, they will require a very high level of perceptual awareness of the world around them, and must adapt to continuous changes in both their goals and their environment. Machine learning is a natural answer to both the problems of perception and generalization to unseen environments, and with the recent rapid progress in computer vision and learning capabilities, applying these new technologies to the field of robotics has become a central research question.

This past November, Google helped kickstart and host the first Conference on Robot Learning (CoRL) at our campus in Mountain View. The goal of CoRL was to bring machine learning and robotics experts together for the first time in a single-track conference, in order to foster new research avenues between the two disciplines. The sold-out conference attracted 350 researchers from many institutions worldwide, who collectively presented 74 original papers, along with 5 keynotes by some of the most innovative researchers in the field.
Prof. Sergey Levine, CoRL 2017 co-chair, answering audience questions.
Sayna Ebrahimi (UC Berkeley) presenting her research.
Videos of the inaugural CoRL are available on the conference website. Additionally, we are delighted to announce that next year, CoRL moves to Europe! CoRL 2018 will be chaired by Professor Aude Billard from the École Polytechnique Fédérale de Lausanne, and will tentatively be held at the Eidgenössische Technische Hochschule (ETH) in Zürich on October 29th-31st, 2018. We look forward to seeing you there!
Prof. Ken Goldberg, CoRL 2017 co-chair, and Jeffrey Mahler (UC Berkeley) during a break.

Google at NIPS 2017



This week, Long Beach, California hosts the 31st annual Conference on Neural Information Processing Systems (NIPS 2017), a machine learning and computational neuroscience conference that includes invited talks, demonstrations and presentations of some of the latest in machine learning research. Google will have a strong presence at NIPS 2017, with over 450 Googlers attending to contribute to, and learn from, the broader academic research community via technical talks and posters, workshops, competitions and tutorials.

Google is at the forefront of machine learning, actively exploring virtually all aspects of the field from classical algorithms to deep learning and more. Focusing on both theory and application, much of our work on language understanding, speech, translation, visual processing, and prediction relies on state-of-the-art techniques that push the boundaries of what is possible. In all of those tasks and many others, we develop learning approaches to understand and generalize, providing us with new ways of looking at old problems and helping transform how we work and live.

If you are attending NIPS 2017, we hope you’ll stop by our booth and chat with our researchers about the projects and opportunities at Google that go into solving interesting problems for billions of people, and to see demonstrations of some of the exciting research we pursue. You can also learn more about our work being presented in the list below (Googlers highlighted in blue).

Google is a Platinum Sponsor of NIPS 2017.

Organizing Committee
Program Chair: Samy Bengio
Senior Area Chairs include: Corinna Cortes, Dale Schuurmans, Hugo Larochelle
Area Chairs include: Afshin Rostamizadeh, Amir Globerson, Been Kim, D. Sculley, Dumitru Erhan, Gal Chechik, Hartmut Neven, Honglak Lee, Ian Goodfellow, Jasper Snoek, John Wright, Jon Shlens, Kun Zhang, Lihong Li, Maya Gupta, Moritz Hardt, Navdeep Jaitly, Ryan Adams, Sally Goldman, Sanjiv Kumar, Surya Ganguli, Tara Sainath, Umar Syed, Viren Jain, Vitaly Kuznetsov

Invited Talk
Powering the next 100 years
John Platt

Accepted Papers
A Meta-Learning Perspective on Cold-Start Recommendations for Items
Manasi Vartak, Hugo Larochelle, Arvind Thiagarajan

AdaGAN: Boosting Generative Models
Ilya Tolstikhin, Sylvain Gelly, Olivier Bousquet, Carl-Johann Simon-Gabriel, Bernhard Schölkopf

Deep Lattice Networks and Partial Monotonic Functions
Seungil You, David Ding, Kevin Canini, Jan Pfeifer, Maya Gupta

From which world is your graph
Cheng Li, Varun Kanade, Felix MF Wong, Zhenming Liu

Hiding Images in Plain Sight: Deep Steganography
Shumeet Baluja

Improved Graph Laplacian via Geometric Self-Consistency
Dominique Joncas, Marina Meila, James McQueen

Model-Powered Conditional Independence Test
Rajat Sen, Ananda Theertha Suresh, Karthikeyan Shanmugam, Alexandros Dimakis, Sanjay Shakkottai

Nonlinear random matrix theory for deep learning
Jeffrey Pennington, Pratik Worah

Resurrecting the sigmoid in deep learning through dynamical isometry: theory and practice
Jeffrey Pennington, Samuel Schoenholz, Surya Ganguli

SGD Learns the Conjugate Kernel Class of the Network
Amit Daniely

SVCCA: Singular Vector Canonical Correlation Analysis for Deep Learning Dynamics and Interpretability
Maithra Raghu, Justin Gilmer, Jason Yosinski, Jascha Sohl-Dickstein

Learning Hierarchical Information Flow with Recurrent Neural Modules
Danijar Hafner, Alexander Irpan, James Davidson, Nicolas Heess

Online Learning with Transductive Regret
Scott Yang, Mehryar Mohri

Acceleration and Averaging in Stochastic Descent Dynamics
Walid Krichene, Peter Bartlett

Parameter-Free Online Learning via Model Selection
Dylan J Foster, Satyen Kale, Mehryar Mohri, Karthik Sridharan

Dynamic Routing Between Capsules
Sara Sabour, Nicholas Frosst, Geoffrey E Hinton

Modulating early visual processing by language
Harm de Vries, Florian Strub, Jeremie Mary, Hugo Larochelle, Olivier Pietquin, Aaron C Courville

MarrNet: 3D Shape Reconstruction via 2.5D Sketches
Jiajun Wu, Yifan Wang, Tianfan Xue, Xingyuan Sun, Bill Freeman, Josh Tenenbaum

Affinity Clustering: Hierarchical Clustering at Scale
Mahsa Derakhshan, Soheil Behnezhad, Mohammadhossein Bateni, Vahab Mirrokni, MohammadTaghi Hajiaghayi, Silvio Lattanzi, Raimondas Kiveris

Asynchronous Parallel Coordinate Minimization for MAP Inference
Ofer Meshi, Alexander Schwing

Cold-Start Reinforcement Learning with Softmax Policy Gradient
Nan Ding, Radu Soricut

Filtering Variational Objectives
Chris J Maddison, Dieterich Lawson, George Tucker, Mohammad Norouzi, Nicolas Heess, Andriy Mnih, Yee Whye Teh, Arnaud Doucet

Multi-Armed Bandits with Metric Movement Costs
Tomer Koren, Roi Livni, Yishay Mansour

Multiscale Quantization for Fast Similarity Search
Xiang Wu, Ruiqi Guo, Ananda Theertha Suresh, Sanjiv Kumar, Daniel Holtmann-Rice, David Simcha, Felix Yu

Reducing Reparameterization Gradient Variance
Andrew Miller, Nicholas Foti, Alexander D'Amour, Ryan Adams

Statistical Cost Sharing
Eric Balkanski, Umar Syed, Sergei Vassilvitskii

The Unreasonable Effectiveness of Structured Random Orthogonal Embeddings
Krzysztof Choromanski, Mark Rowland, Adrian Weller

Value Prediction Network
Junhyuk Oh, Satinder Singh, Honglak Lee

REBAR: Low-variance, unbiased gradient estimates for discrete latent variable models
George Tucker, Andriy Mnih, Chris J Maddison, Dieterich Lawson, Jascha Sohl-Dickstein

Approximation and Convergence Properties of Generative Adversarial Learning
Shuang Liu, Olivier Bousquet, Kamalika Chaudhuri

Attention is All you Need
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, Illia Polosukhin

PASS-GLM: polynomial approximate sufficient statistics for scalable Bayesian GLM inference
Jonathan Huggins, Ryan Adams, Tamara Broderick

Repeated Inverse Reinforcement Learning
Kareem Amin, Nan Jiang, Satinder Singh

Fair Clustering Through Fairlets
Flavio Chierichetti, Ravi Kumar, Silvio Lattanzi, Sergei Vassilvitskii

Affine-Invariant Online Optimization and the Low-rank Experts Problem
Tomer Koren, Roi Livni

Batch Renormalization: Towards Reducing Minibatch Dependence in Batch-Normalized Models
Sergey Ioffe

Bridging the Gap Between Value and Policy Based Reinforcement Learning
Ofir Nachum, Mohammad Norouzi, Kelvin Xu, Dale Schuurmans

Discriminative State Space Models
Vitaly Kuznetsov, Mehryar Mohri

Dynamic Revenue Sharing
Santiago Balseiro, Max Lin, Vahab Mirrokni, Renato Leme, Song Zuo

Multi-view Matrix Factorization for Linear Dynamical System Estimation
Mahdi Karami, Martha White, Dale Schuurmans, Csaba Szepesvari

On Blackbox Backpropagation and Jacobian Sensing
Krzysztof Choromanski, Vikas Sindhwani

On the Consistency of Quick Shift
Heinrich Jiang

Revenue Optimization with Approximate Bid Predictions
Andres Munoz, Sergei Vassilvitskii

Shape and Material from Sound
Zhoutong Zhang, Qiujia Li, Zhengjia Huang, Jiajun Wu, Josh Tenenbaum, Bill Freeman

Learning to See Physics via Visual De-animation
Jiajun Wu, Erika Lu, Pushmeet Kohli, Bill Freeman, Josh Tenenbaum

Conference Demos
Electronic Screen Protector with Efficient and Robust Mobile Vision
Hee Jung Ryu, Florian Schroff

Magenta and deeplearn.js: Real-time Control of DeepGenerative Music Models in the Browser
Curtis Hawthorne, Ian Simon, Adam Roberts, Jesse Engel, Daniel Smilkov, Nikhil Thorat, Douglas Eck

Workshops
6th Workshop on Automated Knowledge Base Construction (AKBC) 2017
Program Committee includes: Arvind Neelakantan
Authors include: Jiazhong Nie, Ni Lao

Acting and Interacting in the Real World: Challenges in Robot Learning
Invited Speakers include: Pierre Sermanet

Advances in Approximate Bayesian Inference
Panel moderator: Matthew D. Hoffman

Conversational AI - Today's Practice and Tomorrow's Potential
Invited Speakers include: Matthew Henderson, Dilek Hakkani-Tur
Organizers include: Larry Heck

Extreme Classification: Multi-class and Multi-label Learning in Extremely Large Label Spaces
Invited Speakers include: Ed Chi, Mehryar Mohri

Learning in the Presence of Strategic Behavior
Invited Speakers include: Mehryar Mohri
Presenters include: Andres Munoz Medina, Sebastien Lahaie, Sergei Vassilvitskii, Balasubramanian Sivan

Learning on Distributions, Functions, Graphs and Groups
Invited speakers include: Corinna Cortes

Machine Deception
Organizers include: Ian Goodfellow
Invited Speakers include: Jacob Buckman, Aurko Roy, Colin Raffel, Ian Goodfellow

Machine Learning and Computer Security
Invited Speakers include: Ian Goodfellow
Organizers include: Nicolas Papernot
Authors include: Jacob Buckman, Aurko Roy, Colin Raffel, Ian Goodfellow

Machine Learning for Creativity and Design
Keynote Speakers include: Ian Goodfellow
Organizers include: Doug Eck, David Ha

Machine Learning for Audio Signal Processing (ML4Audio)
Authors include: Aren Jansen, Manoj Plakal, Dan Ellis, Shawn Hershey, Channing Moore, Rif A. Saurous, Yuxuan Wang, RJ Skerry-Ryan, Ying Xiao, Daisy Stanton, Joel Shor, Eric Batternberg, Rob Clark

Machine Learning for Health (ML4H)
Organizers include: Jasper Snoek, Alex Wiltschko
Keynote: Fei-Fei Li

NIPS Time Series Workshop 2017
Organizers include: Vitaly Kuznetsov
Authors include: Brendan Jou

OPT 2017: Optimization for Machine Learning
Organizers include: Sashank Reddi

ML Systems Workshop
Invited Speakers include: Rajat Monga, Alexander Mordvintsev, Chris Olah, Jeff Dean
Authors include: Alex Beutel, Tim Kraska, Ed H. Chi, D. Sculley, Michael Terry

Aligned Artificial Intelligence
Invited Speakers include: Ian Goodfellow

Bayesian Deep Learning
Organizers include: Kevin Murphy
Invited speakers include: Nal Kalchbrenner, Matthew D. Hoffman

BigNeuro 2017
Invited speakers include: Viren Jain

Cognitively Informed Artificial Intelligence: Insights From Natural Intelligence
Authors include: Jiazhong Nie, Ni Lao

Deep Learning At Supercomputer Scale
Organizers include: Erich Elsen, Zak Stone, Brennan Saeta, Danijar Hafner

Deep Learning: Bridging Theory and Practice
Invited Speakers include: Ian Goodfellow

Interpreting, Explaining and Visualizing Deep Learning
Invited Speakers include: Been Kim, Honglak Lee
Authors include: Pieter Kinderman, Sara Hooker, Dumitru Erhan, Been Kim

Learning Disentangled Features: from Perception to Control
Organizers include: Honglak Lee
Authors include: Jasmine Hsu, Arkanath Pathak, Abhinav Gupta, James Davidson, Honglak Lee

Learning with Limited Labeled Data: Weak Supervision and Beyond
Invited Speakers include: Ian Goodfellow

Machine Learning on the Phone and other Consumer Devices
Invited Speakers include: Rajat Monga
Organizers include: Hrishikesh Aradhye
Authors include: Suyog Gupta, Sujith Ravi

Optimal Transport and Machine Learning
Organizers include: Olivier Bousquet

The future of gradient-based machine learning software & techniques
Organizers include: Alex Wiltschko, Bart van Merriënboer

Workshop on Meta-Learning
Organizers include: Hugo Larochelle
Panelists include: Samy Bengio
Authors include: Aliaksei Severyn, Sascha Rothe

Symposiums
Deep Reinforcement Learning Symposium
Authors include: Benjamin Eysenbach, Shane Gu, Julian Ibarz, Sergey Levine

Interpretable Machine Learning
Authors include: Minmin Chen

Metalearning
Organizers include: Quoc V Le

Competitions
Adversarial Attacks and Defences
Organizers include: Alexey Kurakin, Ian Goodfellow, Samy Bengio

Competition IV: Classifying Clinically Actionable Genetic Mutations
Organizers include: Wendy Kan

Tutorial
Fairness in Machine Learning
Solon Barocas, Moritz Hardt


Talk to Google at Node.js Interactive

We’re headed to Vancouver this week, with about 25 Googlers who are incredibly excited to attend Node.js Interactive. With a mix of folks working on Cloud, Chrome, and V8, we’re going to be giving demos and answering questions at the Google booth. 

A few of us are also going to be giving talks. Here’s a list of the talks Googlers will be giving at the conference, ranging from serverless Slack bots to JavaScript performance tuning.

Wednesday, October 4th

Keynote: Franzi Hinkelmann
9:40am - 10:00am, West Ballroom A
Franzi Hinkelmann, Software Engineer @ Google

Franzi is located in Munich, Germany where she works at Google on Chrome V8. Franzi, like James and Anna, is a member of the Node.js Core Technical Committee. She speaks across the globe on the topic of JavaScript virtual machines. She has a PhD in mathematics, but left academia to follow her true passion: writing code.

Franzi will discuss her perspective on Chrome V8 in Node.js, and what the Chrome V8 team is doing to continue to support Node.js. Want to know what the future of browser development looks like? This is a must-attend keynote.

Functionality Abuse: The Forgotten Class of Attacks
12:20pm - 12:50pm, West Ballroom A
Nwokedi Idika, Software Engineer @ Google

If you were given a magic wand that would remove all implementation flaws from your web application, would it be free of security problems? If it took you more than five seconds to say “No!” (or if, worse, you said “Yes!”), then you’re the target audience for this talk. If you’re in the target audience, don’t fret, much of the security community is there with you. After this talk, attendees will understand why the answer to the aforementioned question is an emphatic “No!” and they will learn an approach to decrease their chance of failing to consider an important vector of attack for their current and future web applications.

High Performance JS in V8
5:20pm - 5:50pm, West Ballroom A
Peter Marshall, Software Engineer @ Google

This year, V8 launched Ignition and TurboFan, the new compiler pipeline that handles all JavaScript code generation. Previously, achieving high performance in Node.js meant catering to the oddities of our now-deprecated Crankshaft compiler. This talk covers our new code generation architecture - what makes it special, a bit about how it works, and how to write high performance code for the new V8 pipeline.

Thursday, October 5th

New DevTools Features for JavaScript
11:40am - 12:10pm, West Meeting Room 122
Yang Guo, Software Engineer @ Google

Ever since v8-inspector was moved to V8's repository, we have been working on a number of new features for DevTools, usable for both Chrome and Node.js. The talk will demonstrate code coverage, type profiling, and give a deep dive into how evaluating a code snippet in DevTools console works in V8.

Understanding and Debugging Memory Leaks in Your Node.js Applications
12:20pm - 12:50pm, West Meeting Room 122
Ali Sheikh, Software Engineer @ Google

Memory leaks are hard. This talk will introduce developers to what memory leaks are, how they can exist in a garbage-collected language, and the available tooling that can help them understand and isolate memory leaks in their code. Specifically, it will cover heap snapshots, the new sampling heap profiler in V8, and various other tools available in the ecosystem.

Workshop: Serverless Bots with Node.js
2:20pm - 4:10pm, West Meeting Room 117
Bret McGowen, Developer Advocate @ Google
Amir Shevat, DevRel @ Slack

This talk will show you how to build both voice and chat bots using serverless technologies. Amir Shevat, Head of Developer Relations at Slack, has overseen 17K+ bots deployed on the platform. He will present a maturity model, along with best practices, for enterprise bots covering use cases ranging from devops to HR and marketing. Alan Ho from Google Cloud will then show you how to use various serverless technologies to build these bots. He’ll give you a demo of Slack and Google Assistant bots incorporating Google’s latest serverless technology, including Edge (API Management), Cloud Functions (serverless compute), Cloud Datastore, and API.ai.

Modules Modules Modules
3:00pm - 3:30pm, West Meeting Room 120
Myles Borins, Developer Advocate @ Google

ES Modules and CommonJS go together like Old Bay seasoning and vanilla ice cream. This talk will dig into the inconsistencies of the two patterns, and how the Node.js project is dealing with reconciling them. The talk will look at the history of modules in the JavaScript ecosystem and the subtle differences between them. It will also touch on how ECMA-262 is standardized by TC39, and how ES Modules were developed.

Keynote: The case for Node.js
4:50pm - 5:05pm, West Ballroom A
Justin Beckwith, Product Manager @ Google

Node.js has had a transformational effect on the way we build software. However, convincing your organization to take a bet on Node.js can be difficult. My personal journey with Node.js has included convincing a few teams to take a bet on this technology, and this community. Let’s take a look at the case for Node.js we made at Google, and how you can make the case to bring it to your organization.


We can’t wait to see everyone and have some great conversations. Feel free to reach out to us on Twitter @googlecloud, or request an invite to the Google Cloud Slack community and join the #nodejs channel.

By Justin Beckwith, Languages and Runtimes Team

Google at KDD’17: Graph Mining and Beyond



The 23rd ACM conference on Knowledge Discovery and Data Mining (KDD’17), a main venue for academic and industry research in data science, information retrieval, data mining and machine learning, was held last week in Halifax, Canada. Google has historically been an active participant in KDD, and this year was no exception, with Googlers contributing numerous papers and participating in workshops.

In addition to our overall participation, we are happy to congratulate fellow Googler Bryan Perozzi for receiving the SIGKDD 2017 Doctoral Dissertation Award, which serves to recognize excellent research by doctoral candidates in the field of data mining and knowledge discovery. This award was given in recognition of his thesis on the topic of machine learning on graphs, performed at Stony Brook University under the advisement of Steven Skiena. Part of his thesis was developed during his internships at Google. The thesis dealt with using a restricted set of local graph primitives (such as ego-networks and truncated random walks) to effectively exploit the information around each vertex for classification, clustering, and anomaly detection. Most notably, the work introduced the random-walk paradigm for graph embedding with neural networks in DeepWalk.

DeepWalk: Online Learning of Social Representations, originally presented at KDD'14, outlines a method for using local information obtained from truncated random walks to learn latent representations of nodes in a graph (e.g. users in a social network). The core idea was to treat each segment of a random walk as a sentence “in the language of the graph.” These segments could then be used as input for neural network models to learn representations of the graph’s nodes, using sequence modeling methods like word2vec (which had just been developed at the time). This research continues at Google, most recently with Learning Edge Representations via Low-Rank Asymmetric Projections.
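The walk-generation step at the heart of this idea is easy to sketch. Below is a minimal, illustrative Python version (the function names and the toy graph are our own, not code from the paper): it produces truncated random walks over a graph, each of which can then be treated as a sentence and fed to a word2vec-style model (e.g. gensim's Word2Vec) to learn one embedding per node.

```python
import random

def truncated_random_walks(adjacency, walks_per_node=10, walk_length=8, seed=0):
    """Generate truncated random walks over a graph.

    Each walk is a 'sentence' whose 'words' are node ids; a skip-gram
    model trained on these sentences yields node embeddings.
    """
    rng = random.Random(seed)
    walks = []
    for _ in range(walks_per_node):
        nodes = list(adjacency)
        rng.shuffle(nodes)  # visit start nodes in a fresh order each pass
        for start in nodes:
            walk = [start]
            while len(walk) < walk_length:
                neighbors = adjacency[walk[-1]]
                if not neighbors:  # dead end: truncate the walk early
                    break
                walk.append(rng.choice(neighbors))
            walks.append(walk)
    return walks

# A toy undirected graph: two triangles joined by the edge (2, 3).
graph = {
    0: [1, 2], 1: [0, 2], 2: [0, 1, 3],
    3: [2, 4, 5], 4: [3, 5], 5: [3, 4],
}
walks = truncated_random_walks(graph)
# Each walk can now be passed as one sentence to a word2vec
# implementation to learn latent node representations.
```

Nodes within the same cluster co-occur in walks far more often than nodes in different clusters, which is what lets a sequence model like word2vec place them close together in embedding space.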

The full list of Google contributions at KDD’17 is listed below (Googlers highlighted in blue).

Organizing Committee
Panel Chair: Andrew Tomkins
Research Track Program Chair: Ravi Kumar
Applied Data Science Track Program Chair: Roberto J. Bayardo
Research Track Program Committee: Sergei Vassilvitskii, Alex Beutel, Abhimanyu Das, Nan Du, Alessandro Epasto, Alex Fabrikant, Silvio Lattanzi, Kristen Lefevre, Bryan Perozzi, Karthik Raman, Steffen Rendle, Xiao Yu
Applied Data Science Program Track Committee: Edith Cohen, Ariel Fuxman, D. Sculley, Isabelle Stanton, Martin Zinkevich, Amr Ahmed, Azin Ashkan, Michael Bendersky, James Cook, Nan Du, Balaji Gopalan, Samuel Huston, Konstantinos Kollias, James Kunz, Liang Tang, Morteza Zadimoghaddam

Awards
Doctoral Dissertation Award: Bryan Perozzi, for Local Modeling of Attributed Graphs: Algorithms and Applications.

Doctoral Dissertation Runner-up Award: Alex Beutel, for User Behavior Modeling with Large-Scale Graph Analysis.

Papers
Ego-Splitting Framework: from Non-Overlapping to Overlapping Clusters
Alessandro Epasto, Silvio Lattanzi, Renato Paes Leme

HyperLogLog Hyperextended: Sketches for Concave Sublinear Frequency Statistics
Edith Cohen

Google Vizier: A Service for Black-Box Optimization
Daniel Golovin, Benjamin Solnik, Subhodeep Moitra, Greg Kochanski, John Karro, D. Sculley

Quick Access: Building a Smart Experience for Google Drive
Sandeep Tata, Alexandrin Popescul, Marc Najork, Mike Colagrosso, Julian Gibbons, Alan Green, Alexandre Mah, Michael Smith, Divanshu Garg, Cayden Meyer, Reuben Kan

TFX: A TensorFlow-Based Production-Scale Machine Learning Platform
Denis Baylor, Eric Breck, Heng-Tze Cheng, Noah Fiedel, Chuan Yu Foo, Zakaria Haque, Salem Haykal, Mustafa Ispir, Vihan Jain, Levent Koc, Chiu Yuen Koo, Lukasz Lew, Clemens Mewald, Akshay Modi, Neoklis Polyzotis, Sukriti Ramesh, Sudip Roy, Steven Whang, Martin Wicke, Jarek Wilkiewicz, Xin Zhang, Martin Zinkevich

Construction of Directed 2K Graphs
Balint Tillman, Athina Markopoulou, Carter T. Butts, Minas Gjoka

A Practical Algorithm for Solving the Incoherence Problem of Topic Models In Industrial Applications
Amr Ahmed, James Long, Dan Silva, Yuan Wang

Train and Distribute: Managing Simplicity vs. Flexibility in High-Level Machine Learning Frameworks
Heng-Tze Cheng, Lichan Hong, Mustafa Ispir, Clemens Mewald, Zakaria Haque, Illia Polosukhin, Georgios Roumpos, D. Sculley, Jamie Smith, David Soergel, Yuan Tang, Philip Tucker, Martin Wicke, Cassandra Xia, Jianwei Xie

Learning to Count Mosquitoes for the Sterile Insect Technique
Yaniv Ovadia, Yoni Halpern, Dilip Krishnan, Josh Livni, Daniel Newburger, Ryan Poplin, Tiantian Zha, D. Sculley

Workshops
13th International Workshop on Mining and Learning with Graphs
Keynote Speaker: Vahab Mirrokni - Distributed Graph Mining: Theory and Practice
Contributed talks include:
HARP: Hierarchical Representation Learning for Networks
Haochen Chen, Bryan Perozzi, Yifan Hu and Steven Skiena

Fairness, Accountability, and Transparency in Machine Learning
Contributed talks include:
Fair Clustering Through Fairlets
Flavio Chierichetti, Ravi Kumar, Silvio Lattanzi, Sergei Vassilvitskii
Data Decisions and Theoretical Implications when Adversarially Learning Fair Representations
Alex Beutel, Jilin Chen, Zhe Zhao, Ed H. Chi

Tutorial
TensorFlow
Rajat Monga, Martin Wicke, Daniel ‘Wolff’ Dobson, Joshua Gordon

Google at ICML 2017



Machine learning (ML) is a key strategic focus at Google, with highly active groups pursuing research in virtually all aspects of the field, including deep learning and more classical algorithms, exploring theory as well as application. We utilize scalable tools and architectures to build machine learning systems that enable us to solve deep scientific and engineering challenges in areas of language, speech, translation, music, visual processing and more.

As a leader in ML research, Google is proud to be a Platinum Sponsor of the thirty-fourth International Conference on Machine Learning (ICML 2017), a premier annual event supported by the International Machine Learning Society taking place this week in Sydney, Australia. With over 130 Googlers attending the conference to present publications and host workshops, we look forward to our continued collaboration with the larger ML research community.

If you're attending ICML 2017, we hope you'll visit the Google booth and talk with our researchers to learn more about the exciting work, creativity and fun that goes into solving some of the field's most interesting challenges. Our researchers will also be available to talk about and demo several recent efforts, including the technology behind Facets, neural audio synthesis with Nsynth, a Q&A session on the Google Brain Residency program and much more. You can also learn more about our research being presented at ICML 2017 in the list below (Googlers highlighted in blue).

ICML 2017 Committees
Senior Program Committee includes: Alex Kulesza, Amr Ahmed, Andrew Dai, Corinna Cortes, George Dahl, Hugo Larochelle, Matthew Hoffman, Maya Gupta, Moritz Hardt, Quoc Le

Sponsorship Co-Chair: Ryan Adams

Publications
Robust Adversarial Reinforcement Learning
Lerrel Pinto, James Davidson, Rahul Sukthankar, Abhinav Gupta

Tight Bounds for Approximate Carathéodory and Beyond
Vahab Mirrokni, Renato Leme, Adrian Vladu, Sam Wong

Sharp Minima Can Generalize For Deep Nets
Laurent Dinh, Razvan Pascanu, Samy Bengio, Yoshua Bengio

Geometry of Neural Network Loss Surfaces via Random Matrix Theory
Jeffrey Pennington, Yasaman Bahri

Conditional Image Synthesis with Auxiliary Classifier GANs
Augustus Odena, Christopher Olah, Jon Shlens

Learning Deep Latent Gaussian Models with Markov Chain Monte Carlo
Maithra Raghu, Ben Poole, Surya Ganguli, Jon Kleinberg, Jascha Sohl-Dickstein

On the Expressive Power of Deep Neural Networks
Maithra Raghu, Ben Poole, Surya Ganguli, Jon Kleinberg, Jascha Sohl-Dickstein

AdaNet: Adaptive Structural Learning of Artificial Neural Networks
Corinna Cortes, Xavi Gonzalvo, Vitaly Kuznetsov, Mehryar Mohri, Scott Yang

Learned Optimizers that Scale and Generalize
Olga Wichrowska, Niru Maheswaranathan, Matthew Hoffman, Sergio Gomez, Misha Denil, Nando de Freitas, Jascha Sohl-Dickstein

Adaptive Feature Selection: Computationally Efficient Online Sparse Linear Regression under RIP
Satyen Kale, Zohar Karnin, Tengyuan Liang, David Pal

Algorithms for ℓp Low-Rank Approximation
Flavio Chierichetti, Sreenivas Gollapudi, Ravi Kumar, Silvio Lattanzi, Rina Panigrahy, David Woodruff

Consistent k-Clustering
Silvio Lattanzi, Sergei Vassilvitskii

Input Switched Affine Networks: An RNN Architecture Designed for Interpretability
Jakob Foerster, Justin Gilmer, Jan Chorowski, Jascha Sohl-Dickstein, David Sussillo

Online and Linear-Time Attention by Enforcing Monotonic Alignments
Colin Raffel, Thang Luong, Peter Liu, Ron Weiss, Douglas Eck

Gradient Boosted Decision Trees for High Dimensional Sparse Output
Si Si, Huan Zhang, Sathiya Keerthi, Dhruv Mahajan, Inderjit Dhillon, Cho-Jui Hsieh

Sequence Tutor: Conservative fine-tuning of sequence generation models with KL-control
Natasha Jaques, Shixiang Gu, Dzmitry Bahdanau, Jose Hernandez-Lobato, Richard E Turner, Douglas Eck

Uniform Convergence Rates for Kernel Density Estimation
Heinrich Jiang

Density Level Set Estimation on Manifolds with DBSCAN
Heinrich Jiang

Maximum Selection and Ranking under Noisy Comparisons
Moein Falahatgar, Alon Orlitsky, Venkatadheeraj Pichapati, Ananda Suresh

Neural Audio Synthesis of Musical Notes with WaveNet Autoencoders
Cinjon Resnick, Adam Roberts, Jesse Engel, Douglas Eck, Sander Dieleman, Karen Simonyan, Mohammad Norouzi

Distributed Mean Estimation with Limited Communication
Ananda Suresh, Felix Yu, Sanjiv Kumar, Brendan McMahan

Learning to Generate Long-term Future via Hierarchical Prediction
Ruben Villegas, Jimei Yang, Yuliang Zou, Sungryull Sohn, Xunyu Lin, Honglak Lee

Variational Boosting: Iteratively Refining Posterior Approximations
Andrew Miller, Nicholas J Foti, Ryan Adams

RobustFill: Neural Program Learning under Noisy I/O
Jacob Devlin, Jonathan Uesato, Surya Bhupatiraju, Rishabh Singh, Abdel-rahman Mohamed, Pushmeet Kohli

A Unified Maximum Likelihood Approach for Estimating Symmetric Properties of Discrete Distributions
Jayadev Acharya, Hirakendu Das, Alon Orlitsky, Ananda Suresh

Axiomatic Attribution for Deep Networks
Ankur Taly, Qiqi Yan, Mukund Sundararajan

Differentiable Programs with Neural Libraries
Alex L Gaunt, Marc Brockschmidt, Nate Kushman, Daniel Tarlow

Latent LSTM Allocation: Joint Clustering and Non-Linear Dynamic Modeling of Sequence Data
Manzil Zaheer, Amr Ahmed, Alex Smola

Device Placement Optimization with Reinforcement Learning
Azalia Mirhoseini, Hieu Pham, Quoc Le, Benoit Steiner, Mohammad Norouzi, Rasmus Larsen, Yuefeng Zhou, Naveen Kumar, Samy Bengio, Jeff Dean

Canopy — Fast Sampling with Cover Trees
Manzil Zaheer, Satwik Kottur, Amr Ahmed, Jose Moura, Alex Smola

Zero-Shot Task Generalization with Multi-Task Deep Reinforcement Learning
Junhyuk Oh, Satinder Singh, Honglak Lee, Pushmeet Kohli

Probabilistic Submodular Maximization in Sub-Linear Time
Serban Stan, Morteza Zadimoghaddam, Andreas Krause, Amin Karbasi

Deep Value Networks Learn to Evaluate and Iteratively Refine Structured Outputs
Michael Gygli, Mohammad Norouzi, Anelia Angelova

Stochastic Generative Hashing
Bo Dai, Ruiqi Guo, Sanjiv Kumar, Niao He, Le Song

Accelerating Eulerian Fluid Simulation With Convolutional Networks
Jonathan Tompson, Kristofer D Schlachter, Pablo Sprechmann, Ken Perlin

Large-Scale Evolution of Image Classifiers
Esteban Real, Sherry Moore, Andrew Selle, Saurabh Saxena, Yutaka Leon Suematsu, Jie Tan, Quoc Le, Alexey Kurakin

Neural Message Passing for Quantum Chemistry
Justin Gilmer, Samuel Schoenholz, Patrick Riley, Oriol Vinyals, George Dahl

Neural Optimizer Search with Reinforcement Learning
Irwan Bello, Barret Zoph, Vijay Vasudevan, Quoc Le

Workshops
Implicit Generative Models
Organizers include: Ian Goodfellow

Learning to Generate Natural Language
Accepted Papers include:
Generating High-Quality and Informative Conversation Responses with Sequence-to-Sequence Models
Louis Shao, Stephan Gouws, Denny Britz, Anna Goldie, Brian Strope, Ray Kurzweil

Lifelong Learning: A Reinforcement Learning Approach
Accepted Papers include:
Bridging the Gap Between Value and Policy Based Reinforcement Learning
Ofir Nachum, Mohammad Norouzi, Kelvin Xu, Dale Schuurmans

Principled Approaches to Deep Learning
Organizers include: Robert Gens
Program Committee includes: Jascha Sohl-Dickstein

Workshop on Human Interpretability in Machine Learning (WHI)
Organizers include: Been Kim

ICML Workshop on TinyML: ML on a Test-time Budget for IoT, Mobiles, and Other Applications
Invited speakers include: Sujith Ravi

Deep Structured Prediction
Organizers include: Gal Chechik, Ofer Meshi
Program Committee includes: Vitaly Kuznetsov, Kevin Murphy
Invited Speakers include: Ryan Adams
Accepted Papers include:
Filtering Variational Objectives
Chris J Maddison, Dieterich Lawson, George Tucker, Mohammad Norouzi, Nicolas Heess, Arnaud Doucet, Andriy Mnih, Yee Whye Teh
REBAR: Low-variance, unbiased gradient estimates for discrete latent variable models
George Tucker, Andriy Mnih, Chris J Maddison, Dieterich Lawson, Jascha Sohl-Dickstein

Machine Learning in Speech and Language Processing
Organizers include: Tara Sainath
Invited speakers include: Ron Weiss

Picky Learners: Choosing Alternative Ways to Process Data
Invited speakers include: Tomer Koren
Organizers include: Corinna Cortes, Mehryar Mohri

Private and Secure Machine Learning
Keynote Speakers include: Ilya Mironov

Reproducibility in Machine Learning Research
Invited Speakers include: Hugo Larochelle, Francois Chollet
Organizers include: Samy Bengio

Time Series Workshop
Organizers include: Vitaly Kuznetsov

Tutorial
Interpretable Machine Learning
Presenters include: Been Kim


Google at ACL 2017



This week, Vancouver, Canada hosts the 2017 Annual Meeting of the Association for Computational Linguistics (ACL 2017), the premier conference in the field of natural language understanding, covering a broad spectrum of diverse research areas that are concerned with computational approaches to natural language.

As a leader in natural language processing and understanding, and a Platinum sponsor of ACL 2017, Google will be on hand to showcase research interests that include syntax, semantics, discourse, conversation, multilingual modeling, sentiment analysis, question answering, summarization, and generally building better systems using labeled and unlabeled data, state-of-the-art modeling, and learning from indirect supervision.

If you’re attending ACL 2017, we hope you’ll stop by the Google booth to check out some demos, meet our researchers, and discuss projects and opportunities at Google that involve solving interesting problems for billions of people. Learn more about the Google research being presented at ACL 2017 below (Googlers highlighted in blue).

Organizing Committee
Area Chairs include: Sujith Ravi (Machine Learning), Thang Luong (Machine Translation)
Publication Chairs include: Margaret Mitchell (Advisory)

Accepted Papers
A Polynomial-Time Dynamic Programming Algorithm for Phrase-Based Decoding with a Fixed Distortion Limit
Yin-Wen Chang, Michael Collins
(Oral Session)

Cross-Sentence N-ary Relation Extraction with Graph LSTMs
Nanyun Peng, Hoifung Poon, Chris Quirk, Kristina Toutanova, Wen-tau Yih
(Oral Session)

Neural Symbolic Machines: Learning Semantic Parsers on Freebase with Weak Supervision
Chen Liang, Jonathan Berant, Quoc Le, Kenneth D. Forbus, Ni Lao

Coarse-to-Fine Question Answering for Long Documents
Eunsol Choi, Daniel Hewlett, Jakob Uszkoreit, Illia Polosukhin, Alexandre Lacoste, Jonathan Berant

Automatic Compositor Attribution in the First Folio of Shakespeare
Maria Ryskina, Hannah Alpert-Abrams, Dan Garrette, Taylor Berg-Kirkpatrick

A Nested Attention Neural Hybrid Model for Grammatical Error Correction
Jianshu Ji, Qinlong Wang, Kristina Toutanova, Yongen Gong, Steven Truong, Jianfeng Gao

Get To The Point: Summarization with Pointer-Generator Networks
Abigail See, Peter J. Liu, Christopher D. Manning

Identifying 1950s American Jazz Composers: Fine-Grained IsA Extraction via Modifier Composition
Ellie Pavlick*, Marius Pasca

Learning to Skim Text
Adams Wei Yu, Hongrae Lee, Quoc Le

Workshops
2017 ACL Student Research Workshop
Program Committee includes: Emily Pitler, Brian Roark, Richard Sproat

WiNLP: Women and Underrepresented Minorities in Natural Language Processing
Organizers include: Margaret Mitchell
Gold Sponsor

BUCC: 10th Workshop on Building and Using Comparable Corpora
Scientific Committee includes: Richard Sproat

CLPsych: Computational Linguistics and Clinical Psychology – From Linguistic Signal to Clinical Reality
Program Committee includes: Brian Roark, Richard Sproat

Repl4NLP: 2nd Workshop on Representation Learning for NLP
Program Committee includes: Ankur Parikh, John Platt

RoboNLP: Language Grounding for Robotics
Program Committee includes: Ankur Parikh, Tom Kwiatkowski

CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies
Management Group includes: Slav Petrov

CoNLL-SIGMORPHON-2017 Shared Task: Universal Morphological Reinflection
Organizing Committee includes: Manaal Faruqui
Invited Speaker: Chris Dyer

SemEval: 11th International Workshop on Semantic Evaluation
Organizers include: Daniel Cer

ALW1: 1st Workshop on Abusive Language Online
Panelists include: Margaret Mitchell

EventStory: Events and Stories in the News
Program Committee includes: Silvia Pareti

NMT: 1st Workshop on Neural Machine Translation
Organizing Committee includes: Thang Luong
Program Committee includes: Hieu Pham, Taro Watanabe
Invited Speaker: Quoc Le

Tutorials
Natural Language Processing for Precision Medicine
Hoifung Poon, Chris Quirk, Kristina Toutanova, Wen-tau Yih

Deep Learning for Dialogue Systems
Yun-Nung Chen, Asli Celikyilmaz, Dilek Hakkani-Tur



* Contributed during an internship at Google.