Tag Archives: conference

Google at NeurIPS 2019



This week, Vancouver hosts the 33rd annual Conference on Neural Information Processing Systems (NeurIPS 2019), the biggest machine learning conference of the year. The conference includes invited talks, demonstrations and presentations of some of the latest in machine learning research. As a Diamond Sponsor of NeurIPS 2019, Google will have a strong presence, with more than 500 Googlers attending to contribute to, and learn from, the broader academic research community via talks, posters, workshops, competitions and tutorials. We will be presenting work that pushes the boundaries of what is possible in language understanding, translation, speech recognition and visual & audio perception, with Googlers co-authoring more than 130 accepted papers.

If you are attending NeurIPS 2019, we hope you’ll stop by our booth and chat with our researchers about the projects and opportunities at Google that go into solving the world's most challenging research problems, and see demonstrations of some of the exciting research we pursue, such as ML-based Flood Forecasting, AI for Social Good, Google Research Football, Google Dataset Search, TF-Agents and much more. You can also learn more about our work being presented in the list below (Google affiliations highlighted in blue).

NeurIPS Foundation Board
Samy Bengio, Corinna Cortes

NeurIPS Advisory Board
John C. Platt, Fernando Pereira, Dale Schuurmans

NeurIPS Program Committee
Program Chair: Hugo Larochelle
Diversity & Inclusion Co-Chair: Katherine Heller
Meetup Chair: Nicolas Le Roux
Party Co-Chair: Pablo Samuel Castro

Senior Area Chairs include: Amir Globerson, Claudio Gentile, Cordelia Schmid, Corinna Cortes, Dale Schuurmans, Elad Hazan, Honglak Lee, Mehryar Mohri, Peter Bartlett, Satyen Kale, Sergey Levine, Surya Ganguli

Area Chairs include: Afshin Rostamizadeh, Alex Kulesza, Amin Karbasi, Andrew Dai, Been Kim, Boqing Gong, Branislav Kveton, Ce Liu, Charles Sutton, Chelsea Finn, Cho-Jui Hsieh, D Sculley, Danny Tarlow, David Held, Denny Zhou, Yann Dauphin, Dustin Tran, Hartmut Neven, Hossein Mobahi, Ilya Tolstikhin, Jasper Snoek, Jean-Philippe Vert, Jeffrey Pennington, Kevin Swersky, Kun Zhang, Kunal Talwar, Lihong Li, Manzil Zaheer, Marc G Bellemare, Marco Cuturi, Maya Gupta, Meg Mitchell, Minmin Chen, Mohammad Norouzi, Moustapha Cisse, Olivier Bachem, Qiang Liu, Rong Ge, Sanjiv Kumar, Sanmi Koyejo, Sebastian Nowozin, Sergei Vassilvitskii, Shivani Agarwal, Slav Petrov, Srinadh Bhojanapalli, Stephen Bach, Timnit Gebru, Tomer Koren, Vitaly Feldman, William Cohen, Nicolas Le Roux

NeurIPS Workshops Program Committee
Yann Dauphin, Honglak Lee, Sebastian Nowozin, Fernanda Viegas

NeurIPS Invited Talk
Social Intelligence
Blaise Aguera y Arcas

Accepted Papers
Memory Efficient Adaptive Optimization
Rohan Anil, Vineet Gupta, Tomer Koren, Yoram Singer

Stand-Alone Self-Attention in Vision Models
Niki Parmar, Prajit Ramachandran, Ashish Vaswani, Irwan Bello, Anselm Levskaya, Jon Shlens

High Fidelity Video Prediction with Large Neural Nets
Ruben Villegas, Arkanath Pathak, Harini Kannan, Dumitru Erhan, Quoc V. Le, Honglak Lee

Unsupervised Learning of Object Structure and Dynamics from Videos
Matthias Minderer, Chen Sun, Ruben Villegas, Forrester Cole, Kevin Murphy, Honglak Lee

GPipe: Efficient Training of Giant Neural Networks using Pipeline Parallelism
Yanping Huang, Youlong Cheng, Ankur Bapna, Orhan Firat, Dehao Chen, Mia Chen, Hyouk Joong Lee, Jiquan Ngiam, Quoc V. Le, Yonghui Wu, Zhifeng Chen

Quadratic Video Interpolation
Xiangyu Xu, Li Si-Yao, Wenxiu Sun, Qian Yin, Ming-Hsuan Yang

Online Stochastic Shortest Path with Bandit Feedback and Unknown Transition Function
Aviv Rosenberg, Yishay Mansour

Individual Regret in Cooperative Nonstochastic Multi-Armed Bandits
Yogev Bar-On, Yishay Mansour

Learning to Screen
Alon Cohen, Avinatan Hassidim, Haim Kaplan, Yishay Mansour, Shay Moran

DualDICE: Behavior-Agnostic Estimation of Discounted Stationary Distribution Corrections
Ofir Nachum, Yinlam Chow, Bo Dai, Lihong Li

A Kernel Loss for Solving the Bellman Equation
Yihao Feng, Lihong Li, Qiang Liu

Accurate Uncertainty Estimation and Decomposition in Ensemble Learning
Jeremiah Liu, John Paisley, Marianthi-Anna Kioumourtzoglou, Brent Coull

Saccader: Improving Accuracy of Hard Attention Models for Vision
Gamaleldin F. Elsayed, Simon Kornblith, Quoc V. Le

Invertible Convolutional Flow
Mahdi Karami, Dale Schuurmans, Jascha Sohl-Dickstein, Laurent Dinh, Daniel Duckworth

Hypothesis Set Stability and Generalization
Dylan J. Foster, Spencer Greenberg, Satyen Kale, Haipeng Luo, Mehryar Mohri, Karthik Sridharan

Bandits with Feedback Graphs and Switching Costs
Raman Arora, Teodor V. Marinov, Mehryar Mohri

Regularized Gradient Boosting
Corinna Cortes, Mehryar Mohri, Dmitry Storcheus

Logarithmic Regret for Online Control
Naman Agarwal, Elad Hazan, Karan Singh

Sampled Softmax with Random Fourier Features
Ankit Singh Rawat, Jiecao Chen, Felix Yu, Ananda Theertha Suresh, Sanjiv Kumar

Multilabel Reductions: What is My Loss Optimising?
Aditya Krishna Menon, Ankit Singh Rawat, Sashank Reddi, Sanjiv Kumar

MetaInit: Initializing Learning by Learning to Initialize
Yann N. Dauphin, Sam Schoenholz

Generalization Bounds for Neural Networks via Approximate Description Length
Amit Daniely, Elad Granot

Variance Reduction of Bipartite Experiments through Correlation Clustering
Jean Pouget-Abadie, Kevin Aydin, Warren Schudy, Kay Brodersen, Vahab Mirrokni

Likelihood Ratios for Out-of-Distribution Detection
Jie Ren, Peter J. Liu, Emily Fertig, Jasper Snoek, Ryan Poplin, Mark A. DePristo, Joshua V. Dillon, Balaji Lakshminarayanan

Can You Trust Your Model’s Uncertainty? Evaluating Predictive Uncertainty Under Dataset Shift
Yaniv Ovadia, Emily Fertig, Jie Jessie Ren, D. Sculley, Josh Dillon, Sebastian Nowozin, Zack Nado, Balaji Lakshminarayanan, Jasper Snoek

Surrogate Objectives for Batch Policy Optimization in One-step Decision Making
Minmin Chen, Ramki Gummadi, Chris Harris, Dale Schuurmans

Globally Optimal Learning for Structured Elliptical Losses
Yoav Wald, Nofar Noy, Gal Elidan, Ami Wiesel

DPPNet: Approximating Determinantal Point Processes with Deep Networks
Zelda Mariet, Yaniv Ovadia, Jasper Snoek

Graph Normalizing Flows
Jenny Liu, Aviral Kumar, Jimmy Ba, Jamie Kiros, Kevin Swersky

When Does Label Smoothing Help?
Rafael Muller, Simon Kornblith, Geoff Hinton

On the Transfer of Inductive Bias from Simulation to the Real World: a New Disentanglement Dataset
Muhammad Waleed Gondal, Manuel Wüthrich, Đorđe Miladinović, Francesco Locatello, Martin Breidt, Valentin Volchkov, Joel Akpo, Olivier Bachem, Bernhard Schölkopf, Stefan Bauer

On the Fairness of Disentangled Representations
Francesco Locatello, Gabriele Abbati, Tom Rainforth, Stefan Bauer, Bernhard Schölkopf, Olivier Bachem

Are Disentangled Representations Helpful for Abstract Visual Reasoning?
Sjoerd van Steenkiste, Francesco Locatello, Jürgen Schmidhuber, Olivier Bachem

Don’t Blame the ELBO! A Linear VAE Perspective on Posterior Collapse
James Lucas, George Tucker, Roger Grosse, Mohammad Norouzi

Stabilizing Off-Policy Q-Learning via Bootstrapping Error Reduction
Aviral Kumar, Justin Fu, Matthew Soh, George Tucker, Sergey Levine

Optimizing Generalized Rate Metrics with Game Equilibrium
Harikrishna Narasimhan, Andrew Cotter, Maya Gupta

On Making Stochastic Classifiers Deterministic
Andrew Cotter, Harikrishna Narasimhan, Maya Gupta

Discrete Flows: Invertible Generative Models of Discrete Data
Dustin Tran, Keyon Vafa, Kumar Agrawal, Laurent Dinh, Ben Poole

Graph Agreement Models for Semi-Supervised Learning
Otilia Stretcu, Krishnamurthy Viswanathan, Dana Movshovitz-Attias, Emmanouil Platanios, Andrew Tomkins, Sujith Ravi

A Robust Non-Clairvoyant Dynamic Mechanism for Contextual Auctions
Yuan Deng, Sébastien Lahaie, Vahab Mirrokni

Adversarial Robustness through Local Linearization
Chongli Qin, James Martens, Sven Gowal, Dilip Krishnan, Krishnamurthy (Dj) Dvijotham, Alhussein Fawzi, Soham De, Robert Stanforth, Pushmeet Kohli

A Geometric Perspective on Optimal Representations for Reinforcement Learning
Marc G. Bellemare, Will Dabney, Robert Dadashi, Adrien Ali Taiga, Pablo Samuel Castro, Nicolas Le Roux, Dale Schuurmans, Tor Lattimore, Clare Lyle

Online Learning via the Differential Privacy Lens
Jacob Abernethy, Young Hun Jung, Chansoo Lee, Audra McMillan, Ambuj Tewari

Reducing the Variance in Online Optimization by Transporting Past Gradients
Sébastien M. R. Arnold, Pierre-Antoine Manzagol, Reza Babanezhad, Ioannis Mitliagkas, Nicolas Le Roux

Universality and Individuality in Neural Dynamics Across Large Populations of Recurrent Networks
Niru Maheswaranathan, Alex Williams, Matt Golub, Surya Ganguli, David Sussillo

Reverse Engineering Recurrent Networks for Sentiment Classification Reveals Line Attractor Dynamics
Niru Maheswaranathan, Alex H. Williams, Matthew D. Golub, Surya Ganguli, David Sussillo

Strategizing Against No-Regret Learners
Yuan Deng, Jon Schneider, Balasubramanian Sivan

Prior-Free Dynamic Auctions with Low Regret Buyers
Yuan Deng, Jon Schneider, Balasubramanian Sivan

Private Stochastic Convex Optimization with Optimal Rates
Raef Bassily, Vitaly Feldman, Kunal Talwar, Abhradeep Thakurta

Computational Separations between Sampling and Optimization
Kunal Talwar

Momentum-Based Variance Reduction in Non-Convex SGD
Ashok Cutkosky and Francesco Orabona

Kernel Truncated Randomized Ridge Regression: Optimal Rates and Low Noise Acceleration
Kwang-Sung Jun, Ashok Cutkosky, Francesco Orabona

Fast and Flexible Multi-Task Classification using Conditional Neural Adaptive Processes
James Requeima, Jonathan Gordon, John Bronskill, Sebastian Nowozin, Richard E. Turner

Icebreaker: Element-wise Active Information Acquisition with Bayesian Deep Latent Gaussian Model
Wenbo Gong, Sebastian Tschiatschek, Richard E. Turner, Sebastian Nowozin, Jose Miguel Hernandez-Lobato, Cheng Zhang

Multiview Aggregation for Learning Category-Specific Shape Reconstruction
Srinath Sridhar, Davis Rempe, Julien Valentin, Sofien Bouaziz, Leonidas J. Guibas

Visualizing and Measuring the Geometry of BERT
Andy Coenen, Emily Reif, Ann Yuan, Been Kim, Adam Pearce, Fernanda Viégas, Martin Wattenberg

Locality-Sensitive Hashing for f-Divergences: Mutual Information Loss and Beyond
Lin Chen, Hossein Esfandiari, Thomas Fu, Vahab S. Mirrokni

A Benchmark for Interpretability Methods in Deep Neural Networks
Sara Hooker, Dumitru Erhan, Pieter-jan Kindermans, Been Kim

Practical and Consistent Estimation of f-Divergences
Paul Rubenstein, Olivier Bousquet, Josip Djolonga, Carlos Riquelme, Ilya Tolstikhin

Tree-Sliced Variants of Wasserstein Distances
Tam Le, Makoto Yamada, Kenji Fukumizu, Marco Cuturi

Game Design for Eliciting Distinguishable Behavior
Fan Yang, Liu Leqi, Yifan Wu, Zachary Lipton, Pradeep Ravikumar, Tom M Mitchell, William Cohen

Differentially Private Anonymized Histograms
Ananda Theertha Suresh

Locally Private Gaussian Estimation
Matthew Joseph, Janardhan Kulkarni, Jieming Mao, Zhiwei Steven Wu

Exponential Family Estimation via Adversarial Dynamics Embedding
Bo Dai, Zhen Liu, Hanjun Dai, Niao He, Arthur Gretton, Le Song, Dale Schuurmans

Learning to Predict Without Looking Ahead: World Models Without Forward Prediction
C. Daniel Freeman, Luke Metz, David Ha

Adaptive Density Estimation for Generative Models
Thomas Lucas, Konstantin Shmelkov, Karteek Alahari, Cordelia Schmid, Jakob Verbeek

Weight Agnostic Neural Networks
Adam Gaier, David Ha

Retrosynthesis Prediction with Conditional Graph Logic Network
Hanjun Dai, Chengtao Li, Connor Coley, Bo Dai, Le Song

Large Scale Structure of Neural Network Loss Landscapes
Stanislav Fort, Stanislaw Jastrzebski

Off-Policy Evaluation via Off-Policy Classification
Alex Irpan, Kanishka Rao, Konstantinos Bousmalis, Chris Harris, Julian Ibarz, Sergey Levine

Domes to Drones: Self-Supervised Active Triangulation for 3D Human Pose Reconstruction
Aleksis Pirinen, Erik Gartner, Cristian Sminchisescu

Energy-Inspired Models: Learning with Sampler-Induced Distributions
Dieterich Lawson, George Tucker, Bo Dai, Rajesh Ranganath

From Deep Learning to Mechanistic Understanding in Neuroscience: The Structure of Retinal Prediction
Hidenori Tanaka, Aran Nayebi, Niru Maheswaranathan, Lane McIntosh, Stephen Baccus, Surya Ganguli

Language as an Abstraction for Hierarchical Deep Reinforcement Learning
Yiding Jiang, Shixiang Gu, Kevin Murphy, Chelsea Finn

Bayesian Layers: A Module for Neural Network Uncertainty
Dustin Tran, Michael W. Dusenberry, Mark van der Wilk, Danijar Hafner

Adaptive Temporal-Difference Learning for Policy Evaluation with Per-State Uncertainty Estimates
Hugo Penedones, Carlos Riquelme, Damien Vincent, Hartmut Maennel, Timothy Mann, Andre Barreto, Sylvain Gelly, Gergely Neu

A Unified Framework for Data Poisoning Attack to Graph-based Semi-Supervised Learning
Xuanqing Liu, Si Si, Xiaojin Zhu, Yang Li, Cho-Jui Hsieh

MixMatch: A Holistic Approach to Semi-Supervised Learning
David Berthelot, Nicholas Carlini, Ian Goodfellow (work done while at Google), Avital Oliver, Nicolas Papernot, Colin Raffel

SMILe: Scalable Meta Inverse Reinforcement Learning through Context-Conditional Policies
Seyed Kamyar Seyed Ghasemipour, Shixiang (Shane) Gu, Richard Zemel

Limits of Private Learning with Access to Public Data
Noga Alon, Raef Bassily, Shay Moran

Regularized Weighted Low Rank Approximation
Frank Ban, David Woodruff, Richard Zhang

Unsupervised Curricula for Visual Meta-Reinforcement Learning
Allan Jabri, Kyle Hsu, Abhishek Gupta, Benjamin Eysenbach, Sergey Levine, Chelsea Finn

Secretary Ranking with Minimal Inversions
Sepehr Assadi, Eric Balkanski, Renato Paes Leme

Mixtape: Breaking the Softmax Bottleneck Efficiently
Zhilin Yang, Thang Luong, Russ Salakhutdinov, Quoc V. Le

Budgeted Reinforcement Learning in Continuous State Space
Nicolas Carrara, Edouard Leurent, Romain Laroche, Tanguy Urvoy, Odalric-Ambrym Maillard, Olivier Pietquin

From Complexity to Simplicity: Adaptive ES-Active Subspaces for Blackbox Optimization
Krzysztof Choromanski, Aldo Pacchiano, Jack Parker-Holder, Yunhao Tang

Generalization Bounds for Neural Networks via Approximate Description Length
Amit Daniely, Elad Granot

Flattening a Hierarchical Clustering through Active Learning
Fabio Vitale, Anand Rajagopalan, Claudio Gentile

Robust Attribution Regularization
Jiefeng Chen, Xi Wu, Vaibhav Rastogi, Yingyu Liang, Somesh Jha

Robustness Verification of Tree-based Models
Hongge Chen, Huan Zhang, Si Si, Yang Li, Duane Boning, Cho-Jui Hsieh

Meta Architecture Search
Albert Shaw, Wei Wei, Weiyang Liu, Le Song, Bo Dai

Contextual Bandits with Cross-Learning
Santiago Balseiro, Negin Golrezaei, Mohammad Mahdian, Vahab Mirrokni, Jon Schneider

Dynamic Incentive-Aware Learning: Robust Pricing in Contextual Auctions
Negin Golrezaei, Adel Javanmard, Vahab Mirrokni

Optimizing Generalized Rate Metrics with Three Players
Harikrishna Narasimhan, Andrew Cotter, Maya Gupta

Noise-Tolerant Fair Classification
Alexandre Louis Lamy, Ziyuan Zhong, Aditya Krishna Menon, Nakul Verma

Towards Automatic Concept-based Explanations
Amirata Ghorbani, James Wexler, James Zou, Been Kim

Locally Private Learning without Interaction Requires Separation
Amit Daniely, Vitaly Feldman

Learning GANs and Ensembles Using Discrepancy
Ben Adlam, Corinna Cortes, Mehryar Mohri, Ningshan Zhang

CondConv: Conditionally Parameterized Convolutions for Efficient Inference
Brandon Yang, Gabriel Bender, Quoc V. Le, Jiquan Ngiam

A Fourier Perspective on Model Robustness in Computer Vision
Dong Yin, Raphael Gontijo Lopes, Jonathon Shlens, Ekin D. Cubuk, Justin Gilmer

Robust Bi-Tempered Logistic Loss Based on Bregman Divergences
Ehsan Amid, Manfred K. Warmuth, Rohan Anil, Tomer Koren

When Does Label Smoothing Help?
Rafael Müller, Simon Kornblith, Geoffrey Hinton

Memory Efficient Adaptive Optimization
Rohan Anil, Vineet Gupta, Tomer Koren, Yoram Singer

Which Algorithmic Choices Matter at Which Batch Sizes? Insights From a Noisy Quadratic Model
Guodong Zhang, Lala Li, Zachary Nado, James Martens, Sushant Sachdeva, George E. Dahl, Christopher J. Shallue, Roger Grosse

Wide Neural Networks of Any Depth Evolve as Linear Models Under Gradient Descent
Jaehoon Lee, Lechao Xiao, Samuel S. Schoenholz, Yasaman Bahri, Roman Novak, Jascha Sohl-Dickstein, Jeffrey Pennington

Universality and Individuality in Neural Dynamics Across Large Populations of Recurrent Networks
Niru Maheswaranathan, Alex H. Williams, Matthew D. Golub, Surya Ganguli, David Sussillo

Abstract Reasoning with Distracting Features
Kecheng Zheng, Zheng-Jun Zha, Wei Wei

Search on the Replay Buffer: Bridging Planning and Reinforcement Learning
Benjamin Eysenbach, Ruslan Salakhutdinov, Sergey Levine

Differentiable Ranking and Sorting Using Optimal Transport
Marco Cuturi, Olivier Teboul, Jean-Philippe Vert

XLNet: Generalized Autoregressive Pretraining for Language Understanding
Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, Quoc V. Le

Private Learning Implies Online Learning: An Efficient Reduction
Alon Gonen, Elad Hazan, Shay Moran

Evaluating Protein Transfer Learning with TAPE
Roshan Rao, Nicholas Bhattacharya, Neil Thomas, Yan Duan, Peter Chen, John Canny, Pieter Abbeel, Yun Song

Tight Dimensionality Reduction for Sketching Low Degree Polynomial Kernels
Michela Meister, Tamas Sarlos, David P. Woodruff

No Pressure! Addressing the Problem of Local Minima in Manifold Learning Algorithms
Max Vladymyrov

Subspace Detours: Building Transport Plans that are Optimal on Subspace Projections
Boris Muzellec, Marco Cuturi

Online Stochastic Shortest Path with Bandit Feedback and Unknown Transition Function
Aviv Rosenberg, Yishay Mansour

Private Learning Implies Online Learning: An Efficient Reduction
Alon Gonen, Elad Hazan, Shay Moran

On the Fairness of Disentangled Representations
Francesco Locatello, Gabriele Abbati, Tom Rainforth, Stefan Bauer, Bernhard Schölkopf, Olivier Bachem

On the Transfer of Inductive Bias from Simulation to the Real World: a New Disentanglement Dataset
Muhammad Waleed Gondal, Manuel Wüthrich, Đorđe Miladinović, Francesco Locatello, Martin Breidt, Valentin Volchkov, Joel Akpo, Olivier Bachem, Bernhard Schölkopf, Stefan Bauer

Stacked Capsule Autoencoders
Adam R. Kosiorek, Sara Sabour, Yee Whye Teh, Geoffrey E. Hinton

Wasserstein Dependency Measure for Representation Learning
Sherjil Ozair, Corey Lynch, Yoshua Bengio, Aaron van den Oord, Sergey Levine, Pierre Sermanet

Sampling Sketches for Concave Sublinear Functions of Frequencies
Edith Cohen, Ofir Geri

Hamiltonian Neural Networks
Sam Greydanus, Misko Dzamba, Jason Yosinski

Evaluating Protein Transfer Learning with TAPE
Roshan Rao, Nicholas Bhattacharya, Neil Thomas, Yan Duan, Xi Chen, John Canny, Pieter Abbeel, Yun S. Song

Computational Mirrors: Blind Inverse Light Transport by Deep Matrix Factorization
Miika Aittala, Prafull Sharma, Lukas Murmann, Adam B. Yedidia, Gregory W. Wornell, William T. Freeman, Frédo Durand

Quadratic Video Interpolation
Xiangyu Xu, Li Siyao, Wenxiu Sun, Qian Yin, Ming-Hsuan Yang

Transfusion: Understanding Transfer Learning for Medical Imaging
Maithra Raghu, Chiyuan Zhang, Jon Kleinberg, Samy Bengio

XLNet: Generalized Autoregressive Pretraining for Language Understanding
Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, Quoc V. Le

Differentially Private Covariance Estimation
Kareem Amin, Travis Dick, Alex Kulesza, Andres Munoz, Sergei Vassilvitskii

Private Stochastic Convex Optimization with Optimal Rates
Raef Bassily, Vitaly Feldman, Kunal Talwar, Abhradeep Thakurta

Learning Transferable Graph Exploration
Hanjun Dai, Yujia Li, Chenglong Wang, Rishabh Singh, Po-Sen Huang, Pushmeet Kohli

Neural Attribution for Semantic Bug-Localization in Student Programs
Rahul Gupta, Aditya Kanade, Shirish Shevade

PyTorch: An Imperative Style, High-Performance Deep Learning Library
Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, Soumith Chintala

Breaking the Glass Ceiling for Embedding-Based Classifiers for Large Output Spaces
Chuan Guo, Ali Mousavi, Xiang Wu, Daniel Holtmann-Rice, Satyen Kale, Sashank Reddi, Sanjiv Kumar

Efficient Rematerialization for Deep Networks
Ravi Kumar, Manish Purohit, Zoya Svitkina, Erik Vee, Joshua R. Wang

Momentum-Based Variance Reduction in Non-Convex SGD
Ashok Cutkosky, Francesco Orabona

Kernel Truncated Randomized Ridge Regression: Optimal Rates and Low Noise Acceleration
Kwang-Sung Jun, Ashok Cutkosky, Francesco Orabona

Workshops
3rd Conversational AI: Today's Practice and Tomorrow's Potential
Organizers include: Bill Byrne

AI for Humanitarian Assistance and Disaster Response Workshop
Invited Speakers include: Yossi Matias

Bayesian Deep Learning
Organizers include: Kevin P Murphy

Beyond First Order Methods in Machine Learning Systems
Invited Speakers include: Elad Hazan

Biological and Artificial Reinforcement Learning
Invited Speakers include: Igor Mordatch

Context and Compositionality in Biological and Artificial Neural Systems
Invited Speakers include: Kenton Lee

Deep Reinforcement Learning
Organizers include: Chelsea Finn

Document Intelligence
Organizers include: Tania Bedrax Weiss

Federated Learning for Data Privacy and Confidentiality
Organizers include: Jakub Konečný, Brendan McMahan
Invited Speakers include: Françoise Beaufays, Daniel Ramage

Graph Representation Learning
Organizers include: Rianne van den Berg

Human-Centric Machine Learning
Invited Speakers include: Been Kim

Information Theory and Machine Learning
Organizers include: Ben Poole
Invited Speakers include: Alex Alemi

KR2ML - Knowledge Representation and Reasoning Meets Machine Learning
Invited Speakers include: William Cohen

Learning Meaningful Representations of Life
Organizers include: Jasper Snoek, Alexander Wiltschko

Learning Transferable Skills
Invited Speakers include: David Ha

Machine Learning for Creativity and Design
Organizers include: Adam Roberts, Jesse Engel

Machine Learning for Health (ML4H): What Makes Machine Learning in Medicine Different?
Invited Speakers include: Lily Peng, Alan Karthikesalingam, Dale Webster

Machine Learning and the Physical Sciences
Speakers include: Yasaman Bahri, Samuel Schoenholz

ML for Systems
Organizers include: Milad Hashemi, Kevin Swersky, Azalia Mirhoseini, Anna Goldie
Invited Speakers include: Jeff Dean

Optimal Transport for Machine Learning
Organizers include: Marco Cuturi

The Optimization Foundations of Reinforcement Learning
Organizers include: Bo Dai, Nicolas Le Roux, Lihong Li, Dale Schuurmans

Privacy in Machine Learning
Invited Speakers include: Brendan McMahan

Program Transformations for ML
Organizers include: Pascal Lamblin, Alexander Wiltschko, Bart van Merrienboer, Emily Fertig
Invited Speakers include: Skye Wanderman-Milne

Real Neurons & Hidden Units: Future Directions at the Intersection of Neuroscience and Artificial Intelligence
Organizers include: David Sussillo

Robot Learning: Control and Interaction in the Real World
Organizers include: Stefan Schaal

Safety and Robustness in Decision Making
Organizers include: Yinlam Chow

Science Meets Engineering of Deep Learning
Invited Speakers include: Yasaman Bahri, Surya Ganguli, Been Kim

Sets and Partitions
Organizers include: Manzil Zaheer, Andrew McCallum
Invited Speakers include: Amr Ahmed

Tackling Climate Change with ML
Organizers include: John Platt
Invited Speakers include: Jeff Dean

Visually Grounded Interaction and Language
Invited Speakers include: Jason Baldridge

Workshop on Machine Learning with Guarantees
Invited Speakers include: Mehryar Mohri

Tutorials
Representation Learning and Fairness
Organizers include: Moustapha Cisse, Sanmi Koyejo

Source: Google AI Blog


Building Skills, Building Community

Year after year, we hear from conference attendees that it's not just the content they came for, it's the connections. Meeting new people, getting new perspectives, and making new friends (and sometimes hiring them!) are a big part of KubeCon life. We want to make sure that the KubeCon community is welcoming to people from diverse backgrounds, but just being welcoming is not enough: we have to actually do the work to help people get through the door.

The easiest way to help people get through the door is through diversity scholarships. One of the biggest blockers to full participation in our community is simply not having the resources to get to the room where it happens, and a diversity scholarship (not just a ticket, but travel assistance too) helps increase participation.

1: Going Swagless

This KubeCon, we want you to take away the really important things from the conference: new knowledge and new connections... not just another pen or plastic doodad. (Although to be fair, we will also have plenty of stickers... stickers aren't swag, they're an essential part of KubeCon!)

Google prides itself on being a data-driven company, so when we need to decide where we can spend our dollars to make the most impact and do the most good for the KubeCon community, we turn to the data. We know there is an issue: the CNCF report from KubeCon Seattle 2018 put attendance at just 11% women (and that’s not even a complete diversity metric). Looking at the things conference attendees have told us they value about KubeCon, we put together this handy chart to guide our decision-making:
                          Travel + Conf Ticket Scholarship    Branded Pen
Face to face learning                    ✓
Career development                       ✓
OSS community building                   ✓
Writing tools                                                       ✓

We also need to consider externalities when we make our decisions, and going #swagless and dedicating those resources to improving the conversation and community at KubeCon has some positive externalities: less plastic (and lighter luggage going home) is better for the planet, too!

If our work to support diversity and inclusion at KubeCon has inspired you and you want to know what your org can do to participate, there is plenty of room in the #swagless tent for everyone: redirect your swag budget to D&I efforts. Shoutouts to conference organizers like SpringOne that went totally swagless this year!

2: Diversity Lunch + Hack

Our commitment to a welcoming environment and a diverse community doesn't stop at getting people in the door: we also need to work on inclusion. Our diversity lunch and hack is a place where people can:
  • Build their skills through pair programming
  • Get installation help
  • Do deep-dives on k8s topics
  • Connect with others in the community
Our diversity lunch isn't just talking about diversity: it's about working towards diversity through skill-building and creating stronger community bonds. Register here!

We welcomed 220 friends and allies in Barcelona and expect to continue the sold-out streak in San Diego (get your ticket now)!

3: Redirecting Even More

But wait, there's more! We're not just going #swagless, we're also redirecting all the hands-on workshop registration fees ($50) from Anthos Day, Anthos&GKE Lab, OSS: Agones, Knative, and Kubeflow to the diversity scholarship fund. You can build a stronger, more diverse community while you build your skills: a total twofer. (And our workshops are also walking the walk of inclusion by being accessible themselves: if you need support to attend a workshop, whether financial or physical, send us a note.)

4: Hiring

One of the best things any company can do to drive D&I is to hire people who will help the company become more diverse, whether as a consultant who helps you build your program or as a team member who brings a wider perspective to your product! Come meet a Googler at any of the activities we are doing during the week to discuss jobs at Google Cloud: g.co/Kubecon.

By: Paris Pittman, Google Open Source

DevFest 2019: It’s time for Latin America!

Posted by Mariela Altamirano, Community Manager for Latin America with Grant Timmerman, Developer Programs Engineer and Mete Atamel, Developer Advocate

DevFest season is always full of lively surprises with enchanting adventures right around the corner. Sometimes these adventures are big: attending a DevFest in the Caribbean, in the heart of the Amazon jungle, or traveling more than 3,000 meters above sea level to discover the beautiful South American highlands. Other times they are small but precious: unlocking a new way of thinking that completely shifts how you code.

October marks the beginning of our DevFest 2019 season in Latin America, where all of these experiences become a reality thanks to the efforts of our communities.

What makes DevFests in LATAM different? Our community is free spirited, eager to explore the natural landscapes we call home, proud of our deep cultural diversity, and energized by our big cities. At the same time, we are connected to the tranquil spirit of our small towns. This year, we hope to reflect this way of life through our 55 official Latin America DevFests.

During the season, Latin America will open its doors to Google Developer Experts, Women Techmakers, Googlers, and other renowned speakers, to exchange ideas on Google products such as Android, TensorFlow, Flutter, and Google Cloud Platform. Activities include hackathons, codelabs, and training sessions. This season, we will be joined by Googlers Grant Timmerman and Mete Atamel.

Grant is a Developer Programs Engineer at Google where he works on Cloud Functions, Cloud Run, and other serverless technologies on Google Cloud Platform. He loves open source, Node, and plays the alto saxophone in his spare time. During his time in Latin America, he'll be discussing all things serverless at DevFests and Cloud Summits in Chile, Argentina, Peru, Colombia, and Mexico.

Grant Timmerman, developer programs engineer
Mete Atamel, developer advocate

Mete is a Developer Advocate based in London. He focuses on helping developers with Google Cloud. At DevFest Sul in Floripa and other conferences and meetups throughout Brazil in October, he’ll be talking about serverless containers using Knative and Cloud Run. He first visited the region back in 2017, starting in Sao Paulo.

Afterwards, he went to Rio de Janeiro and immediately fell in love with the city, its friendly people, and its positive vibe. Since then, he has spoken at a number of conferences and meetups in Mexico, Colombia, Peru, Argentina, Uruguay, and Brazil, and has always been impressed by people's eagerness to learn more.

This year we will be visiting new countries such as Jamaica, Haiti, Guyana, Honduras, Venezuela and Ecuador that have created their first GDG (Google Developer Group) communities. Most of these new communities are celebrating their first DevFest! We'll also be hosting diversity and inclusion events, so keep an eye out for more details!

We thank everyone for being a part of DevFest and our community.

We hope you join us!

#DevFest

#DevFestLATAM

Find a DevFest near you at g.co/dev/fest/sa

Google at Interspeech 2019



This week, Graz, Austria hosts the 20th Annual Conference of the International Speech Communication Association (Interspeech 2019), one of the world's most extensive conferences on research and engineering for spoken language processing. Over 2,000 experts in speech-related research fields gather to take part in oral presentations and poster sessions and to collaborate via events streamed across the globe.

As a Gold Sponsor of Interspeech 2019, we are excited to present 30 research publications and to demonstrate some of the impact speech technology has made in our products, from accessible, automatic video captioning to a more robust, reliable Google Assistant. If you’re attending Interspeech 2019, we hope that you’ll stop by the Google booth to meet our researchers and discuss projects and opportunities at Google that go into solving interesting problems for billions of people. Our researchers will also be on hand to discuss Google Cloud Text-to-Speech and Speech-to-Text, demo Parrotron, and more. You can also learn more about the Google research being presented at Interspeech 2019 below (Google affiliations in blue).

Organizing Committee includes:
Michiel Bacchiani

Technical Program Committee includes:
Tara Sainath

Tutorials
Neural Machine Translation
Organizers include: Wolfgang Macherey, Yuan Cao

Accepted Publications
Building Large-Vocabulary ASR Systems for Languages Without Any Audio Training Data (link to appear soon)
Manasa Prasad, Daan van Esch, Sandy Ritchie, Jonas Fromseier Mortensen

Multi-Microphone Adaptive Noise Cancellation for Robust Hotword Detection (link to appear soon)
Yiteng Huang, Turaj Shabestary, Alexander Gruenstein, Li Wan

Direct Speech-to-Speech Translation with a Sequence-to-Sequence Model
Ye Jia, Ron Weiss, Fadi Biadsy, Wolfgang Macherey, Melvin Johnson, Zhifeng Chen, Yonghui Wu

Improving Keyword Spotting and Language Identification via Neural Architecture Search at Scale (link to appear soon)
Hanna Mazzawi, Javier Gonzalvo, Aleks Kracun, Prashant Sridhar, Niranjan Subrahmanya, Ignacio Lopez Moreno, Hyun Jin Park, Patrick Violette

Shallow-Fusion End-to-End Contextual Biasing (link to appear soon)
Ding Zhao, Tara Sainath, David Rybach, Pat Rondon, Deepti Bhatia, Bo Li, Ruoming Pang

VoiceFilter: Targeted Voice Separation by Speaker-Conditioned Spectrogram Masking
Quan Wang, Hannah Muckenhirn, Kevin Wilson, Prashant Sridhar, Zelin Wu, John Hershey, Rif Saurous, Ron Weiss, Ye Jia, Ignacio Lopez Moreno

SpecAugment: A Simple Data Augmentation Method for Automatic Speech Recognition
Daniel Park, William Chan, Yu Zhang, Chung-Cheng Chiu, Barret Zoph, Ekin Dogus Cubuk, Quoc Le

Two-Pass End-to-End Speech Recognition
Ruoming Pang, Tara Sainath, David Rybach, Yanzhang He, Rohit Prabhavalkar, Mirko Visontai, Qiao Liang, Trevor Strohman, Yonghui Wu, Ian McGraw, Chung-Cheng Chiu

On the Choice of Modeling Unit for Sequence-to-Sequence Speech Recognition
Kazuki Irie, Rohit Prabhavalkar, Anjuli Kannan, Antoine Bruguier, David Rybach, Patrick Nguyen

Contextual Recovery of Out-of-Lattice Named Entities in Automatic Speech Recognition (link to appear soon)
Jack Serrino, Leonid Velikovich, Petar Aleksic, Cyril Allauzen

Joint Speech Recognition and Speaker Diarization via Sequence Transduction
Laurent El Shafey, Hagen Soltau, Izhak Shafran

Personalizing ASR for Dysarthric and Accented Speech with Limited Data
Joel Shor, Dotan Emanuel, Oran Lang, Omry Tuval, Michael Brenner, Julie Cattiau, Fernando Vieira, Maeve McNally, Taylor Charbonneau, Melissa Nollstadt, Avinatan Hassidim, Yossi Matias

An Investigation Into On-Device Personalization of End-to-End Automatic Speech Recognition Models (link to appear soon)
Khe Chai Sim, Petr Zadrazil, Francoise Beaufays

Salient Speech Representations Based on Cloned Networks
Bastiaan Kleijn, Felicia Lim, Michael Chinen, Jan Skoglund

Cross-Lingual Consistency of Phonological Features: An Empirical Study (link to appear soon)
Cibu Johny, Alexander Gutkin, Martin Jansche

LibriTTS: A Corpus Derived from LibriSpeech for Text-to-Speech
Heiga Zen, Viet Dang, Robert Clark, Yu Zhang, Ron Weiss, Ye Jia, Zhifeng Chen, Yonghui Wu

Improving Performance of End-to-End ASR on Numeric Sequences
Cal Peyser, Hao Zhang, Tara Sainath, Zelin Wu

Developing Pronunciation Models in New Languages Faster by Exploiting Common Grapheme-to-Phoneme Correspondences Across Languages (link to appear soon)
Harry Bleyan, Sandy Ritchie, Jonas Fromseier Mortensen, Daan van Esch

Phoneme-Based Contextualization for Cross-Lingual Speech Recognition in End-to-End Models
Ke Hu, Antoine Bruguier, Tara Sainath, Rohit Prabhavalkar, Golan Pundak

Fréchet Audio Distance: A Reference-free Metric for Evaluating Music Enhancement Algorithms
Kevin Kilgour, Mauricio Zuluaga, Dominik Roblek, Matthew Sharifi

Learning to Speak Fluently in a Foreign Language: Multilingual Speech Synthesis and Cross-Language Voice Cloning
Yu Zhang, Ron Weiss, Heiga Zen, Yonghui Wu, Zhifeng Chen, RJ Skerry-Ryan, Ye Jia, Andrew Rosenberg, Bhuvana Ramabhadran

Sampling from Stochastic Finite Automata with Applications to CTC Decoding
Martin Jansche, Alexander Gutkin

Large-Scale Multilingual Speech Recognition with a Streaming End-to-End Model (link to appear soon)
Anjuli Kannan, Arindrima Datta, Tara Sainath, Eugene Weinstein, Bhuvana Ramabhadran, Yonghui Wu, Ankur Bapna, Zhifeng Chen, SeungJi Lee

A Real-Time Wideband Neural Vocoder at 1.6 kb/s Using LPCNet
Jean-Marc Valin, Jan Skoglund

Low-Dimensional Bottleneck Features for On-Device Continuous Speech Recognition
David Ramsay, Kevin Kilgour, Dominik Roblek, Matthew Sharifi

Unified Verbalization for Speech Recognition & Synthesis Across Languages (link to appear soon)
Sandy Ritchie, Richard Sproat, Kyle Gorman, Daan van Esch, Christian Schallhart, Nikos Bampounis, Benoit Brard, Jonas Mortensen, Amelia Holt, Eoin Mahon

Better Morphology Prediction for Better Speech Systems (link to appear soon)
Dravyansh Sharma, Melissa Wilson, Antoine Bruguier

Dual Encoder Classifier Models as Constraints in Neural Text Normalization
Ajda Gokcen, Hao Zhang, Richard Sproat

Large-Scale Visual Speech Recognition
Brendan Shillingford, Yannis Assael, Matthew Hoffman, Thomas Paine, Cían Hughes, Utsav Prabhu, Hank Liao, Hasim Sak, Kanishka Rao, Lorrayne Bennett, Marie Mulville, Ben Coppin, Ben Laurie, Andrew Senior, Nando de Freitas

Parrotron: An End-to-End Speech-to-Speech Conversion Model and its Applications to Hearing-Impaired Speech and Speech Separation
Fadi Biadsy, Ron Weiss, Pedro Moreno, Dimitri Kanevsky, Ye Jia




Source: Google AI Blog


Google at NeurIPS 2018



This week, Montréal hosts the 32nd annual Conference on Neural Information Processing Systems (NeurIPS 2018), the biggest machine learning conference of the year. The conference includes invited talks, demonstrations and presentations of some of the latest in machine learning research. Google will have a strong presence at NeurIPS 2018, with more than 400 Googlers attending in order to contribute to, and learn from, the broader academic research community via talks, posters, workshops, competitions and tutorials. We will be presenting work that pushes the boundaries of what is possible in language understanding, translation, speech recognition and visual & audio perception, with Googlers co-authoring nearly 100 accepted papers (see below).

At the forefront of machine learning, Google is actively exploring virtually all aspects of the field, spanning both theory and applications. This research is often inspired by real product needs but is increasingly driven by scientific curiosity. Given the range of research projects that we pursue, we have found it useful to define a new framework which helps crystallize the goals of projects and allows us to measure progress and success in appropriate ways. Our contributions to NeurIPS and to the broader research community in general are integral to our research mission.

If you are attending NeurIPS 2018, we hope you’ll stop by our booth and chat with our researchers about the projects and opportunities at Google that go into solving the world's most challenging research problems, and see demonstrations of some of the exciting research we pursue. You can also learn more about our work being presented in the list below (Googlers highlighted in blue).

Google is a Platinum Sponsor of NeurIPS 2018.

NeurIPS Foundation Board
Corinna Cortes, John C. Platt, Fernando Pereira

NeurIPS Organizing Committee
General Chair: Samy Bengio
Program Co-Chair: Hugo Larochelle
Party Chair: Douglas Eck
Diversity and Inclusion Co-Chair: Katherine A. Heller

NeurIPS Program Committee
Senior Area Chairs include: Angela Yu, Claudio Gentile, Cordelia Schmid, Corinna Cortes, Csaba Szepesvari, Dale Schuurmans, Elad Hazan, Mehryar Mohri, Raia Hadsell, Satyen Kale, Yishay Mansour, Afshin Rostamizadeh, Alex Kulesza

Area Chairs include: Amin Karbasi, Amir Globerson, Amit Daniely, Andras Gyorgy, Andriy Mnih, Been Kim, Branislav Kveton, Ce Liu, D Sculley, Danilo Rezende, Danny Tarlow, David Balduzzi, Denny Zhou, Dilan Gorur, Dumitru Erhan, George Dahl, Graham Taylor, Ian Goodfellow, Jasper Snoek, Jean-Philippe Vert, Jia Deng, Jon Shlens, Karen Simonyan, Kevin Swersky, Kun Zhang, Lihong Li, Marc G. Bellemare, Marco Cuturi, Maya Gupta, Michael Bowling, Michalis Titsias, Mohammad Norouzi, Mouhamadou Moustapha Cisse, Nicolas Le Roux, Remi Munos, Sanjiv Kumar, Sanmi Koyejo, Sergey Levine, Silvia Chiappa, Slav Petrov, Surya Ganguli, Timnit Gebru, Timothy Lillicrap, Viren Jain, Vitaly Feldman, Vitaly Kuznetsov

Workshops Program Committee includes: Mehryar Mohri, Sergey Levine

Accepted Papers
3D-Aware Scene Manipulation via Inverse Graphics
Shunyu Yao, Tzu Ming Harry Hsu, Jun-Yan Zhu, Jiajun Wu, Antonio Torralba, William T. Freeman, Joshua B. Tenenbaum

A Retrieve-and-Edit Framework for Predicting Structured Outputs
Tatsunori Hashimoto, Kelvin Guu, Yonatan Oren, Percy Liang

Adversarial Attacks on Stochastic Bandits
Kwang-Sung Jun, Lihong Li, Yuzhe Ma, Xiaojin Zhu

Adversarial Examples that Fool both Computer Vision and Time-Limited Humans
Gamaleldin F. Elsayed, Shreya Shankar, Brian Cheung, Nicolas Papernot, Alex Kurakin, Ian Goodfellow, Jascha Sohl-Dickstein

Adversarially Robust Generalization Requires More Data
Ludwig Schmidt, Shibani Santurkar, Dimitris Tsipras, Kunal Talwar, Aleksander Madry

Are GANs Created Equal? A Large-Scale Study
Mario Lucic, Karol Kurach, Marcin Michalski, Olivier Bousquet, Sylvain Gelly

Collaborative Learning for Deep Neural Networks
Guocong Song, Wei Chai

Completing State Representations using Spectral Learning
Nan Jiang, Alex Kulesza, Satinder Singh

Content Preserving Text Generation with Attribute Controls
Lajanugen Logeswaran, Honglak Lee, Samy Bengio

Context-aware Synthesis and Placement of Object Instances
Donghoon Lee, Sifei Liu, Jinwei Gu, Ming-Yu Liu, Ming-Hsuan Yang, Jan Kautz

Co-regularized Alignment for Unsupervised Domain Adaptation
Abhishek Kumar, Prasanna Sattigeri, Kahini Wadhawan, Leonid Karlinsky, Rogerio Feris, William T. Freeman, Gregory Wornell

cpSGD: Communication-efficient and differentially-private distributed SGD
Naman Agarwal, Ananda Theertha Suresh, Felix Yu, Sanjiv Kumar, H. Brendan McMahan

Data Center Cooling Using Model-Predictive Control
Nevena Lazic, Craig Boutilier, Tyler Lu, Eehern Wong, Binz Roy, MK Ryu, Greg Imwalle

Data-Efficient Hierarchical Reinforcement Learning
Ofir Nachum, Shixiang Gu, Honglak Lee, Sergey Levine

Deep Attentive Tracking via Reciprocative Learning
Shi Pu, Yibing Song, Chao Ma, Honggang Zhang, Ming-Hsuan Yang

Generalizing Point Embeddings Using the Wasserstein Space of Elliptical Distributions
Boris Muzellec, Marco Cuturi

GLoMo: Unsupervised Learning of Transferable Relational Graphs
Zhilin Yang, Jake (Junbo) Zhao, Bhuwan Dhingra, Kaiming He, William W. Cohen, Ruslan Salakhutdinov, Yann LeCun

GroupReduce: Block-Wise Low-Rank Approximation for Neural Language Model Shrinking
Patrick Chen, Si Si, Yang Li, Ciprian Chelba, Cho-Jui Hsieh

Interpreting Neural Network Judgments via Minimal, Stable, and Symbolic Corrections
Xin Zhang, Armando Solar-Lezama, Rishabh Singh

Learning Hierarchical Semantic Image Manipulation through Structured Representations
Seunghoon Hong, Xinchen Yan, Thomas Huang, Honglak Lee

Learning Temporal Point Processes via Reinforcement Learning
Shuang Li, Shuai Xiao, Shixiang Zhu, Nan Du, Yao Xie, Le Song

Learning Towards Minimum Hyperspherical Energy
Weiyang Liu, Rongmei Lin, Zhen Liu, Lixin Liu, Zhiding Yu, Bo Dai, Le Song

Mesh-TensorFlow: Deep Learning for Supercomputers
Noam Shazeer, Youlong Cheng, Niki Parmar, Dustin Tran, Ashish Vaswani, Penporn Koanantakool, Peter Hawkins, HyoukJoong Lee, Mingsheng Hong, Cliff Young, Ryan Sepassi, Blake Hechtman

MiME: Multilevel Medical Embedding of Electronic Health Records for Predictive Healthcare
Edward Choi, Cao Xiao, Walter F. Stewart, Jimeng Sun

Searching for Efficient Multi-Scale Architectures for Dense Image Prediction
Liang-Chieh Chen, Maxwell D. Collins, Yukun Zhu, George Papandreou, Barret Zoph, Florian Schroff, Hartwig Adam, Jonathon Shlens

SplineNets: Continuous Neural Decision Graphs
Cem Keskin, Shahram Izadi

Task-Driven Convolutional Recurrent Models of the Visual System
Aran Nayebi, Daniel Bear, Jonas Kubilius, Kohitij Kar, Surya Ganguli, David Sussillo, James J. DiCarlo, Daniel L. K. Yamins

To Trust or Not to Trust a Classifier
Heinrich Jiang, Been Kim, Melody Guan, Maya Gupta

Transfer Learning from Speaker Verification to Multispeaker Text-To-Speech Synthesis
Ye Jia, Yu Zhang, Ron J. Weiss, Quan Wang, Jonathan Shen, Fei Ren, Zhifeng Chen, Patrick Nguyen, Ruoming Pang, Ignacio Lopez Moreno, Yonghui Wu

Algorithms and Theory for Multiple-Source Adaptation
Judy Hoffman, Mehryar Mohri, Ningshan Zhang

A Lyapunov-based Approach to Safe Reinforcement Learning
Yinlam Chow, Ofir Nachum, Edgar Duenez-Guzman, Mohammad Ghavamzadeh

Adaptive Methods for Nonconvex Optimization
Manzil Zaheer, Sashank Reddi, Devendra Sachan, Satyen Kale, Sanjiv Kumar

Assessing Generative Models via Precision and Recall
Mehdi S. M. Sajjadi, Olivier Bachem, Mario Lucic, Olivier Bousquet, Sylvain Gelly

A Loss Framework for Calibrated Anomaly Detection
Aditya Menon, Robert Williamson

Blockwise Parallel Decoding for Deep Autoregressive Models
Mitchell Stern, Noam Shazeer, Jakob Uszkoreit

Breaking the Curse of Horizon: Infinite-Horizon Off-Policy Estimation
Qiang Liu, Lihong Li, Ziyang Tang, Dengyong Zhou

Contextual Pricing for Lipschitz Buyers
Jieming Mao, Renato Leme, Jon Schneider

Coupled Variational Bayes via Optimization Embedding
Bo Dai, Hanjun Dai, Niao He, Weiyang Liu, Zhen Liu, Jianshu Chen, Lin Xiao, Le Song

Data Amplification: A Unified and Competitive Approach to Property Estimation
Yi Hao, Alon Orlitsky, Ananda Theertha Suresh, Yihong Wu

Deep Network for the Integrated 3D Sensing of Multiple People in Natural Images
Elisabeta Marinoiu, Mihai Zanfir, Alin-Ionut Popa, Cristian Sminchisescu

Deep Non-Blind Deconvolution via Generalized Low-Rank Approximation
Wenqi Ren, Jiawei Zhang, Lin Ma, Jinshan Pan, Xiaochun Cao, Wei Liu, Ming-Hsuan Yang

Diminishing Returns Shape Constraints for Interpretability and Regularization
Maya Gupta, Dara Bahri, Andrew Cotter, Kevin Canini

DropBlock: A Regularization Method for Convolutional Networks
Golnaz Ghiasi, Tsung-Yi Lin, Quoc V. Le

Generalization Bounds for Uniformly Stable Algorithms
Vitaly Feldman, Jan Vondrak

Geometrically Coupled Monte Carlo Sampling
Mark Rowland, Krzysztof Choromanski, Francois Chalus, Aldo Pacchiano, Tamas Sarlos, Richard E. Turner, Adrian Weller

GILBO: One Metric to Measure Them All
Alexander A. Alemi, Ian Fischer

Insights on Representational Similarity in Neural Networks with Canonical Correlation
Ari S. Morcos, Maithra Raghu, Samy Bengio

Improving Online Algorithms via ML Predictions
Manish Purohit, Zoya Svitkina, Ravi Kumar

Learning to Exploit Stability for 3D Scene Parsing
Yilun Du, Zhijian Liu, Hector Basevi, Ales Leonardis, William T. Freeman, Josh Tenenbaum, Jiajun Wu

Maximizing Induced Cardinality Under a Determinantal Point Process
Jennifer Gillenwater, Alex Kulesza, Sergei Vassilvitskii, Zelda Mariet

Memory Augmented Policy Optimization for Program Synthesis and Semantic Parsing
Chen Liang, Mohammad Norouzi, Jonathan Berant, Quoc V. Le, Ni Lao

PCA of High Dimensional Random Walks with Comparison to Neural Network Training
Joseph M. Antognini, Jascha Sohl-Dickstein

Predictive Approximate Bayesian Computation via Saddle Points
Yingxiang Yang, Bo Dai, Negar Kiyavash, Niao He

Recurrent World Models Facilitate Policy Evolution
David Ha, Jürgen Schmidhuber

Sanity Checks for Saliency Maps
Julius Adebayo, Justin Gilmer, Michael Muelly, Ian Goodfellow, Moritz Hardt, Been Kim

Simple, Distributed, and Accelerated Probabilistic Programming
Dustin Tran, Matthew Hoffman, Dave Moore, Christopher Suter, Srinivas Vasudevan, Alexey Radul, Matthew Johnson, Rif A. Saurous

Tangent: Automatic Differentiation Using Source-Code Transformation for Dynamically Typed Array Programming
Bart van Merriënboer, Dan Moldovan, Alex Wiltschko

The Emergence of Multiple Retinal Cell Types Through Efficient Coding of Natural Movies
Samuel A. Ocko, Jack Lindsey, Surya Ganguli, Stephane Deny

The Everlasting Database: Statistical Validity at a Fair Price
Blake Woodworth, Vitaly Feldman, Saharon Rosset, Nathan Srebro

The Spectrum of the Fisher Information Matrix of a Single-Hidden-Layer Neural Network
Jeffrey Pennington, Pratik Worah

A Simple Unified Framework for Detecting Out-of-Distribution Samples and Adversarial Attacks
Kimin Lee, Kibok Lee, Honglak Lee, Jinwoo Shin

Autoconj: Recognizing and Exploiting Conjugacy Without a Domain-Specific Language
Matthew D. Hoffman, Matthew Johnson, Dustin Tran

A Bayesian Nonparametric View on Count-Min Sketch
Diana Cai, Michael Mitzenmacher, Ryan Adams (no longer at Google)

Automatic Differentiation in ML: Where We are and Where We Should be Going
Bart van Merriënboer, Olivier Breuleux, Arnaud Bergeron, Pascal Lamblin

Assessing the Scalability of Biologically-Motivated Deep Learning Algorithms and Architectures
Sergey Bartunov, Adam Santoro, Blake A. Richards, Geoffrey E. Hinton, Timothy P. Lillicrap

Deep Generative Models for Distribution-Preserving Lossy Compression
Michael Tschannen, Eirikur Agustsson, Mario Lucic

Deep Structured Prediction with Nonlinear Output Transformations
Colin Graber, Ofer Meshi, Alexander Schwing

Discovery of Latent 3D Keypoints via End-to-end Geometric Reasoning
Supasorn Suwajanakorn, Noah Snavely, Jonathan Tompson, Mohammad Norouzi

Transfer Learning with Neural AutoML
Catherine Wong, Neil Houlsby, Yifeng Lu, Andrea Gesmundo

Efficient Gradient Computation for Structured Output Learning with Rational and Tropical Losses
Corinna Cortes, Vitaly Kuznetsov, Mehryar Mohri, Dmitry Storcheus, Scott Yang

Cooperative neural networks (CoNN): Exploiting prior independence structure for improved classification
Harsh Shrivastava, Eugene Bart, Bob Price, Hanjun Dai, Bo Dai, Srinivas Aluru

Graph Oracle Models, Lower Bounds, and Gaps for Parallel Stochastic Optimization
Blake Woodworth, Jialei Wang, Brendan McMahan, Nathan Srebro

Hierarchical Reinforcement Learning for Zero-shot Generalization with Subtask Dependencies
Sungryull Sohn, Junhyuk Oh, Honglak Lee

Human-in-the-Loop Interpretability Prior
Isaac Lage, Andrew Slavin Ross, Been Kim, Samuel J. Gershman, Finale Doshi-Velez

Joint Autoregressive and Hierarchical Priors for Learned Image Compression
David Minnen, Johannes Ballé, George D Toderici

Large-Scale Computation of Means and Clusters for Persistence Diagrams Using Optimal Transport
Théo Lacombe, Steve Oudot, Marco Cuturi

Learning to Reconstruct Shapes from Unseen Classes
Xiuming Zhang, Zhoutong Zhang, Chengkai Zhang, Joshua B. Tenenbaum, William T. Freeman, Jiajun Wu

Large Margin Deep Networks for Classification
Gamaleldin Fathy Elsayed, Dilip Krishnan, Hossein Mobahi, Kevin Regan, Samy Bengio

Mallows Models for Top-k Lists
Flavio Chierichetti, Anirban Dasgupta, Shahrzad Haddadan, Ravi Kumar, Silvio Lattanzi

Meta-Learning MCMC Proposals
Tongzhou Wang, Yi Wu, Dave Moore, Stuart Russell

Non-delusional Q-Learning and Value-Iteration
Tyler Lu, Dale Schuurmans, Craig Boutilier

Online Learning of Quantum States
Scott Aaronson, Xinyi Chen, Elad Hazan, Satyen Kale, Ashwin Nayak

Online Reciprocal Recommendation with Theoretical Performance Guarantees
Fabio Vitale, Nikos Parotsidis, Claudio Gentile

Optimal Algorithms for Continuous Non-monotone Submodular and DR-Submodular Maximization
Rad Niazadeh, Tim Roughgarden, Joshua R. Wang

Policy Regret in Repeated Games
Raman Arora, Michael Dinitz, Teodor Vanislavov Marinov, Mehryar Mohri

Provable Variational Inference for Constrained Log-Submodular Models
Josip Djolonga, Stefanie Jegelka, Andreas Krause

Realistic Evaluation of Deep Semi-Supervised Learning Algorithms
Avital Oliver, Augustus Odena, Colin Raffel, Ekin D. Cubuk, Ian J. Goodfellow

Sample-Efficient Reinforcement Learning with Stochastic Ensemble Value Expansion
Jacob Buckman, Danijar Hafner, George Tucker, Eugene Brevdo, Honglak Lee

Visual Object Networks: Image Generation with Disentangled 3D Representations
JunYan Zhu, Zhoutong Zhang, Chengkai Zhang, Jiajun Wu, Antonio Torralba, Josh Tenenbaum, William T. Freeman

Watch Your Step: Learning Node Embeddings via Graph Attention
Sami Abu-El-Haija, Bryan Perozzi, Rami AlRfou, Alexander Alemi

Workshops
2nd Workshop on Machine Learning on the Phone and Other Consumer Devices
Co-Chairs include: Sujith Ravi, Wei Chai, Hrishikesh Aradhye

Bayesian Deep Learning
Workshop Organizers include: Kevin Murphy

Continual Learning
Workshop Organizers include: Marc Pickett

The Second Conversational AI Workshop – Today's Practice and Tomorrow's Potential
Workshop Organizers include: Dilek Hakkani-Tur

Visually Grounded Interaction and Language
Workshop Organizers include: Olivier Pietquin

Workshop on Ethical, Social and Governance Issues in AI
Workshop Organizers include: D. Sculley

AI for Social Good
Workshop Program Committee includes: Samuel Greydanus

Black in AI
Workshop Organizers: Mouhamadou Moustapha Cisse, Timnit Gebru
Program Committee: Irwan Bello, Samy Bengio, Ian Goodfellow, Hugo Larochelle, Margaret Mitchell

Interpretability and Robustness in Audio, Speech, and Language
Workshop Organizers include: Ehsan Variani, Bhuvana Ramabhadran

LatinX in AI
Workshop Organizers include: Pablo Samuel Castro
Program Committee includes: Sergio Guadarrama

Machine Learning for Systems
Workshop Organizers include: Anna Goldie, Azalia Mirhoseini, Kevin Swersky, Milad Hashemi
Program Committee includes: Simon Kornblith, Nicholas Frosst, Amir Yazdanbakhsh, Azade Nazi, James Bradbury, Sharan Narang, Martin Maas, Carlos Villavieja

Queer in AI
Workshop Organizers include: Raphael Gontijo Lopes

Second Workshop on Machine Learning for Creativity and Design
Workshop Organizers include: Jesse Engel, Adam Roberts

Workshop on Security in Machine Learning
Workshop Organizers include: Nicolas Papernot

Tutorial
Visualization for Machine Learning
Fernanda Viégas, Martin Wattenberg

Source: Google AI Blog


Google at EMNLP 2018



This week, the annual conference on Empirical Methods in Natural Language Processing (EMNLP 2018) will be held in Brussels, Belgium. Google will have a strong presence at EMNLP, with several of our researchers presenting their research on a diverse set of topics, including language identification, segmentation, semantic parsing and question answering, and additionally serving at various levels of organization in the conference. Googlers will also be presenting their papers and participating in the co-located Conference on Computational Natural Language Learning (CoNLL 2018) shared task on multilingual parsing.

In addition to this involvement, we are sharing several new datasets with the academic community that are released with papers published at EMNLP, with the goal of accelerating progress in empirical natural language processing (NLP). These releases are designed to help account for mismatches between the datasets a machine learning model is trained and tested on, and the inputs an NLP system would be asked to handle “in the wild”. All of the datasets we are releasing include realistic, naturally occurring text, and fall into two main categories: 1) challenge sets for well-studied core NLP tasks (part-of-speech tagging, coreference) and 2) datasets to encourage new directions of research on meaning preservation under rephrasings/edits (query well-formedness, split-and-rephrase, atomic edits):
  • Noun-Verb Ambiguity in POS Tagging Dataset: English part-of-speech taggers regularly make egregious errors related to noun-verb ambiguity, despite high accuracies on standard datasets. For example: in “Mark which area you want to distress” several state-of-the-art taggers annotate “Mark” as a noun instead of a verb. We release a new dataset of over 30,000 naturally occurring non-trivial annotated examples of noun-verb ambiguity. Taggers previously indistinguishable from each other have accuracies ranging from 57% to 75% on this challenge set (a sketch of how such an evaluation might look follows this list).
  • Query Wellformedness Dataset: Web search queries are usually “word-salad” style queries with little resemblance to natural language questions (“barack obama height” as opposed to “What is the height of Barack Obama?”). Differentiating a natural language question from a query is of importance to several applications, including dialogue. We annotate and release 25,100 queries from the open-source Paralex corpus with ratings on how close they are to well-formed natural language questions.
  • WikiSplit: Split and Rephrase Dataset Extracted from Wikipedia Edits: We extract examples of sentence splits from Wikipedia edits where one sentence gets split into two sentences that together preserve the original meaning (e.g., “Street Rod is the first in a series of two games released for the PC and Commodore 64 in 1989.” is split into “Street Rod is the first in a series of two games.” and “It was released for the PC and Commodore 64 in 1989.”). The released corpus contains one million sentence splits with a vocabulary of more than 600,000 words.
  • WikiAtomicEdits: A Multilingual Corpus of Atomic Wikipedia Edits: Information about how people edit language in Wikipedia can be used to understand the structure of language itself. We pay particular attention to two atomic edits: insertions and deletions that consist of a single contiguous span of text. We extract around 43 million such edits in 8 languages and show that they provide valuable information about entailment and discourse. For example, insertion of “in 1949” adds a prepositional phrase to the sentence “She died there after a long illness” resulting in “She died there in 1949 after a long illness”.
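To make the challenge-set framing concrete, here is a minimal evaluation sketch, not the released evaluation code, showing how one might score a part-of-speech tagger on a noun-verb challenge set. The TSV layout and the tagger callable are illustrative assumptions; the actual file format is documented with the dataset release.

# Minimal sketch: scoring a POS tagger on a noun-verb challenge set.
# Assumed (hypothetical) TSV layout: sentence <TAB> token index <TAB> gold tag
# (NOUN or VERB). Check the released data for the real format.
import csv
from typing import Callable, List, Tuple

def challenge_accuracy(tsv_path: str,
                       tagger: Callable[[str], List[Tuple[str, str]]]) -> float:
    # `tagger(sentence)` is assumed to return a list of (token, tag) pairs.
    correct = total = 0
    with open(tsv_path, newline="", encoding="utf-8") as f:
        for sentence, index, gold in csv.reader(f, delimiter="\t"):
            predicted_tag = tagger(sentence)[int(index)][1]
            correct += int(predicted_tag == gold)
            total += 1
    return correct / total if total else 0.0
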
These datasets join the others that Google has recently released, such as Conceptual Captions and GAP Coreference Resolution, in addition to our past contributions.

Below is a full list of Google’s involvement and publications being presented at EMNLP and CoNLL (Googlers highlighted in blue). We are particularly happy to announce that the paper “Linguistically-Informed Self-Attention for Semantic Role Labeling” was awarded one of the two Best Long Paper awards. This work was done by our 2017 intern Emma Strubell, Googlers Daniel Andor, David Weiss and Google Faculty Advisor Andrew McCallum. We congratulate these authors, and all other researchers who are presenting their work at the conference.

Area Chairs Include:
Ming-Wei Chang, Marius Pasca, Slav Petrov, Emily Pitler, Meg Mitchell, Taro Watanabe

EMNLP Publications
A Challenge Set and Methods for Noun-Verb Ambiguity
Ali Elkahky, Kellie Webster, Daniel Andor, Emily Pitler

A Fast, Compact, Accurate Model for Language Identification of Codemixed Text
Yuan Zhang, Jason Riesa, Daniel Gillick, Anton Bakalov, Jason Baldridge, David Weiss

AirDialogue: An Environment for Goal-Oriented Dialogue Research
Wei Wei, Quoc Le, Andrew Dai, Jia Li

Content Explorer: Recommending Novel Entities for a Document Writer
Michal Lukasik, Richard Zens

Deep Relevance Ranking using Enhanced Document-Query Interactions
Ryan McDonald, George Brokos, Ion Androutsopoulos

HotpotQA: A Dataset for Diverse, Explainable Multi-hop Question Answering
Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William Cohen, Ruslan Salakhutdinov, Christopher D. Manning

Identifying Well-formed Natural Language Questions
Manaal Faruqui, Dipanjan Das

Learning To Split and Rephrase From Wikipedia Edit History
Jan A. Botha, Manaal Faruqui, John Alex, Jason Baldridge, Dipanjan Das

Linguistically-Informed Self-Attention for Semantic Role Labeling
Emma Strubell, Patrick Verga, Daniel Andor, David Weiss, Andrew McCallum

Open Domain Question Answering Using Early Fusion of Knowledge Bases and Text
Haitian Sun, Bhuwan Dhingra, Manzil Zaheer, Kathryn Mazaitis, Ruslan Salakhutdinov, William Cohen

Noise Contrastive Estimation for Conditional Models: Consistency and Statistical Efficiency
Zhuang Ma, Michael Collins

Part-of-Speech Tagging for Code-Switched, Transliterated Texts without Explicit Language Identification
Kelsey Ball, Dan Garrette

Phrase-Indexed Question Answering: A New Challenge for Scalable Document Comprehension
Minjoon Seo, Tom Kwiatkowski, Ankur P. Parikh, Ali Farhadi, Hannaneh Hajishirzi

Policy Shaping and Generalized Update Equations for Semantic Parsing from Denotations
Dipendra Misra, Ming-Wei Chang, Xiaodong He, Wen-tau Yih

Revisiting Character-Based Neural Machine Translation with Capacity and Compression
Colin Cherry, George Foster, Ankur Bapna, Orhan Firat, Wolfgang Macherey

Self-governing neural networks for on-device short text classification
Sujith Ravi, Zornitsa Kozareva

Semi-Supervised Sequence Modeling with Cross-View Training
Kevin Clark, Minh-Thang Luong, Christopher D. Manning, Quoc Le

State-of-the-art Chinese Word Segmentation with Bi-LSTMs
Ji Ma, Kuzman Ganchev, David Weiss

Subgoal Discovery for Hierarchical Dialogue Policy Learning
Da Tang, Xiujun Li, Jianfeng Gao, Chong Wang, Lihong Li, Tony Jebara

SwitchOut: an Efficient Data Augmentation Algorithm for Neural Machine Translation
Xinyi Wang, Hieu Pham, Zihang Dai, Graham Neubig

The Importance of Generation Order in Language Modeling
Nicolas Ford, Daniel Duckworth, Mohammad Norouzi, George Dahl

Training Deeper Neural Machine Translation Models with Transparent Attention
Ankur Bapna, Mia Chen, Orhan Firat, Yuan Cao, Yonghui Wu

Understanding Back-Translation at Scale
Sergey Edunov, Myle Ott, Michael Auli, David Grangier

Unsupervised Natural Language Generation with Denoising Autoencoders
Markus Freitag, Scott Roy

WikiAtomicEdits: A Multilingual Corpus of Wikipedia Edits for Modeling Language and Discourse
Manaal Faruqui, Ellie Pavlick, Ian Tenney, Dipanjan Das

WikiConv: A Corpus of the Complete Conversational History of a Large Online Collaborative Community
Yiqing Hua, Cristian Danescu-Niculescu-Mizil, Dario Taraborelli, Nithum Thain, Jeffery Sorensen, Lucas Dixon

EMNLP Demos
SentencePiece: A simple and language independent subword tokenizer and detokenizer for Neural Text Processing
Taku Kudo, John Richardson

Universal Sentence Encoder for English
Daniel Cer, Yinfei Yang, Sheng-yi Kong, Nan Hua, Nicole Limtiaco, Rhomni St. John, Noah Constant, Mario Guajardo-Cespedes, Steve Yuan, Chris Tar, Brian Strope, Ray Kurzweil

CoNLL Shared Task
Multilingual Parsing from Raw Text to Universal Dependencies
Slav Petrov, co-organizer

Universal Dependency Parsing with Multi-Treebank Models
Aaron Smith, Bernd Bohnet, Miryam de Lhoneux, Joakim Nivre, Yan Shao, Sara Stymne
(Winner of the Universal POS Tagging and Morphological Tagging subtasks, using the open-sourced Meta-BiLSTM tagger)

CoNLL Publication
Sentence-Level Fluency Evaluation: References Help, But Can Be Spared!
Katharina Kann, Sascha Rothe, Katja Filippova

Source: Google AI Blog


Googlers on the road: CLS and OSCON 2018

Next week a veritable who’s who of free and open source software luminaries, maintainers and developers will gather to celebrate the 20th annual OSCON and the 20th anniversary of the Open Source Definition. Naturally, the Google Open Source and Google Cloud teams will be there too!

Program chairs at OSCON 2017, left to right:
Rachel Roumeliotis, Kelsey Hightower, Scott Hanselman.
Photo used with permission from O'Reilly Media.
This year OSCON returns to Portland, Oregon and runs from July 16-19. As usual, it is preceded by the free-to-attend Community Leadership Summit on July 14-15.

If you’re curious about our outreach programs, our approach to open source, or any of the open source projects we’ve released, please find us! We’re eager to chat. You’ll find us and many other Googlers throughout the week on stage, in the expo hall, and at several special events that we’re running.
Here’s a rundown of the sessions we’re hosting this year:

Sunday, July 15th (Community Leadership Summit)

11:45am   Asking for time and/or money by Cat Allman

Monday, July 16th (Tutorials)

9:00am    Getting started with TensorFlow by Josh Gordon
1:30pm    Introduction to natural language processing with Python by Barbara Fusinska

Tuesday, July 17th (Tutorials)

9:00am    Istio Day opening remarks by Kelsey Hightower
9:00am    TensorFlow Day opening remarks by Edd Wilder-James
9:05am    Sailing to 1.0: Istio community update by April Nassi
9:05am    The state of TensorFlow by Sandeep Gupta
9:30am    Introduction to fairness in machine learning by Hallie Benjamin
9:55am    Farm to table: A TensorFlow story by Gunhan Gulsoy
11:00am  Hassle-free, scalable machine learning with Kubeflow by Barbara Fusinska
11:05am  Istio: Zero-trust communication security for production services by Samrat Ray, Tao Li, and Mak Ahmad
12:00pm  Project Magenta: Machine learning for music and art by Sherol Chen
1:35pm    Istio à la carte by Daniel Ciruli

Wednesday, July 18th (Sessions)

9:00am    Wednesday opening welcome by Kelsey Hightower
11:50am  Machine learning for continuous integration by Joseph Gregorio
1:45pm    Live-coding a beautiful, performant mobile app from scratch by Emily Fortuna and Matt Sullivan
2:35pm    Powering TensorFlow with big data using Apache Beam, Flink, and Spark by Holden Karau
5:25pm    Teaching the Next Generation to FLOSS by Josh Simmons

Thursday, July 19th (Sessions)

9:00am    Thursday opening welcome by Kelsey Hightower
9:40am    20 years later, open source is as important as ever by Sarah Novotny
11:50am  Google’s approach to distributed systems observability by Jaana B. Dogan
2:35pm    gRPC versus REST: Let the battle begin with Alex Borysov
5:05pm    Shenzhen Go: A visual Go environment for everybody, even professionals by Josh Deprez

We look forward to seeing you and the rest of the community there!

By Josh Simmons, Google Open Source

Google at ICML 2018



Machine learning is a key strategic focus at Google, with highly active groups pursuing research in virtually all aspects of the field, including deep learning and more classical algorithms, exploring theory as well as application. We utilize scalable tools and architectures to build machine learning systems that enable us to solve deep scientific and engineering challenges in areas of language, speech, translation, music, visual processing and more.

As a leader in machine learning research, Google is proud to be a Platinum Sponsor of the thirty-fifth International Conference on Machine Learning (ICML 2018), a premier annual event supported by the International Machine Learning Society taking place this week in Stockholm, Sweden. With over 130 Googlers attending the conference to present publications and host workshops, we look forward to our continued collaboration with the larger ML research community.

If you're attending ICML 2018, we hope you'll visit the Google booth and talk with our researchers to learn more about the exciting work, creativity and fun that goes into solving some of the field's most interesting challenges. Our researchers will also be available to talk about TensorFlow Hub, the latest work from the Magenta project, a Q&A session on the Google AI Residency program and much more. You can also learn more about our research being presented at ICML 2018 in the list below (Googlers highlighted in blue).

ICML 2018 Committees
Board Members include: Andrew McCallum, Corinna Cortes, Hugo Larochelle, William Cohen
Sponsorship Co-Chair: Ryan Adams

Accepted Publications
Predict and Constrain: Modeling Cardinality in Deep Structured Prediction
Nataly Brukhim, Amir Globerson

Quickshift++: Provably Good Initializations for Sample-Based Mean Shift
Heinrich Jiang, Jennifer Jang, Samory Kpotufe

Learning a Mixture of Two Multinomial Logits
Flavio Chierichetti, Ravi Kumar, Andrew Tomkins

Structured Evolution with Compact Architectures for Scalable Policy Optimization
Krzysztof Choromanski, Mark Rowland, Vikas Sindhwani, Richard E Turner, Adrian Weller

Fixing a Broken ELBO
Alexander Alemi, Ben Poole, Ian Fischer, Joshua Dillon, Rif Saurous, Kevin Murphy

Hierarchical Long-term Video Prediction without Supervision
Nevan Wichers, Ruben Villegas, Dumitru Erhan, Honglak Lee

Self-Consistent Trajectory Autoencoder: Hierarchical Reinforcement Learning with Trajectory Embeddings
John Co-Reyes, Yu Xuan Liu, Abhishek Gupta, Benjamin Eysenbach, Pieter Abbeel, Sergey Levine

Well Tempered Lasso
Yuanzhi Li, Yoram Singer

Programmatically Interpretable Reinforcement Learning
Abhinav Verma, Vijayaraghavan Murali, Rishabh Singh, Pushmeet Kohli, Swarat Chaudhuri

Dynamical Isometry and a Mean Field Theory of CNNs: How to Train 10,000-Layer Vanilla Convolutional Neural Networks
Lechao Xiao, Yasaman Bahri, Jascha Sohl-Dickstein, Samuel Schoenholz, Jeffrey Pennington

On the Optimization of Deep Networks: Implicit Acceleration by Overparameterization
Sanjeev Arora, Nadav Cohen, Elad Hazan

Scalable Deletion-Robust Submodular Maximization: Data Summarization with Privacy and Fairness Constraints
Ehsan Kazemi, Morteza Zadimoghaddam, Amin Karbasi

Data Summarization at Scale: A Two-Stage Submodular Approach
Marko Mitrovic, Ehsan Kazemi, Morteza Zadimoghaddam, Amin Karbasi

Machine Theory of Mind
Neil Rabinowitz, Frank Perbet, Francis Song, Chiyuan Zhang, S. M. Ali Eslami, Matthew Botvinick

Learning to Optimize Combinatorial Functions
Nir Rosenfeld, Eric Balkanski, Amir Globerson, Yaron Singer

Proportional Allocation: Simple, Distributed, and Diverse Matching with High Entropy
Shipra Agarwal, Morteza Zadimoghaddam, Vahab Mirrokni

Path Consistency Learning in Tsallis Entropy Regularized MDPs
Yinlam Chow, Ofir Nachum, Mohammad Ghavamzadeh

Efficient Neural Architecture Search via Parameters Sharing
Hieu Pham, Melody Guan, Barret Zoph, Quoc Le, Jeff Dean

Adafactor: Adaptive Learning Rates with Sublinear Memory Cost
Noam Shazeer, Mitchell Stern

Learning Memory Access Patterns
Milad Hashemi, Kevin Swersky, Jamie Smith, Grant Ayers, Heiner Litz, Jichuan Chang, Christos Kozyrakis, Parthasarathy Ranganathan

SBEED: Convergent Reinforcement Learning with Nonlinear Function Approximation
Bo Dai, Albert Shaw, Lihong Li, Lin Xiao, Niao He, Zhen Liu, Jianshu Chen, Le Song

Scalable Bilinear Pi Learning Using State and Action Features
Yichen Chen, Lihong Li, Mengdi Wang

Distributed Asynchronous Optimization with Unbounded Delays: How Slow Can You Go?
Zhengyuan Zhou, Panayotis Mertikopoulos, Nicholas Bambos, Peter Glynn, Yinyu Ye, Li-Jia Li, Li Fei-Fei

Shampoo: Preconditioned Stochastic Tensor Optimization
Vineet Gupta, Tomer Koren, Yoram Singer

Parallel and Streaming Algorithms for K-Core Decomposition
Hossein Esfandiari, Silvio Lattanzi, Vahab Mirrokni

Can Deep Reinforcement Learning Solve Erdos-Selfridge-Spencer Games?
Maithra Raghu, Alexander Irpan, Jacob Andreas, Bobby Kleinberg, Quoc Le, Jon Kleinberg

Is Generator Conditioning Causally Related to GAN Performance?
Augustus Odena, Jacob Buckman, Catherine Olsson, Tom Brown, Christopher Olah, Colin Raffel, Ian Goodfellow

The Mirage of Action-Dependent Baselines in Reinforcement Learning
George Tucker, Surya Bhupatiraju, Shixiang Gu, Richard E Turner, Zoubin Ghahramani, Sergey Levine

MentorNet: Learning Data-Driven Curriculum for Very Deep Neural Networks on Corrupted Labels
Lu Jiang, Zhengyuan Zhou, Thomas Leung, Li-Jia Li, Li Fei-Fei

Loss Decomposition for Fast Learning in Large Output Spaces
En-Hsu Yen, Satyen Kale, Felix Xinnan Yu, Daniel Holtmann-Rice, Sanjiv Kumar, Pradeep Ravikumar

A Hierarchical Latent Vector Model for Learning Long-Term Structure in Music
Adam Roberts, Jesse Engel, Colin Raffel, Curtis Hawthorne, Douglas Eck

Smoothed Action Value Functions for Learning Gaussian Policies
Ofir Nachum, Mohammad Norouzi, George Tucker, Dale Schuurmans

Fast Decoding in Sequence Models Using Discrete Latent Variables
Lukasz Kaiser, Samy Bengio, Aurko Roy, Ashish Vaswani, Niki Parmar, Jakob Uszkoreit, Noam Shazeer

Accelerating Greedy Coordinate Descent Methods
Haihao Lu, Robert Freund, Vahab Mirrokni

Approximate Leave-One-Out for Fast Parameter Tuning in High Dimensions
Shuaiwen Wang, Wenda Zhou, Haihao Lu, Arian Maleki, Vahab Mirrokni

Image Transformer
Niki Parmar, Ashish Vaswani, Jakob Uszkoreit, Lukasz Kaiser, Noam Shazeer, Alexander Ku, Dustin Tran

Towards End-to-End Prosody Transfer for Expressive Speech Synthesis with Tacotron
RJ Skerry-Ryan, Eric Battenberg, Ying Xiao, Yuxuan Wang, Daisy Stanton, Joel Shor, Ron Weiss, Robert Clark, Rif Saurous

Dynamical Isometry and a Mean Field Theory of RNNs: Gating Enables Signal Propagation in Recurrent Neural Networks
Minmin Chen, Jeffrey Pennington, Samuel Schoenholz

Style Tokens: Unsupervised Style Modeling, Control and Transfer in End-to-End Speech Synthesis
Yuxuan Wang, Daisy Stanton, Yu Zhang, RJ Skerry-Ryan, Eric Battenberg, Joel Shor, Ying Xiao, Ye Jia, Fei Ren, Rif Saurous

Constrained Interacting Submodular Groupings
Andrew Cotter, Mahdi Milani Fard, Seungil You, Maya Gupta, Jeff Bilmes

Reinforcing Adversarial Robustness using Model Confidence Induced by Adversarial Training
Xi Wu, Uyeong Jang, Jiefeng Chen, Lingjiao Chen, Somesh Jha

Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV)
Been Kim, Martin Wattenberg, Justin Gilmer, Carrie Cai, James Wexler, Fernanda Viégas, Rory Sayres

Online Learning with Abstention
Corinna Cortes, Giulia DeSalvo, Claudio Gentile, Mehryar Mohri, Scott Yang

Online Linear Quadratic Control
Alon Cohen, Avinatan Hasidim, Tomer Koren, Nevena Lazic, Yishay Mansour, Kunal Talwar

Competitive Caching with Machine Learned Advice
Thodoris Lykouris, Sergei Vassilvitskii

Efficient Neural Audio Synthesis
Nal Kalchbrenner, Erich Elsen, Karen Simonyan, Seb Noury, Norman Casagrande, Edward Lockhart, Florian Stimberg, Aäron van den Oord, Sander Dieleman, Koray Kavukcuoglu

Gradient Descent with Identity Initialization Efficiently Learns Positive Definite Linear Transformations by Deep Residual Networks
Peter Bartlett, Dave Helmbold, Phil Long

Understanding and Simplifying One-Shot Architecture Search
Gabriel Bender, Pieter-Jan Kindermans, Barret Zoph, Vijay Vasudevan, Quoc Le

Approximation Algorithms for Cascading Prediction Models
Matthew Streeter

Learning Longer-term Dependencies in RNNs with Auxiliary Losses
Trieu Trinh, Andrew Dai, Thang Luong, Quoc Le

Self-Imitation Learning
Junhyuk Oh, Yijie Guo, Satinder Singh, Honglak Lee

Adaptive Sampled Softmax with Kernel Based Sampling
Guy Blanc, Steffen Rendle

Workshops
2018 Workshop on Human Interpretability in Machine Learning (WHI)
Organizers: Been Kim, Kush Varshney, Adrian Weller
Invited Speakers include: Fernanda Viégas, Martin Wattenberg

Exploration in Reinforcement Learning
Organizers: Ben Eysenbach, Surya Bhupatiraju, Shane Gu, Junhyuk Oh, Vincent Vanhoucke, Oriol Vinyals, Doina Precup

Theoretical Foundations and Applications of Deep Generative Models
Invited speakers include: Honglak Lee

Source: Google AI Blog


Google at CVPR 2018

Posted by Christian Howard, Editor-in-Chief, Google AI Communications

This week, Salt Lake City hosts the 2018 Conference on Computer Vision and Pattern Recognition (CVPR 2018), the premier annual computer vision event comprising the main conference and several co-located workshops and tutorials. As a leader in computer vision research and a Diamond Sponsor, Google will have a strong presence at CVPR 2018 — over 200 Googlers will be in attendance to present papers and invited talks at the conference, and to organize and participate in multiple workshops.

If you are attending CVPR this year, please stop by our booth and chat with our researchers who are actively pursuing the next generation of intelligent systems that utilize the latest machine learning techniques applied to various areas of machine perception. Our researchers will also be available to talk about and demo several recent efforts, including the technology behind portrait mode on the Pixel 2 and Pixel 2 XL smartphones, the Open Images V4 dataset and much more.

You can learn more about our research being presented at CVPR 2018 in the list below (Googlers highlighted in blue).

Organization
Finance Chair: Ramin Zabih

Area Chairs include: Sameer Agarwal, Aseem Agrawala, Jon Barron, Abhinav Shrivastava, Carl Vondrick, Ming-Hsuan Yang

Orals/Spotlights
Unsupervised Discovery of Object Landmarks as Structural Representations
Yuting Zhang, Yijie Guo, Yixin Jin, Yijun Luo, Zhiyuan He, Honglak Lee

DoubleFusion: Real-time Capture of Human Performances with Inner Body Shapes from a Single Depth Sensor
Tao Yu, Zerong Zheng, Kaiwen Guo, Jianhui Zhao, Qionghai Dai, Hao Li, Gerard Pons-Moll, Yebin Liu

Neural Kinematic Networks for Unsupervised Motion Retargetting
Ruben Villegas, Jimei Yang, Duygu Ceylan, Honglak Lee

Burst Denoising with Kernel Prediction Networks
Ben Mildenhall, Jiawen Chen, Jonathan Barron, Robert Carroll, Dillon Sharlet, Ren Ng

Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference
Benoit Jacob, Skirmantas Kligys, Bo Chen, Matthew Tang, Menglong Zhu, Andrew Howard, Dmitry Kalenichenko, Hartwig Adam

AVA: A Video Dataset of Spatio-temporally Localized Atomic Visual Actions
Chunhui Gu, Chen Sun, David Ross, Carl Vondrick, Caroline Pantofaru, Yeqing Li, Sudheendra Vijayanarasimhan, George Toderici, Susanna Ricco, Rahul Sukthankar, Cordelia Schmid, Jitendra Malik

Focal Visual-Text Attention for Visual Question Answering
Junwei Liang, Lu Jiang, Liangliang Cao, Li-Jia Li, Alexander G. Hauptmann

Inferring Light Fields from Shadows
Manel Baradad, Vickie Ye, Adam Yedida, Fredo Durand, William Freeman, Gregory Wornell, Antonio Torralba

Modifying Non-Local Variations Across Multiple Views
Tal Tlusty, Tomer Michaeli, Tali Dekel, Lihi Zelnik-Manor

Iterative Visual Reasoning Beyond Convolutions
Xinlei Chen, Li-jia Li, Fei-Fei Li, Abhinav Gupta

Unsupervised Training for 3D Morphable Model Regression
Kyle Genova, Forrester Cole, Aaron Maschinot, Daniel Vlasic, Aaron Sarna, William Freeman

Learning Transferable Architectures for Scalable Image Recognition
Barret Zoph, Vijay Vasudevan, Jonathon Shlens, Quoc Le

The iNaturalist Species Classification and Detection Dataset
Grant van Horn, Oisin Mac Aodha, Yang Song, Yin Cui, Chen Sun, Alex Shepard, Hartwig Adam, Pietro Perona, Serge Belongie

Learning Intrinsic Image Decomposition from Watching the World
Zhengqi Li, Noah Snavely

Learning Intelligent Dialogs for Bounding Box Annotation
Ksenia Konyushkova, Jasper Uijlings, Christoph Lampert, Vittorio Ferrari

Posters
Revisiting Knowledge Transfer for Training Object Class Detectors
Jasper Uijlings, Stefan Popov, Vittorio Ferrari

Rethinking the Faster R-CNN Architecture for Temporal Action Localization
Yu-Wei Chao, Sudheendra Vijayanarasimhan, Bryan Seybold, David Ross, Jia Deng, Rahul Sukthankar

Hierarchical Novelty Detection for Visual Object Recognition
Kibok Lee, Kimin Lee, Kyle Min, Yuting Zhang, Jinwoo Shin, Honglak Lee

COCO-Stuff: Thing and Stuff Classes in Context
Holger Caesar, Jasper Uijlings, Vittorio Ferrari

Appearance-and-Relation Networks for Video Classification
Limin Wang, Wei Li, Wen Li, Luc Van Gool

MorphNet: Fast & Simple Resource-Constrained Structure Learning of Deep Networks
Ariel Gordon, Elad Eban, Bo Chen, Ofir Nachum, Tien-Ju Yang, Edward Choi

Deformable Shape Completion with Graph Convolutional Autoencoders
Or Litany, Alex Bronstein, Michael Bronstein, Ameesh Makadia

MegaDepth: Learning Single-View Depth Prediction from Internet Photos
Zhengqi Li, Noah Snavely

Unsupervised Discovery of Object Landmarks as Structural Representations
Yuting Zhang, Yijie Guo, Yixin Jin, Yijun Luo, Zhiyuan He, Honglak Lee

Burst Denoising with Kernel Prediction Networks
Ben Mildenhall, Jiawen Chen, Jonathan Barron, Robert Carroll, Dillon Sharlet, Ren Ng

Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference
Benoit Jacob, Skirmantas Kligys, Bo Chen, Matthew Tang, Menglong Zhu, Andrew Howard, Dmitry Kalenichenko, Hartwig Adam

Pix3D: Dataset and Methods for Single-Image 3D Shape Modeling
Xingyuan Sun, Jiajun Wu, Xiuming Zhang, Zhoutong Zhang, Tianfan Xue, Joshua Tenenbaum, William Freeman

Sparse, Smart Contours to Represent and Edit Images
Tali Dekel, Dilip Krishnan, Chuang Gan, Ce Liu, William Freeman

MaskLab: Instance Segmentation by Refining Object Detection with Semantic and Direction Features
Liang-Chieh Chen, Alexander Hermans, George Papandreou, Florian Schroff, Peng Wang, Hartwig Adam

Large Scale Fine-Grained Categorization and Domain-Specific Transfer Learning
Yin Cui, Yang Song, Chen Sun, Andrew Howard, Serge Belongie

Improved Lossy Image Compression with Priming and Spatially Adaptive Bit Rates for Recurrent Networks
Nick Johnston, Damien Vincent, David Minnen, Michele Covell, Saurabh Singh, Sung Jin Hwang, George Toderici, Troy Chinen, Joel Shor

MobileNetV2: Inverted Residuals and Linear Bottlenecks
Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, Liang-Chieh Chen

ScanComplete: Large-Scale Scene Completion and Semantic Segmentation for 3D Scans 
Angela Dai, Daniel Ritchie, Martin Bokeloh, Scott Reed, Juergen Sturm, Matthias Nießner

Sim2Real View Invariant Visual Servoing by Recurrent Control
Fereshteh Sadeghi, Alexander Toshev, Eric Jang, Sergey Levine

Alternating-Stereo VINS: Observability Analysis and Performance Evaluation
Mrinal Kanti Paul, Stergios Roumeliotis

Soccer on Your Tabletop
Konstantinos Rematas, Ira Kemelmacher, Brian Curless, Steve Seitz

Unsupervised Learning of Depth and Ego-Motion from Monocular Video Using 3D Geometric Constraints
Reza Mahjourian, Martin Wicke, Anelia Angelova

AVA: A Video Dataset of Spatio-temporally Localized Atomic Visual Actions
Chunhui Gu, Chen Sun, David Ross, Carl Vondrick, Caroline Pantofaru, Yeqing Li, Sudheendra Vijayanarasimhan, George Toderici, Susanna Ricco, Rahul Sukthankar, Cordelia Schmid, Jitendra Malik

Inferring Light Fields from Shadows
Manel Baradad, Vickie Ye, Adam Yedida, Fredo Durand, William Freeman, Gregory Wornell, Antonio Torralba

Modifying Non-Local Variations Across Multiple Views
Tal Tlusty, Tomer Michaeli, Tali Dekel, Lihi Zelnik-Manor

Aperture Supervision for Monocular Depth Estimation
Pratul Srinivasan, Rahul Garg, Neal Wadhwa, Ren Ng, Jonathan Barron

Instance Embedding Transfer to Unsupervised Video Object Segmentation
Siyang Li, Bryan Seybold, Alexey Vorobyov, Alireza Fathi, Qin Huang, C.-C. Jay Kuo

Frame-Recurrent Video Super-Resolution
Mehdi S. M. Sajjadi, Raviteja Vemulapalli, Matthew Brown

Weakly Supervised Action Localization by Sparse Temporal Pooling Network
Phuc Nguyen, Ting Liu, Gautam Prasad, Bohyung Han

Iterative Visual Reasoning Beyond Convolutions
Xinlei Chen, Li-jia Li, Fei-Fei Li, Abhinav Gupta

Learning and Using the Arrow of Time
Donglai Wei, Andrew Zisserman, William Freeman, Joseph Lim

HydraNets: Specialized Dynamic Architectures for Efficient Inference
Ravi Teja Mullapudi, Noam Shazeer, William Mark, Kayvon Fatahalian

Thoracic Disease Identification and Localization with Limited Supervision
Zhe Li, Chong Wang, Mei Han, Yuan Xue, Wei Wei, Li-jia Li, Fei-Fei Li

Inferring Semantic Layout for Hierarchical Text-to-Image Synthesis
Seunghoon Hong, Dingdong Yang, Jongwook Choi, Honglak Lee

Deep Semantic Face Deblurring
Ziyi Shen, Wei-Sheng Lai, Tingfa Xu, Jan Kautz, Ming-Hsuan Yang

Unsupervised Training for 3D Morphable Model Regression
Kyle Genova, Forrester Cole, Aaron Maschinot, Daniel Vlasic, Aaron Sarna, William Freeman

Learning Transferable Architectures for Scalable Image Recognition
Barret Zoph, Vijay Vasudevan, Jonathon Shlens, Quoc Le

Learning Intrinsic Image Decomposition from Watching the World
Zhengqi Li, Noah Snavely

PiCANet: Learning Pixel-wise Contextual Attention for Saliency Detection
Nian Liu, Junwei Han, Ming-Hsuan Yang

Tutorials
Computer Vision for Robotics and Driving
Anelia Angelova, Sanja Fidler

Unsupervised Visual Learning
Pierre Sermanet, Anelia Angelova

UltraFast 3D Sensing, Reconstruction and Understanding of People, Objects and Environments
Sean Fanello, Julien Valentin, Jonathan Taylor, Christoph Rhemann, Adarsh Kowdle, Jürgen Sturm, Christine Kaeser-Chen, Pavel Pidlypenskyi, Rohit Pandey, Andrea Tagliasacchi, Sameh Khamis, David Kim, Mingsong Dou, Kaiwen Guo, Danhang Tang, Shahram Izadi

Generative Adversarial Networks
Jun-Yan Zhu, Taesung Park, Mihaela Rosca, Phillip Isola, Ian Goodfellow

Source: Google AI Blog


Announcing an updated YouTube-8M, and the 2nd YouTube-8M Large-Scale Video Understanding Challenge and Workshop



Last year, we organized the first YouTube-8M Large-Scale Video Understanding Challenge with Kaggle, in which 742 teams consisting of 946 individuals from 60 countries used the YouTube-8M dataset (2017 edition) to develop classification algorithms that accurately assign video-level labels. The purpose of the competition was to accelerate improvements in large-scale video understanding, representation learning, noisy data modeling, transfer learning and domain adaptation approaches that can help improve the machine-learning models that classify video. In addition to the competition, we hosted an affiliated workshop at CVPR’17, inviting competition top performers and researchers to share their ideas on how to advance the state of the art in video understanding.

As a continuation of these efforts to accelerate video understanding, we are excited to announce another update to the YouTube-8M dataset, a new Kaggle video understanding challenge and an affiliated 2nd Workshop on YouTube-8M Large-Scale Video Understanding, to be held at the 2018 European Conference on Computer Vision (ECCV'18).

An Updated YouTube-8M Dataset (2018 Edition)
Our YouTube-8M (2018 edition) features a major improvement in the quality of annotations, obtained using a machine learning system that combines audio-visual content with title, description and other metadata to provide more accurate ground truth. The updated version contains 6.1 million URLs, labeled with a vocabulary of 3,862 visual entities, with each video annotated with one or more labels and an average of 3 labels per video. We have also updated the starter code, along with instructions for downloading the dataset and training TensorFlow video annotation models on it.
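For orientation, here is a minimal sketch, assuming TensorFlow 2 and the commonly documented video-level feature keys (“id”, “labels”, “mean_rgb”, “mean_audio”), of how the released TFRecord files might be read. The official starter code remains the authoritative reference, and the keys and feature sizes below should be checked against the dataset documentation.

# Minimal sketch (not the official starter code): reading video-level
# YouTube-8M examples with TensorFlow 2. Feature keys and sizes are
# assumptions; verify them against the dataset README.
import tensorflow as tf

FEATURES = {
    "id": tf.io.FixedLenFeature([], tf.string),
    "labels": tf.io.VarLenFeature(tf.int64),          # indices into the 3,862-entity vocabulary
    "mean_rgb": tf.io.FixedLenFeature([1024], tf.float32),
    "mean_audio": tf.io.FixedLenFeature([128], tf.float32),
}

def parse_example(serialized):
    parsed = tf.io.parse_single_example(serialized, FEATURES)
    features = tf.concat([parsed["mean_rgb"], parsed["mean_audio"]], axis=0)  # 1152-d video-level vector
    labels = tf.sparse.to_dense(parsed["labels"])
    return features, labels

dataset = (tf.data.TFRecordDataset(tf.io.gfile.glob("train*.tfrecord"))
           .map(parse_example)
           .padded_batch(32, padded_shapes=([1152], [None])))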

The 2nd YouTube-8M Video Understanding Challenge
The 2nd YouTube-8M Video Understanding Challenge invites participants to build audio-visual content classification models using YouTube-8M as training data, and then to label an unknown subset of test videos. Unlike last year, we strictly impose a hard limit on model size, encouraging participants to advance a single model within a tight budget rather than assembling as many models as possible. Each of the top 5 teams will be awarded $5,000 to support their travel to Munich to attend ECCV’18. For details, please visit the Kaggle competition page.

The 2nd Workshop on YouTube-8M Large-Scale Video Understanding
To be held at ECCV’18, the workshop will consist of invited talks by distinguished researchers, as well as presentations by top-performing challenge participants in order to facilitate the exchange of ideas. We encourage those who wish to attend to submit papers describing their research, experiments, or applications based on YouTube-8M dataset, including papers summarizing their participation in the challenge above. Please refer to the workshop page for more details.

It is our hope that this update to the dataset, along with the new challenge and workshop, will continue to advance the research in large-scale video understanding. We hope you will join us again!

Acknowledgements
This post reflects the work of many machine perception researchers including Sami Abu-El-Haija, Ke Chen, Nisarg Kothari, Joonseok Lee, Hanhan Li, Paul Natsev, Sobhan Naderi Parizi, Rahul Sukthankar, George Toderici, Balakrishnan Varadarajan, as well as Sohier Dane, Julia Elliott, Wendy Kan and Walter Reade from Kaggle. We are also grateful for the support and advice from our partners at YouTube.

Source: Google AI Blog