Monthly Archives: April 2018

Google at ICLR 2018



This week, Vancouver, Canada hosts the 6th International Conference on Learning Representations (ICLR 2018), a conference focused on how one can learn meaningful and useful representations of data for machine learning. ICLR includes conference and workshop tracks, with invited talks along with oral and poster presentations of some of the latest research on deep learning, metric learning, kernel learning, compositional models, non-linear structured prediction, and issues regarding non-convex optimization.

At the forefront of innovation in neural networks and deep learning, Google focuses on both theory and application, developing learning approaches to understand and generalize. As Platinum Sponsor of ICLR 2018, Google will have a strong presence with over 130 researchers attending, contributing to and learning from the broader academic research community by presenting papers and posters, in addition to participating on organizing committees and in workshops.

If you are attending ICLR 2018, we hope you'll stop by our booth and chat with our researchers about the projects and opportunities at Google aimed at solving interesting problems for billions of people. You can also learn more about our research being presented at ICLR 2018 in the list below (Googlers highlighted in blue).

Senior Program Chair:
Tara Sainath

Steering Committee includes:
Hugo Larochelle

Oral Contributions
Wasserstein Auto-Encoders
Ilya Tolstikhin, Olivier Bousquet, Sylvain Gelly, Bernhard Schölkopf

On the Convergence of Adam and Beyond (Best Paper Award)
Sashank J. Reddi, Satyen Kale, Sanjiv Kumar

Ask the Right Questions: Active Question Reformulation with Reinforcement Learning
Christian Buck, Jannis Bulian, Massimiliano Ciaramita, Wojciech Gajewski, Andrea Gesmundo, Neil Houlsby, Wei Wang

Beyond Word Importance: Contextual Decompositions to Extract Interactions from LSTMs
W. James Murdoch, Peter J. Liu, Bin Yu

Conference Posters
Boosting the Actor with Dual Critic
Bo Dai, Albert Shaw, Niao He, Lihong Li, Le Song

MaskGAN: Better Text Generation via Filling in the _______
William Fedus, Ian Goodfellow, Andrew M. Dai

Scalable Private Learning with PATE
Nicolas Papernot, Shuang Song, Ilya Mironov, Ananth Raghunathan, Kunal Talwar, Ulfar Erlingsson

Deep Gradient Compression: Reducing the Communication Bandwidth for Distributed Training
Yujun Lin, Song Han, Huizi Mao, Yu Wang, William J. Dally

Flipout: Efficient Pseudo-Independent Weight Perturbations on Mini-Batches
Yeming Wen, Paul Vicol, Jimmy Ba, Dustin Tran, Roger Grosse

Latent Constraints: Learning to Generate Conditionally from Unconditional Generative Models
Adam Roberts, Jesse Engel, Matt Hoffman

Multi-Mention Learning for Reading Comprehension with Neural Cascades
Swabha Swayamdipta, Ankur P. Parikh, Tom Kwiatkowski

QANet: Combining Local Convolution with Global Self-Attention for Reading Comprehension
Adams Wei Yu, David Dohan, Thang Luong, Rui Zhao, Kai Chen, Mohammad Norouzi, Quoc V. Le

Sensitivity and Generalization in Neural Networks: An Empirical Study
Roman Novak, Yasaman Bahri, Daniel A. Abolafia, Jeffrey Pennington, Jascha Sohl-Dickstein

Action-dependent Control Variates for Policy Optimization via Stein Identity
Hao Liu, Yihao Feng, Yi Mao, Dengyong Zhou, Jian Peng, Qiang Liu

An Efficient Framework for Learning Sentence Representations
Lajanugen Logeswaran, Honglak Lee

Fidelity-Weighted Learning
Mostafa Dehghani, Arash Mehrjou, Stephan Gouws, Jaap Kamps, Bernhard Schölkopf

Generating Wikipedia by Summarizing Long Sequences
Peter J. Liu, Mohammad Saleh, Etienne Pot, Ben Goodrich, Ryan Sepassi, Lukasz Kaiser, Noam Shazeer

Matrix Capsules with EM Routing
Geoffrey Hinton, Sara Sabour, Nicholas Frosst

Temporal Difference Models: Model-Free Deep RL for Model-Based Control
Sergey Levine, Shixiang Gu, Murtaza Dalal, Vitchyr Pong

Deep Neural Networks as Gaussian Processes
Jaehoon Lee, Yasaman Bahri, Roman Novak, Samuel L. Schoenholz, Jeffrey Pennington, Jascha Sohl-Dickstein

Many Paths to Equilibrium: GANs Do Not Need to Decrease a Divergence at Every Step
William Fedus, Mihaela Rosca, Balaji Lakshminarayanan, Andrew M. Dai, Shakir Mohamed, Ian Goodfellow

Initialization Matters: Orthogonal Predictive State Recurrent Neural Networks
Krzysztof Choromanski, Carlton Downey, Byron Boots

Learning Differentially Private Recurrent Language Models
H. Brendan McMahan, Daniel Ramage, Kunal Talwar, Li Zhang

Learning Latent Permutations with Gumbel-Sinkhorn Networks
Gonzalo Mena, David Belanger, Scott Linderman, Jasper Snoek

Leave no Trace: Learning to Reset for Safe and Autonomous Reinforcement Learning
Benjamin Eysenbach, Shixiang Gu, Julian Ibarz, Sergey Levine

Meta-Learning for Semi-Supervised Few-Shot Classification
Mengye Ren, Eleni Triantafillou, Sachin Ravi, Jake Snell, Kevin Swersky, Josh Tenenbaum, Hugo Larochelle, Richard Zemel

Thermometer Encoding: One Hot Way to Resist Adversarial Examples
Jacob Buckman, Aurko Roy, Colin Raffel, Ian Goodfellow

A Hierarchical Model for Device Placement
Azalia Mirhoseini, Anna Goldie, Hieu Pham, Benoit Steiner, Quoc V. Le, Jeff Dean

Monotonic Chunkwise Attention
Chung-Cheng Chiu, Colin Raffel

Training Confidence-calibrated Classifiers for Detecting Out-of-Distribution Samples
Kimin Lee, Honglak Lee, Kibok Lee, Jinwoo Shin

Trust-PCL: An Off-Policy Trust Region Method for Continuous Control
Ofir Nachum, Mohammad Norouzi, Kelvin Xu, Dale Schuurmans

Ensemble Adversarial Training: Attacks and Defenses
Florian Tramèr, Alexey Kurakin, Nicolas Papernot, Ian Goodfellow, Dan Boneh, Patrick McDaniel

Stochastic Variational Video Prediction
Mohammad Babaeizadeh, Chelsea Finn, Dumitru Erhan, Roy Campbell, Sergey Levine

Depthwise Separable Convolutions for Neural Machine Translation
Lukasz Kaiser, Aidan N. Gomez, Francois Chollet

Don’t Decay the Learning Rate, Increase the Batch Size
Samuel L. Smith, Pieter-Jan Kindermans, Chris Ying, Quoc V. Le

Generative Models of Visually Grounded Imagination
Ramakrishna Vedantam, Ian Fischer, Jonathan Huang, Kevin Murphy

Large Scale Distributed Neural Network Training through Online Distillation
Rohan Anil, Gabriel Pereyra, Alexandre Passos, Robert Ormandi, George E. Dahl, Geoffrey E. Hinton

Learning a Neural Response Metric for Retinal Prosthesis
Nishal P. Shah, Sasidhar Madugula, Alan Litke, Alexander Sher, EJ Chichilnisky, Yoram Singer, Jonathon Shlens

Neumann Optimizer: A Practical Optimization Algorithm for Deep Neural Networks
Shankar Krishnan, Ying Xiao, Rif A. Saurous

A Neural Representation of Sketch Drawings
David Ha, Douglas Eck

Deep Bayesian Bandits Showdown: An Empirical Comparison of Bayesian Deep Networks for Thompson Sampling
Carlos Riquelme, George Tucker, Jasper Snoek

Generalizing Hamiltonian Monte Carlo with Neural Networks
Daniel Levy, Matthew D. Hoffman, Jascha Sohl-Dickstein

Leveraging Grammar and Reinforcement Learning for Neural Program Synthesis
Rudy Bunel, Matthew Hausknecht, Jacob Devlin, Rishabh Singh, Pushmeet Kohli

On the Discrimination-Generalization Tradeoff in GANs
Pengchuan Zhang, Qiang Liu, Dengyong Zhou, Tao Xu, Xiaodong He

A Bayesian Perspective on Generalization and Stochastic Gradient Descent
Samuel L. Smith, Quoc V. Le

Learning how to Explain Neural Networks: PatternNet and PatternAttribution
Pieter-Jan Kindermans, Kristof T. Schütt, Maximilian Alber, Klaus-Robert Müller, Dumitru Erhan, Been Kim, Sven Dähne

Skip RNN: Learning to Skip State Updates in Recurrent Neural Networks
Víctor Campos, Brendan Jou, Xavier Giró-i-Nieto, Jordi Torres, Shih-Fu Chang

Towards Neural Phrase-based Machine Translation
Po-Sen Huang, Chong Wang, Sitao Huang, Dengyong Zhou, Li Deng

Unsupervised Cipher Cracking Using Discrete GANs
Aidan N. Gomez, Sicong Huang, Ivan Zhang, Bryan M. Li, Muhammad Osama, Lukasz Kaiser

Variational Image Compression With A Scale Hyperprior
Johannes Ballé, David Minnen, Saurabh Singh, Sung Jin Hwang, Nick Johnston

Workshop Posters
Local Explanation Methods for Deep Neural Networks Lack Sensitivity to Parameter Values
Julius Adebayo, Justin Gilmer, Ian Goodfellow, Been Kim

Stochastic Gradient Langevin Dynamics that Exploit Neural Network Structure
Zachary Nado, Jasper Snoek, Bowen Xu, Roger Grosse, David Duvenaud, James Martens

Towards Mixed-initiative generation of multi-channel sequential structure
Anna Huang, Sherol Chen, Mark J. Nelson, Douglas Eck

Can Deep Reinforcement Learning Solve Erdos-Selfridge-Spencer Games?
Maithra Raghu, Alex Irpan, Jacob Andreas, Robert Kleinberg, Quoc V. Le, Jon Kleinberg

GILBO: One Metric to Measure Them All
Alexander Alemi, Ian Fischer

HoME: a Household Multimodal Environment
Simon Brodeur, Ethan Perez, Ankesh Anand, Florian Golemo, Luca Celotti, Florian Strub, Jean Rouat, Hugo Larochelle, Aaron Courville

Learning to Learn without Labels
Luke Metz, Niru Maheswaranathan, Brian Cheung, Jascha Sohl-Dickstein

Learning via Social Awareness: Improving Sketch Representations with Facial Feedback
Natasha Jaques, Jesse Engel, David Ha, Fred Bertsch, Rosalind Picard, Douglas Eck

Negative Eigenvalues of the Hessian in Deep Neural Networks
Guillaume Alain, Nicolas Le Roux, Pierre-Antoine Manzagol

Realistic Evaluation of Semi-Supervised Learning Algorithms
Avital Oliver, Augustus Odena, Colin Raffel, Ekin Cubuk, Ian Goodfellow

Winner's Curse? On Pace, Progress, and Empirical Rigor
D. Sculley, Jasper Snoek, Alex Wiltschko, Ali Rahimi

Meta-Learning for Batch Mode Active Learning
Sachin Ravi, Hugo Larochelle

To Prune, or Not to Prune: Exploring the Efficacy of Pruning for Model Compression
Michael Zhu, Suyog Gupta

Adversarial Spheres
Justin Gilmer, Luke Metz, Fartash Faghri, Sam Schoenholz, Maithra Raghu, Martin Wattenberg, Ian Goodfellow

Clustering Meets Implicit Generative Models
Francesco Locatello, Damien Vincent, Ilya Tolstikhin, Gunnar Rätsch, Sylvain Gelly, Bernhard Schölkopf

Decoding Decoders: Finding Optimal Representation Spaces for Unsupervised Similarity Tasks
Vitalii Zhelezniak, Dan Busbridge, April Shen, Samuel L. Smith, Nils Y. Hammerla

Learning Longer-term Dependencies in RNNs with Auxiliary Losses
Trieu Trinh, Quoc Le, Andrew Dai, Thang Luong

Graph Partition Neural Networks for Semi-Supervised Classification
Alexander Gaunt, Danny Tarlow, Marc Brockschmidt, Raquel Urtasun, Renjie Liao, Richard Zemel

Searching for Activation Functions
Prajit Ramachandran, Barret Zoph, Quoc Le

Time-Dependent Representation for Neural Event Sequence Prediction
Yang Li, Nan Du, Samy Bengio

Faster Discovery of Neural Architectures by Searching for Paths in a Large Model
Hieu Pham, Melody Guan, Barret Zoph, Quoc V. Le, Jeff Dean

Intriguing Properties of Adversarial Examples
Ekin Dogus Cubuk, Barret Zoph, Sam Schoenholz, Quoc Le

PPP-Net: Platform-aware Progressive Search for Pareto-optimal Neural Architectures
Jin-Dong Dong, An-Chieh Cheng, Da-Cheng Juan, Wei Wei, Min Sun

The Mirage of Action-Dependent Baselines in Reinforcement Learning
George Tucker, Surya Bhupatiraju, Shixiang Gu, Richard E. Turner, Zoubin Ghahramani, Sergey Levine

Learning to Organize Knowledge with N-Gram Machines
Fan Yang, Jiazhong Nie, William W. Cohen, Ni Lao

Online variance-reducing optimization
Nicolas Le Roux, Reza Babanezhad, Pierre-Antoine Manzagol

Source: Google AI Blog


Video is everywhere – helping brands find their audience in the era of convergence

With cord-cutting on the rise, brands have been looking for new ways to connect with an important part of their audience that is harder than ever to reach. According to fresh Nielsen data, more than half of 18-to-49-year-olds in the US are either light viewers of TV or do not subscribe to TV; but over 90 percent of these people watch YouTube.1 Today we’re introducing a new set of opportunities on YouTube to help brands reach these viewers across content and devices.


YouTube audiences on TV screens

We’re amidst the second major shift in how people watch video on YouTube. In the past few years, we witnessed mobile viewership exceed desktop, marking the first major shift in how people interacted with YouTube. Now, in 2018, viewers are returning to that original, purpose-built device for video viewing – the television set.

At YouTube we’ve brought people back to the big screen by building a rich YouTube experience for set-top boxes, gaming consoles, streaming devices and smart TVs of all stripes. And now TV screens are our fastest growing screen, counting over 150 million hours of watch time per day.2

We heard from advertisers that they want in, so we have been working to make it easy for you to find your most engaged, valuable audience while they are watching YouTube on a TV set, with the new TV screens device type. In the coming months, we’ll add TV screens – joining computers, mobile phones and tablets – to AdWords and DoubleClick Bid Manager, so advertisers globally can tailor their campaigns for this environment – for example, by using a different creative.

We’ve already seen that people react positively to ads on the TV screen – based on Ipsos Lab Experiments, YouTube ads shown on TV drove a significant lift in ad recall and purchase intent, with average lifts of 47 percent and 35 percent, respectively.3


YouTube audiences on every screen

And for brands who want help reaching cord cutters, we now offer a new segment in AdWords and DoubleClick Bid Manager called “light TV viewers.” Advertisers will be able to reach people who consume most of their television and video content online and might be harder to reach via traditional media. This audience is reachable on YouTube across computers, mobile, tablets, and TV screens.


Welcoming YouTube TV to Google Preferred

Last year we launched YouTube TV, a new way to enjoy cable-free live TV. Now a year in, YouTube TV continues to gain momentum – we’ve recently added new networks to our service, expanded availability to over 85 percent of US households in nearly 100 TV markets, and announced partnerships with major sports leagues. For the first time, this upcoming broadcast season advertisers will be able to access full length TV inventory in Google Preferred.

Content from some cable networks in the US will be part of Google Preferred lineups so that brands can continue to engage their audience across all platforms. This means advertisers will be able to get both the most popular YouTube content and traditional TV content in a single campaign – plus, we’ll dynamically insert these ads, giving advertisers the ability to show relevant ads to the right audiences, rather than just showing everyone the same ad as they might on traditional TV.

As marketers continue to break the silos and think of holistic media plans, we’re excited to enable the opportunity. Because while TV screen viewing is big and growing fast, video is everywhere and the key is connecting with viewers wherever they watch.


1. Google commissioned Nielsen custom fusion study. Desktop, mobile and TV fusion. TV measurement of television distribution sources and total minutes viewed. Reach among persons 18-49. Light TV viewers represent the bottom tercile of total TV watchers based on total minutes viewed. October 2017.

2. YouTube Internal Data, Global, Accurate as of Jan 2018. Based on seven day average of watch time for TV screen devices, which include smart TVs, Roku/Apple TV and game consoles.

3. Google/Ipsos Lab Experiment, US, March 2018 (32 ads, 800 US residents 18-64 y/o).

Source: Google Ads


Dev Channel Update for Desktop

The dev channel has been updated to 68.0.3409.2 for Windows, Mac and Linux.


A partial list of changes is available in the log. Interested in switching release channels? Find out how. If you find a new issue, please let us know by filing a bug. The community help forum is also a great place to reach out for help or learn about common issues.

Abdul Syed
Google Chrome

Registration for the Associate Cloud Engineer beta exam is now open



Mastering a discipline depends on learning the fundamentals of a craft before you can aspire to the next level.

To this point, we’ve developed a new certification exam, Associate Cloud Engineer, that identifies individuals who have the foundational skills necessary to use Google Cloud Console to deploy applications, monitor operations, and manage enterprise solutions. We are excited to announce that registration for the Associate Cloud Engineer beta exam is now open.

As businesses move in growing numbers to cloud-based environments, the need to hire or fill existing skills gaps with individuals proficient in cloud technology has skyrocketed. Unfortunately, there is a clear lack of people with the requisite skills to work with cloud technologies.

If you’re an aspiring cloud architect or data engineer who is technically proficient in the Google Cloud environment but don’t have years of experience designing cloud solutions, this certification is for you. The Associate Cloud Engineer is an entry point to our professional-level cloud certifications, Cloud Architect and Data Engineer, which recognize individuals who can use Google Cloud Platform (GCP) to solve more complex and strategic business problems.

Demonstrate that you have mastered the fundamental cloud skills as an Associate Cloud Engineer so you can take your next steps to become a Google Cloud Certified professional.

  • The beta exam is now open for registration. The testing period runs May 9-30, 2018
  • To earn this certification, you must successfully pass our Associate Cloud Engineer exam
  • Save 40% on the cost of certification by participating in this beta
  • The length of the exam is four hours 

When you become an Associate Cloud Engineer, you show potential employers that you have the essential skills to work on GCP. So, what are you waiting for?

Register to take the beta exam today.

We updated our job posting guidelines

Last year, we launched job search on Google to connect more people with jobs. When you provide Job Posting structured data, it helps drive more relevant traffic to your page by connecting job seekers with your content. To ensure that job seekers are getting the best possible experience, it's important to follow our Job Posting guidelines.

We've recently made some changes to our Job Posting guidelines to help improve the job seeker experience.

  • Remove expired jobs
  • Place structured data on the job's detail page
  • Make sure all job details are present in the job description

Remove expired jobs

When job seekers put in effort to find a job and apply, it can be very discouraging to discover that the job that they wanted is no longer available. Sometimes, job seekers only discover that the job posting is expired after deciding to apply for the job. Removing expired jobs from your site may drive more traffic because job seekers are more confident when jobs that they visit on your site are still open for application. For more information on how to remove a job posting, see Remove a job posting.


Place structured data on the job's detail page

Job seekers find it confusing when they land on a list of jobs instead of the specific job's detail page. To fix this, put structured data on the most detailed leaf page possible. Don't add structured data to pages intended to present a list of jobs (for example, search result pages); add it only to the most specific page describing a single job with its relevant details.

Make sure all job details are present in the job description

We've also noticed that some sites include information in the JobPosting structured data that is not present anywhere in the job posting. Job seekers are confused when the job details they see in Google Search don't match the job description page. Make sure that the information in the JobPosting structured data always matches what's on the job posting page. Here are some examples:

  • If you add salary information to the structured data, then also add it to the job posting. Both salary figures should match.
  • The location in the structured data should match the location in the job posting.
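To make the consistency rules above concrete, here is a sketch of what JobPosting structured data for a single job's detail page might look like, built as a Python dict and emitted as JSON-LD. All company, location, and salary values are purely hypothetical; the property names (`title`, `datePosted`, `validThrough`, `baseSalary`, `jobLocation`) are standard schema.org JobPosting properties.

```python
import json

# Hypothetical JobPosting structured data for one job's detail page.
# Every value shown here must also appear in the visible job posting.
job_posting = {
    "@context": "https://schema.org/",
    "@type": "JobPosting",
    "title": "Software Engineer",
    "description": "<p>Develop and maintain web applications.</p>",
    "datePosted": "2018-04-20",
    # Once validThrough passes, the job is expired and should be removed.
    "validThrough": "2018-06-20T00:00",
    "hiringOrganization": {
        "@type": "Organization",
        "name": "Example Corp",
    },
    # Location here must match the location shown on the page.
    "jobLocation": {
        "@type": "Place",
        "address": {
            "@type": "PostalAddress",
            "addressLocality": "Vancouver",
            "addressRegion": "BC",
            "addressCountry": "CA",
        },
    },
    # If salary is included here, the same figure belongs in the posting.
    "baseSalary": {
        "@type": "MonetaryAmount",
        "currency": "CAD",
        "value": {"@type": "QuantitativeValue", "value": 75000, "unitText": "YEAR"},
    },
}

# Emit the JSON-LD script block to embed in the job detail page.
jsonld = json.dumps(job_posting, indent=2)
print('<script type="application/ld+json">')
print(jsonld)
print('</script>')
```

The point of keeping this data on the detail page (not a search results page) is that each block describes exactly one job, so Google Search can surface it directly to a matching job seeker.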

Providing structured data content that is consistent with the content of the job posting pages not only helps job seekers find the exact job that they were looking for, but may also drive more relevant traffic to your job postings and therefore increase the chances of finding the right candidates for your jobs.

If your site violates the Job Posting guidelines (including the guidelines in this blog post), we may take manual action against your site and it may not be eligible for display in the jobs experience on Google Search. You can submit a reconsideration request to let us know that you have fixed the problem(s) identified in the manual action notification. If your request is approved, the manual action will be removed from your site or page.

For more information, visit our Job Posting developer documentation and our JobPosting FAQ.

Stable Channel Update for Chrome OS

The Stable channel has been updated to 66.0.3359.137 (Platform version: 10452.74.0) for most Chrome OS devices. This build contains a number of bug fixes and security updates. 
Systems will be receiving updates over the next several days.

New Features
  • New keyboard shortcut to move windows between displays
  • New Chrome OS Keyboard Shortcut Helper
  • External display settings enhancements
  • Video recording in the Chrome Camera App
  • Overview window animation improvements
  • Magic Tether support extended to more devices
  • Picture-in-picture magnification
  • Ability to zoom up to 20x with the Chrome OS magnifier
  • Ability to adjust the full-screen magnifier zoom level with a pinch gesture
  • Screen sharing support for Android apps installed via the Google Play Store
  • Google Play added to the first-login opt-in window
  • Google Play GDPR support
  • Google Play maximized-window support improvements
  • User credentials automatically passed from Chrome OS login to network 802.1x authentication
  • Native printing support extended to Google Play applications
  • Sync notice added during initial sign-in

Security Fixes
Note: Access to bug details and links may be kept restricted until a majority of users are updated with a fix. We will also retain restrictions if the bug exists in a third party library that other projects similarly depend on, but haven’t yet fixed.
  • Intel devices on 3.8 kernels received the KPTI mitigation against Meltdown with Chrome OS 66. All Chrome OS devices are now protected against Meltdown.
If you find new issues, please let us know by visiting our forum or filing a bug. Interested in switching channels? Find out how. You can submit feedback using ‘Report an issue...’ in the Chrome menu (3 vertical dots in the upper right corner of the browser).

Josafat Garcia
Google Chrome

The High Five: two newborn babies and a “Cursed Child”

This week, babies were born and lettuce was thrown out. Here’s a look at some top Search trends from the week, with data from the Google News Lab.

What’s in a name?

A lot, if you’re a Royal. After Prince William and Kate Middleton welcomed a son into the world this week, searches for “name of new royal baby” went up more than 3,000 percent. The newly-named Prince Louis’s siblings were also of interest—searches for “Prince George full name” went up 1,000 percent, and “Who is Princess Charlotte named after?” was also a trending question.

Trying to conjure up some tickets

Muggles and wizards alike are wondering “How much are tickets to Harry Potter and the Cursed Child?” (we’re guessing it’ll be a lot of Galleons). They may be ditching Orlando for New York—the play was more popular in Search than the Wizarding World of Harry Potter, but not quite as popular as “Summer: The Donna Summer Musical,” which was the most searched Broadway play this week.

More baby news

An image of a shirtless Dwayne “The Rock” Johnson holding his newborn baby girl went viral this week, and people oooh-ed and aaah-ed all over Search (interest in “the rock new baby” went up 2,750 percent). This was after he got a heartfelt invitation to prom from a superfan in Minnesota, which caused searches for “the rock prom” to go up 1,850 percent. Quite the week!

Lettuce warn you

Search questions are a mixed bag, but here’s one that stood out this week: “Is it safe to eat romaine lettuce yet?” If you’d like some side trends with your salad, there’s been a 1,000 percent increase in searches for “ecoli virus,” and the most searches for “e. coli” are coming from Alaska, Montana and Idaho.

An ending to marvel at

“The Avengers: Infinity War” hit the big screen this week, and there’s one thing on everyone’s mind: “Who dies in Infinity War?” Searches for “infinity war spoilers who dies” went up nearly 1,000 percent this week. We won’t spoil anything, but according to one top Search question—”How many post-credit scenes are there in Infinity War?”—you should stick around until the very end.

Source: Search

It’s story time, and the Google Assistant has a tale for you

Your family is probably tired of hearing the same old stories about your high school glory days, or about that one time you rolled five Yahtzees in a row at game night. So the next time you’re in need of a story (probably today—it’s National Tell a Story Day!), the Google Assistant has some good ones to share. Gather ‘round:

  • For the classics like “Little Red Riding Hood,” “Cinderella” and “Sleeping Beauty,” say “Hey Google, tell me a story.” 
  • To hear short stories from NPR told by real people across the United States, say “Hey Google, tell me a StoryCorps story.”
  • When you say “Hey Google, tell me a story about motherhood,” you’ll hear beautiful, two-minute interviews between mothers and their children. Keep that one in mind in a few weeks for Mother’s Day. 
  • If you want a fun, interactive adventure with the kids, get Mickey Mouse involved with “Hey Google, talk to Mickey Mouse Story Time.” Or Lightning McQueen can put story time into high gear–just say, “Hey Google, talk to Cars Adventure.” 
  • For times when you want five lines and some rhymes, say “Hey Google, give me a limerick.”
When story time’s done, grab the Yahtzee board and try to make it six rolls in a row.

The end.