Research at Google and ICLR 2016



This week, San Juan, Puerto Rico hosts the 4th International Conference on Learning Representations (ICLR 2016), a conference focused on how one can learn meaningful and useful representations of data for Machine Learning. ICLR includes conference and workshop tracks, with invited talks along with oral and poster presentations of some of the latest research on deep learning, metric learning, kernel learning, compositional models, non-linear structured prediction, and issues regarding non-convex optimization.

At the forefront of cutting-edge research in Neural Networks and Deep Learning, Google focuses on both theory and application, developing learning approaches to understand and generalize. As Platinum Sponsor of ICLR 2016, Google will have a strong presence with over 40 researchers attending (many from the Google Brain team and Google DeepMind), contributing to and learning from the broader academic research community by presenting papers and posters, in addition to participating on organizing committees and in workshops.

If you are attending ICLR 2016, we hope you’ll stop by our booth and chat with our researchers about the projects and opportunities at Google that go into solving interesting problems for billions of people. You can also learn more about our research being presented at ICLR 2016 in the list below (Googlers highlighted in blue).

Organizing Committee

Program Chairs
Samy Bengio, Brian Kingsbury

Area Chairs include:
John Platt, Tara Sainath

Oral Sessions

Neural Programmer-Interpreters (Best Paper Award Recipient)
Scott Reed, Nando de Freitas

Net2Net: Accelerating Learning via Knowledge Transfer
Tianqi Chen, Ian Goodfellow, Jonathon Shlens

Conference Track Posters

Prioritized Experience Replay
Tom Schaul, John Quan, Ioannis Antonoglou, David Silver

Reasoning about Entailment with Neural Attention
Tim Rocktäschel, Edward Grefenstette, Karl Moritz Hermann, Tomáš Kočiský, Phil Blunsom

Neural Programmer: Inducing Latent Programs With Gradient Descent
Arvind Neelakantan, Quoc Le, Ilya Sutskever

MuProp: Unbiased Backpropagation For Stochastic Neural Networks
Shixiang Gu, Sergey Levine, Ilya Sutskever, Andriy Mnih

Multi-Task Sequence to Sequence Learning
Minh-Thang Luong, Quoc Le, Ilya Sutskever, Oriol Vinyals, Lukasz Kaiser

A Test of Relative Similarity for Model Selection in Generative Models
Eugene Belilovsky, Wacha Bounliphone, Matthew Blaschko, Ioannis Antonoglou, Arthur Gretton

Continuous control with deep reinforcement learning
Timothy Lillicrap, Jonathan Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, Daan Wierstra

Policy Distillation
Andrei Rusu, Sergio Gomez, Caglar Gulcehre, Guillaume Desjardins, James Kirkpatrick, Razvan Pascanu, Volodymyr Mnih, Koray Kavukcuoglu, Raia Hadsell

Neural Random-Access Machines
Karol Kurach, Marcin Andrychowicz, Ilya Sutskever

Variable Rate Image Compression with Recurrent Neural Networks
George Toderici, Sean O'Malley, Damien Vincent, Sung Jin Hwang, Michele Covell, Shumeet Baluja, Rahul Sukthankar, David Minnen

Order Matters: Sequence to Sequence for Sets
Oriol Vinyals, Samy Bengio, Manjunath Kudlur

Grid Long Short-Term Memory
Nal Kalchbrenner, Alex Graves, Ivo Danihelka

Neural GPUs Learn Algorithms
Lukasz Kaiser, Ilya Sutskever

ACDC: A Structured Efficient Linear Layer
Marcin Moczulski, Misha Denil, Jeremy Appleyard, Nando de Freitas

Workshop Track Posters

Revisiting Distributed Synchronous SGD
Jianmin Chen, Rajat Monga, Samy Bengio, Rafal Jozefowicz

Black Box Variational Inference for State Space Models
Evan Archer, Il Memming Park, Lars Buesing, John Cunningham, Liam Paninski

A Minimalistic Approach to Sum-Product Network Learning for Real Applications
Viktoriya Krakovna, Moshe Looks

Efficient Inference in Occlusion-Aware Generative Models of Images
Jonathan Huang, Kevin Murphy

Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning
Christian Szegedy, Sergey Ioffe, Vincent Vanhoucke

Deep Autoresolution Networks
Gabriel Pereyra, Christian Szegedy

Learning visual groups from co-occurrences in space and time
Phillip Isola, Daniel Zoran, Dilip Krishnan, Edward H. Adelson

Adding Gradient Noise Improves Learning For Very Deep Networks
Arvind Neelakantan, Luke Vilnis, Quoc V. Le, Ilya Sutskever, Lukasz Kaiser, Karol Kurach, James Martens

Adversarial Autoencoders
Alireza Makhzani, Jonathon Shlens, Navdeep Jaitly, Ian Goodfellow

Generating Sentences from a Continuous Space
Samuel R. Bowman, Luke Vilnis, Oriol Vinyals, Andrew M. Dai, Rafal Jozefowicz, Samy Bengio

Beta Channel Update for Chrome OS

The Beta channel has been updated to 50.0.2661.91 (Platform version: 7978.66.0) for all Chrome OS devices. This build contains a number of bug fixes, security updates, and feature enhancements. A list of changes can be found here.

If you find new issues, please let us know by visiting our forum or filing a bug. Interested in switching channels? Find out how. You can submit feedback using ‘Report an issue...’ in the Chrome menu (3 horizontal bars in the upper right corner of the browser).

Ketaki Deshpande
Google Chrome

Handling Android ad events in Unity

Registering for banner and interstitial ad events is a handy way for Unity developers using the Google Mobile Ads Unity Plugin to track ad lifecycle events, such as when an ad is loaded or when an ad click causes the app to be backgrounded. For Unity developers deploying to the Android platform, though, it's important to be aware that ad event handler methods are not invoked on the main thread. As a consequence, Unity methods that must be called on the main thread cannot be executed within ad event handler methods.

Consider the following example:


// Inside a MonoBehaviour script:
AudioSource audio = GetComponent<AudioSource>();
...

public void HandleInterstitialClosed(object sender, EventArgs args)
{
    // This handler is invoked off the main thread, so touching the
    // AudioSource here throws an exception.
    audio.mute = false;
}

The code above, which modifies the volume of an audio source from within an ad event handler method, results in the following error:

ArgumentException: set_volume can only be called from the main thread

To get around this, we recommend setting a flag when an event happens, and polling for a state change within the Update() method of your Unity script.

For actions required to be performed on the main thread after showing an ad, set a flag on the OnAdClosed ad event. The update method can poll the value of this flag and perform actions as necessary. The code below illustrates how to implement this approach.


private bool interstitialClosed;

void Start()
{
    InterstitialAd interstitial = new InterstitialAd("YOUR_AD_UNIT_ID");
    interstitial.OnAdClosed += HandleInterstitialClosed;
    interstitialClosed = false;
    ...
}

void Update()
{
    // Update() always runs on the main thread, so it's safe to act on
    // the flag here.
    if (interstitialClosed)
    {
        // Reset the flag so the action runs only once per closed ad.
        interstitialClosed = false;
        // Perform actions here.
        audio.mute = false;
    }
}

public void HandleInterstitialClosed(object sender, EventArgs args)
{
    // Called off the main thread; just record that the ad closed.
    interstitialClosed = true;
}

If you have any questions about Unity integration, you can reach us on our forum. You can also find our quick-start guide here. Remember that you can also find us on Google+, where we have updates on all of our Google Ads developer products.

Introducing our AdMob Student App Challenge judges

Now that we’re two months away from the June 28, 2016 deadline to submit your app and business report for the AdMob Student App Challenge, we’d like to introduce our judges via a three-part blog post. They’re a panel of six industry experts who will judge the final round and decide the Grand Prize winner.

The Grand Prize winner will score a week-long trip to San Francisco, including a visit to Google’s headquarters in Mountain View, and have their app featured on the Google Play store. To help you better prepare, we’d like to share some of the insights we gained from getting to know the judges, as well as what, in their view, makes a great app!

Chris Akhavan
President of Publishing @Glu Mobile, a leading global developer in gaming

What is your background and experience working with apps?

I'm currently the President of Publishing at Glu Mobile (we make mobile games like Kim Kardashian: Hollywood, Racing Rivals, and Cooking Dash), and prior to that I was the SVP of Partnerships at Tapjoy (a mobile ad network I joined in the early days and helped grow from 10 people to 300+ and $120MM+ in annual revenue).

What is the most important thing you look for when reviewing an app?

A simple and clean user experience. Great mobile apps immediately delight users within the first 30 seconds and deliver value with ease.

What tip(s) would you give to a new app developer building their first app?

The biggest mistake I see new developers make is forgetting that they are designing for a very small device. I often see new developers using tiny fonts that are hard to read on a phone, and placing too many intricate buttons in the UI. Look at apps like Instagram and Clash Royale for inspiration on clear and simple mobile design.

Anything else you want student developers to know?

I'm excited to check out your apps!

Purnima Kochikar
Director of Business Development for Google Play, Google

What is your background and experience working with apps?

I lead the team that works with all the apps and games developers on Android/Google Play globally. I was also a software engineer in my past life and wrote apps - but that was a LONG time ago.

What is the most important thing you look for when reviewing an app?

  • Utility (does it have a clear purpose - and that could be fun)
  • Beauty (is it well designed?)
  • Creativity (is it an innovative solution for the problem being tackled?)

What are some golden rules of good app design? 

The rule I like best is the 1-minute value: the user should get the full sense of your app within a minute. Uber is a great example; within a minute you get all the information you need about finding a ride. To be able to do that, Uber has reduced the input required from the user by using the sensors on the device, such as GPS.

Anything else you want student developers to know?

Follow your heart: build something to solve a problem or create a fun experience that truly matters to you. The best apps are those that come from a deep-rooted interest in the topic.

Well folks, there you have it! We hope that these tips and advice can help guide you as you continue to build your app! Stay tuned for two more posts about our judges in the coming weeks.

If you’d like to learn more about the judging process, please visit our AdMob Challenge judges page. Lastly, remember to continue to follow us on AdMob G+ and Twitter, and keep an eye on #AdMobSAC16 too, for regular updates on the challenge.

Posted by Andres Calzada, AdMob Student App Challenge Team

Source: Inside AdMob


While doves were crying: this week on Google Cloud Platform



It was a sad week for music lovers with the news of Prince’s passing, but we take comfort in the fact that things are rocking and rolling on GCP.

The cloud blogosphere was dominated by things that set GCP apart. Take Google App Engine. First released in 2008, it’s taken a while for the world to fully grok its value, even internally. In “Why Google App Engine rocks: A Google engineer’s take,” Google Cloud director of technical support Luke Stone gives a full recounting of his team’s experience with App Engine and other managed services like Google BigQuery. He describes how his team was blown away by their productivity gains, and urges all developers to try out the platform-as-a-service route.

Digital gaming store Humble Bundle corroborates this sentiment. In the weekly GCP Podcast, Humble Bundle engineering manager Andy Oxfeld describes how the video game retailer relies on App Engine to scale its website up and down to meet fluctuating demand for its limited-time games. He also describes how the team uses Task Queues, dedicated memcache for faster load times, Google Cloud Storage and BigQuery, to name a few. Check it out.

Storage was another hot topic this week. One of the week’s most talked-about posts comes from Mosha Pasumansky, database luminary and technical lead for Dremel/BigQuery, in which he discusses Capacitor, BigQuery’s columnar storage format. Long story short, Capacitor advances the state of the art in columnar data encoding, and when combined with Google Cloud Platform’s Colossus distributed file system, provides super-fast and secure queries with little to no effort on the part of BigQuery users. Woohoo!
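To see in miniature why column-oriented layouts encode so compactly, here’s a rough sketch of run-length encoding a single column. To be clear, this is a generic illustration rather than Capacitor’s actual format, and the class and sample data below are invented for the example:

using System;
using System.Collections.Generic;

// Toy illustration of run-length encoding one column of a table.
// NOT Capacitor's real on-disk format; names and data are made up.
class ColumnarRleDemo
{
    // Collapse consecutive repeats into (value, count) pairs. Sorted or
    // low-cardinality columns produce long runs, which is one reason
    // column-oriented storage compresses so well.
    static List<(string Value, int Count)> RleEncode(IEnumerable<string> column)
    {
        var runs = new List<(string Value, int Count)>();
        foreach (var v in column)
        {
            if (runs.Count > 0 && runs[runs.Count - 1].Value == v)
            {
                var last = runs[runs.Count - 1];
                runs[runs.Count - 1] = (last.Value, last.Count + 1);
            }
            else
            {
                runs.Add((v, 1));
            }
        }
        return runs;
    }

    static void Main()
    {
        // A "country" column from a sorted table: long runs of equal values.
        var column = new[] { "BR", "BR", "BR", "US", "US", "US", "US", "ZA" };
        foreach (var (value, count) in RleEncode(column))
            Console.WriteLine($"{value} x{count}");  // BR x3, US x4, ZA x1
    }
}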

But sometimes you need to do something a little less flashy, like resize a persistent disk. If you’ve been wondering how to do that on Google Compute Engine, wonder no more: GCP developer advocate Mete Atamel has put together a one-minute video tutorial on YouTube that walks you through the basic steps. Best of all, you don’t even need to reboot the associated VM!
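For reference, the command-line version of that flow looks roughly like the following; the disk name, size, and zone are placeholders, and the exact file-system step depends on your setup (the video covers the details):

# Grow the persistent disk; the attached VM can keep running.
gcloud compute disks resize example-disk --size 200 --zone us-central1-a

# Then, from inside the VM, grow the file system into the new space,
# e.g. for an ext4 data disk (the device path here is an assumption):
sudo resize2fs /dev/sdb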

Finally, Google was at OpenStack Summit in Austin this week, where Google partner CoreOS demonstrated ‘Stackanetes’: running OpenStack as a managed Kubernetes service. You can also hear Google product manager Craig McLuckie discuss the benefits of this approach on The Cube with Wikibon analysts Stu Miniman and Brian Gracely. McLuckie also shares his thoughts on working with the open source community, and Google’s evolution from an Internet company to a Cloud company.

DeepMind moves to TensorFlow



At DeepMind, we conduct state-of-the-art research on a wide range of algorithms, from deep learning and reinforcement learning to systems neuroscience, towards the goal of building Artificial General Intelligence. A key factor in facilitating rapid progress is the software environment used for research. For nearly four years, the open source Torch7 machine learning library has served as our primary research platform, combining excellent flexibility with very fast runtime execution, enabling rapid prototyping. Our team has been proud to contribute to the open source project in capacities ranging from occasional bug fixes to being core maintainers of several crucial components.

With Google’s recent open source release of TensorFlow, we initiated a project to test its suitability for our research environment. Over the last six months, we have re-implemented more than a dozen different projects in TensorFlow to develop a deeper understanding of its potential use cases and the tradeoffs for research. Today we are excited to announce that DeepMind will start using TensorFlow for all our future research. We believe that TensorFlow will enable us to execute our ambitious research goals at much larger scale and an even faster pace, providing us with a unique opportunity to further accelerate our research programme.

As one of the core contributors of Torch7, I have had the pleasure of working closely with an excellent community of developers and researchers, and it has been amazing to see all the great work that has been built on top of the platform and the impact this has had on the field. Torch7 is currently being used by Facebook, Twitter, and many start-ups and academic labs as well as DeepMind, and I’m proud of the significant contribution it has made to a large community in both research and industry. Our transition to TensorFlow represents a new chapter, and I feel very excited about the prospect of DeepMind contributing heavily to another great open source machine learning platform that everyone can use to advance the state of the art.

Mobile Ads Garage Episode 2: Implementing AdMob Banner Ads

The Mobile Ads Garage has returned with its second episode. In this video, you'll see screencasts and detailed breakdowns of how to implement banner ads for both iOS and Android. Plus, you'll get links to guides, samples, and other great resources.


If you like the video, save the Mobile Ads Garage playlist to your YouTube Playlist collection and you'll never miss an episode.

We’d love to hear which AdMob features you’d like to learn more about. The comment sections for the videos are open, and you're welcome to toss out ideas for new episodes and examples you'd like to see. If you have a technical question relating to something discussed in one of the episodes, you can bring it to our support forum.

Improving Content ID for creators

[Cross-posted from the YouTube Creator blog]

At YouTube, one of our core values is a belief in the freedom of opportunity. We believe anyone should have the opportunity to earn money from the videos they create and turn their channels into successful businesses. That’s why we opened up the YouTube Partner Program nine years ago and why we remain the only platform where anyone with an idea and a camera can turn their videos into full-time jobs.

We understand just how important revenue is to our creator community, and we’ve been listening closely to concerns about the loss of monetization during the Content ID dispute process. Currently videos that are claimed and disputed don’t earn revenue for anyone, which is an especially frustrating experience for creators if that claim ends up being incorrect while a video racks up views in its first few days.

Today, we’re announcing a major step to help fix that frustrating experience. We’re developing a new solution that will allow videos to earn revenue while a Content ID claim is being disputed. Here’s how it will work: when both a creator and someone making a claim choose to monetize a video, we will continue to run ads on that video and hold the resulting revenue separately. Once the Content ID claim or dispute is resolved, we’ll pay out that revenue to the appropriate party.

We’re working on this new system now and hope to roll it out to all YouTube partners in the coming months. Here’s a closer look at how it’ll work once it’s live:


We strongly believe in fair use and believe that this improvement to Content ID will make a real difference. In addition to our work on the Content ID dispute process, we’re also paying close attention to creators’ concerns about copyright claims on videos they believe may be fair use. We want to help both the YouTube community and copyright owners alike better understand what fair use looks like online, which is why we launched our fair use protection program last year and recently introduced new Help Center pages on this topic.

Even though Content ID claims are disputed less than 1% of the time, we agree that this process could be better. Making sure our Content ID tools are being used properly is deeply important to us, so we’ve built a dedicated team to monitor this. Using a combination of algorithms and manual review, this team has resolved millions of invalid claims in the last year alone, and acted on millions more before they impacted creators. The team also restricts feature access and even terminates a partner’s access to Content ID tools if we find they are repeatedly abusing these tools.

We will continue to invest in both people and technology to make sure that Content ID keeps working for creators and rightsholders. We want to thank everyone who’s shared their concerns about unintended effects from Content ID claims. It’s allowed us to create a better system for everyone and we hope to share more updates soon.

IMA SDK for Android now available on JCenter

In our ongoing efforts to make developing with the IMA SDK easier, we’re pleased to announce that as of version 3.2.1, the IMA SDK for Android is now available on JCenter.

With this release, it's now quicker than ever to integrate with the IMA SDK. Simply make sure you include JCenter in your list of repositories:


repositories {
    jcenter()
}

Then, in your build.gradle's dependencies, include the following compile directive:


dependencies {
    compile 'com.google.ads.interactivemedia.v3:interactivemedia:3.2.1'
}

If you're modifying an existing sample, make sure to remove the IMA SDK JAR file from your libs folder. This directive includes the SDK, and if you already have the SDK JAR in libs, you’ll get errors for having two copies of the same library.

If you have any questions about these changes, feel free to contact us via the support forum.