
Revisiting the Unreasonable Effectiveness of Data



There has been remarkable success in the field of computer vision over the past decade, much of which can be directly attributed to the application of deep learning models to this machine perception task. Furthermore, since 2012 there have been significant advances in the representational capabilities of these systems due to (a) deeper models with higher complexity, (b) increased computational power, and (c) the availability of large-scale labeled data. And while every year we get further increases in computational power and model complexity (from the 7-layer AlexNet to the 101-layer ResNet), available datasets have not scaled accordingly. A 101-layer ResNet, with significantly more capacity than AlexNet, is still trained with the same 1M images from ImageNet circa 2011. As researchers, we have always wondered: if we scale up the amount of training data 10x, will the accuracy double? How about 100x, or maybe even 300x? Will the accuracy plateau, or will we continue to see increasing gains with more and more data?
[Figure: While GPU computation power and model sizes have continued to increase over the last five years, the size of the largest training dataset has surprisingly remained constant.]
In our paper, “Revisiting Unreasonable Effectiveness of Data in Deep Learning Era”, we take the first steps towards clearing the clouds of mystery surrounding the relationship between “enormous data” and deep learning. Our goal was to explore: (a) if visual representations can still be improved by feeding more and more images with noisy labels to currently existing algorithms; (b) the nature of the relationship between data and performance on standard vision tasks such as classification, object detection and image segmentation; and (c) state-of-the-art models for all the tasks in computer vision using large-scale learning.

Of course, the elephant in the room is where we can obtain a dataset that is 300x larger than ImageNet. At Google, we have been continuously working on building such datasets automatically to improve computer vision algorithms. Specifically, we have built an internal dataset of 300M images that are labeled with 18,291 categories, which we call JFT-300M. The images are labeled using an algorithm that uses a complex mixture of raw web signals, connections between web pages, and user feedback. This results in over one billion labels for the 300M images (a single image can have multiple labels). Of the billion image labels, approximately 375M are selected via an algorithm that aims to maximize the label precision of selected images. However, there is still considerable noise in the labels: approximately 20% of the labels for selected images are noisy. Since there is no exhaustive annotation, we have no way to estimate the recall of the labels.

Our experimental results validate some of the hypotheses but also generate some unexpected surprises:
  • Better Representation Learning Helps. Our first observation is that large-scale data helps in representation learning, which in turn improves the performance on each vision task we study. Our findings suggest that a collective effort to build a large-scale dataset for pretraining is important. They also suggest a bright future for unsupervised and semi-supervised representation learning approaches. It seems the scale of data continues to overpower noise in the label space.
  • Performance increases linearly with orders of magnitude of training data. Perhaps the most surprising finding is the relationship between performance on vision tasks and the amount of training data (log-scale) used for representation learning. We find that this relationship is still linear! Even at 300M training images, we do not observe any plateauing effect for the tasks studied (see the curve-fitting sketch after this list).
[Figure: Object detection performance when pre-trained from scratch on different subsets of JFT-300M. The x-axis shows dataset size on a log scale; the y-axis shows detection performance in mAP@[.5,.95] on the COCO minival subset.]
  • Capacity is Crucial. We also observe that to fully exploit 300M images, one needs higher-capacity (deeper) models. For example, with ResNet-50 the gain on the COCO object detection benchmark is much smaller (1.87%) than with ResNet-152 (3%).
  • New state of the art results. Our paper presents new state-of-the-art results on several benchmarks using the models learned from JFT-300M. For example, a single model (without any bells and whistles) can now achieve 37.4 AP as compared to 34.3 AP on the COCO detection benchmark.
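To make the log-linear trend concrete, here is a minimal curve-fitting sketch in Python. The mAP values below are illustrative placeholders, not the paper's measurements; the point is only the functional form, performance ≈ a·log10(N) + b.

```python
import numpy as np

# Illustrative placeholder numbers only -- not the paper's actual measurements.
dataset_sizes = np.array([10e6, 30e6, 100e6, 300e6])  # pre-training images
coco_map = np.array([31.0, 33.0, 35.2, 37.4])         # hypothetical mAP@[.5,.95]

# Fit performance = a * log10(N) + b, the log-linear trend reported in the paper.
a, b = np.polyfit(np.log10(dataset_sizes), coco_map, deg=1)
print(f"gain per 10x more data: {a:.2f} mAP")

# Extrapolate (cautiously) to a hypothetical 1B-image dataset.
print(f"predicted mAP at 1e9 images: {a * np.log10(1e9) + b:.1f}")
```

Under these toy numbers, each 10x increase in data buys a roughly constant mAP improvement, which is exactly what "no plateauing effect" means on a log scale.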
It is important to highlight that the training regime, learning schedules and parameters we used are based on our understanding of training ConvNets with 1M images from ImageNet. Since we do not search for the optimal set of hyper-parameters in this work (which would have required considerable computational effort), it is highly likely that these results are not the best ones you can obtain when using this scale of data. Therefore, we consider the quantitative performance reported to be an underestimate of the actual impact of data.

This work does not focus on task-specific data, such as exploring whether more bounding boxes affect model performance. We believe that, although challenging, obtaining large-scale task-specific data should be a focus of future study. Furthermore, building a dataset of 300M images should not be an end goal; as a community, we should explore whether models continue to improve in the regime of even larger (1 billion+ image) datasets.

Core Contributors
Chen Sun, Abhinav Shrivastava, Saurabh Singh, and Abhinav Gupta

Acknowledgments
This work would not have been possible without the significant efforts of the Image Understanding and Expander teams at Google who built the massive JFT dataset. We would specifically like to thank Tom Duerig, Neil Alldrin, Howard Zhou, Lu Chen, David Cai, Gal Chechik, Zheyun Feng, Xiangxin Zhu and Rahul Sukthankar for their help. Also big thanks to the VALE team for APIs and specifically, Jonathan Huang, George Papandreou, Liang-Chieh Chen and Kevin Murphy for helpful discussions.

KDD 2015 Best Research Paper Award: “Algorithms for Public-Private Social Networks”



The 21st ACM Conference on Knowledge Discovery and Data Mining (KDD’15), one of the main venues for academic and industry research in data management, information retrieval, data mining and machine learning, was held last week in Sydney, Australia. In the past several years, Google has been actively participating in KDD, with several Googlers presenting work at the conference in the research and industrial tracks. This year Googlers presented 12 papers at KDD (listed below), all of which are freely available in the ACM Digital Library.

One of these papers, Efficient Algorithms for Public-Private Social Networks, co-authored by Googlers Ravi Kumar, Silvio Lattanzi, and Vahab Mirrokni, former Google intern Alessandro Epasto, and research visitor Flavio Chierichetti, was awarded Best Research Paper. The inspiration for this paper comes from studying social networks and the importance of addressing privacy issues in analyzing such networks.

Privacy issues dictate the way information is shared among the members of the social network. In the simplest case, a user can mark some of her friends as private; this would make the connections (edges) between this user and these friends visible only to the user. In a different instantiation of privacy, a user can be a member of a private group; in this case, all the edges among the group members are to be considered private. Thus, each user in the social network has her own view of the link structure of the network. These privacy issues also influence the way in which the network itself can be viewed and processed by algorithms. For example, one cannot use the list of private friends of user X for suggesting potential friends or public news items to another user on the network, but one can use this list for the purpose of suggesting friends for user X.

As a result, enforcing these privacy guarantees translates to solving a different algorithmic problem for each user in the network, and for this reason, developing algorithms that process these social graphs and respect these privacy guarantees can become computationally expensive. In a recent study, Dey et al. crawled a snapshot of 1.4 million New York City Facebook users and reported that 52.6% of them hid their friends list. As more users make a larger portion of their social neighborhoods private, these computational issues become more important.

Motivated by the above, this paper introduces the public-private model of graphs, where each user (node) in the public graph has an associated private graph. In this model, the public graph is visible to everyone, and the private graph at each node is visible only to each specific user. Thus, any given user sees their graph as a union of their private graph and the public graph.
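
To make the model concrete, here is a minimal Python sketch (the class and method names are ours, purely for illustration, and do not come from the paper): private edges are stored per owning user, and a user's view of the graph is the union of the public edge set with their own private edges.

```python
from collections import defaultdict

class PublicPrivateGraph:
    """Minimal illustrative sketch of the public-private graph model."""

    def __init__(self):
        self.public = defaultdict(set)   # edges visible to everyone
        # per-user private adjacency: private[owner][node] -> set of neighbors
        self.private = defaultdict(lambda: defaultdict(set))

    def add_public_edge(self, u, v):
        self.public[u].add(v)
        self.public[v].add(u)

    def add_private_edge(self, owner, u, v):
        # An edge visible only in `owner`'s view (e.g. a friend marked private).
        self.private[owner][u].add(v)
        self.private[owner][v].add(u)

    def neighbors_as_seen_by(self, viewer, node):
        # A user's view is the union of the public graph and their private graph.
        return self.public[node] | self.private[viewer][node]

g = PublicPrivateGraph()
g.add_public_edge("alice", "bob")
g.add_private_edge("alice", "alice", "carol")     # only alice sees alice-carol
print(g.neighbors_as_seen_by("alice", "alice"))   # {'bob', 'carol'}
print(g.neighbors_as_seen_by("bob", "alice"))     # {'bob'}
```

Note that even a simple neighborhood query already depends on who is asking, which is exactly why naively re-running a graph algorithm once per user quickly becomes expensive.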

From an algorithmic point of view, the paper explores two powerful computational paradigms for efficiently studying large graphs, namely sketching and sampling, and focuses on key problems in social networks such as similarity ranking and clustering. In the sketching model, the paper shows how to efficiently approximate the neighborhood function, which in turn can be used to approximate various notions of centrality for each node; centrality scores such as PageRank have important applications in ranking and recommender systems. In the sampling model, the paper focuses on all-pair shortest path distances, node similarities, and correlation clustering, and develops algorithms that compute these notions efficiently on a given public-private graph. The paper also illustrates the effectiveness of the model and the computational efficiency of the algorithms through experiments on real-world social networks.
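
To give a flavor of why sketching helps here, below is a minimal bottom-k min-hash sketch in Python. This is a generic mergeable distinct-count sketch used as a stand-in, not the paper's exact construction: the key property is that sketches computed once on the public graph can be merged with a small sketch of a user's private edges, so per-user neighborhood-size estimates do not require re-processing the whole graph.

```python
import hashlib

K = 64  # sketch size; larger K lowers the variance of the estimate

def h(x):
    """Hash an element to a pseudo-uniform float in [0, 1)."""
    d = hashlib.sha1(str(x).encode()).digest()
    return int.from_bytes(d[:8], "big") / 2**64

def sketch(elements):
    """Bottom-k sketch: the k smallest hash values of a set."""
    return sorted({h(x) for x in elements})[:K]

def merge(s1, s2):
    """Sketches are mergeable: union the hashes, keep the k smallest."""
    return sorted(set(s1) | set(s2))[:K]

def estimate_size(s):
    """Estimate the number of distinct elements from a bottom-k sketch."""
    if len(s) < K:
        return len(s)          # fewer than k distinct elements seen: exact
    return (K - 1) / s[-1]     # classic bottom-k estimator

# Public neighborhood sketch, computed once and shared by all users ...
public_sketch = sketch(range(10_000))
# ... merged per user with a small sketch of that user's private neighbors.
user_sketch = merge(public_sketch, sketch(f"private-{i}" for i in range(500)))
print(round(estimate_size(user_sketch)))  # roughly 10,500 (estimates vary)
```

The per-user work is proportional to the sketch size, not the graph size, which is the essence of how the sketching paradigm sidesteps the one-problem-per-user blowup.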

The public-private model is an abstraction that can be used to develop efficient social network algorithms. This work leaves a number of interesting open research directions, such as obtaining efficient algorithms for the densest subgraph/community detection problems, influence maximization, computing other pairwise similarity scores, and, most importantly, recommendation systems.

KDD’15 Papers, co-authored by Googlers:

Efficient Algorithms for Public-Private Social Networks (Best Paper Award)
Flavio Chierichetti, Alessandro Epasto, Ravi Kumar, Silvio Lattanzi, Vahab Mirrokni

Large-Scale Distributed Bayesian Matrix Factorization using Stochastic Gradient MCMC
Sungjin Ahn, Anoop Korattikara, Nathan Liu, Suju Rajan, Max Welling

TimeMachine: Timeline Generation for Knowledge-Base Entities
Tim Althoff, Xin Luna Dong, Kevin Murphy, Safa Alai, Van Dang, Wei Zhang

Algorithmic Cartography: Placing Points of Interest and Ads on Maps
Mohammad Mahdian, Okke Schrijvers, Sergei Vassilvitskii

Stream Sampling for Frequency Cap Statistics
Edith Cohen

Dirichlet-Hawkes Processes with Applications to Clustering Continuous-Time Document Streams
Nan Du, Mehrdad Farajtabar, Amr Ahmed, Alexander J. Smola, Le Song

Adaptation Algorithm and Theory Based on Generalized Discrepancy
Corinna Cortes, Mehryar Mohri, Andrés Muñoz Medina (now at Google)

Estimating Local Intrinsic Dimensionality
Laurent Amsaleg, Oussama Chelly, Teddy Furon, Stéphane Girard, Michael E. Houle, Ken-ichi Kawarabayashi, Michael Nett

Unified and Contrasting Cuts in Multiple Graphs: Application to Medical Imaging Segmentation
Chia-Tung Kuo, Xiang Wang, Peter Walker, Owen Carmichael, Jieping Ye, Ian Davidson

Going In-depth: Finding Longform on the Web
Virginia Smith, Miriam Connor, Isabelle Stanton

Annotating needles in the haystack without looking: Product information extraction from emails
Weinan Zhang, Amr Ahmed, Jie Yang, Vanja Josifovski, Alexander Smola

Focusing on the Long-term: It's Good for Users and Business
Diane Tang, Henning Hohnhold, Deirdre O'Brien