
Supercharge your Computer Vision models with the TensorFlow Object Detection API

Crossposted on the Google Research Blog

At Google, we develop flexible state-of-the-art machine learning (ML) systems for computer vision that not only can be used to improve our products and services, but also spur progress in the research community. Creating accurate ML models capable of localizing and identifying multiple objects in a single image remains a core challenge in the field, and we invest a significant amount of time training and experimenting with these systems.
Detected objects in a sample image (from the COCO dataset) made by one of our models.
Image credit: Michael Miley, original image
Last October, our in-house object detection system achieved new state-of-the-art results, and placed first in the COCO detection challenge. Since then, this system has generated results for a number of research publications [1, 2, 3, 4, 5, 6, 7] and has been put to work in Google products such as NestCam, the similar items and style ideas feature in Image Search, and street number and name detection in Street View.

Today we are happy to make this system available to the broader research community via the TensorFlow Object Detection API. This codebase is an open source framework built on top of TensorFlow that makes it easy to construct, train and deploy object detection models. Our goal in designing this system was to support state-of-the-art models while allowing for rapid exploration and research. Our first release contains a selection of trainable detection models.

The SSD models that use MobileNet are lightweight, so that they can be comfortably run in real time on mobile devices. Our winning COCO submission in 2016 used an ensemble of Faster RCNN models, which are more computationally intensive but significantly more accurate. For more details on the performance of these models, see our CVPR 2017 paper [1].

Are you ready to get started?
We’ve certainly found this code to be useful for our computer vision needs, and we hope that you will as well. Contributions to the codebase are welcome, and please stay tuned for our own further updates to the framework. To get started, download the code here and try detecting objects in some of your own images using the Jupyter notebook, or training your own pet detector on Cloud ML Engine!
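If you just want a feel for what inference looks like once you have a model, here is a rough sketch of running a frozen detection graph exported by the API on a single image. The file names are placeholders, and the tensor names follow the conventions the API's exported graphs used at the time of writing, so treat the details as assumptions rather than a recipe.

```python
# Rough inference sketch for a frozen detection graph exported by the API.
# 'frozen_inference_graph.pb' and 'image.jpg' are placeholder paths; the tensor
# names ('image_tensor', 'detection_boxes', ...) follow the API's exported graph
# conventions and should be checked against your own model.
import numpy as np
import tensorflow as tf
from PIL import Image

graph_def = tf.GraphDef()
with tf.gfile.GFile('frozen_inference_graph.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())

graph = tf.Graph()
with graph.as_default():
    tf.import_graph_def(graph_def, name='')

# Load one RGB image as a uint8 batch of size 1.
image = np.expand_dims(np.array(Image.open('image.jpg').convert('RGB')), axis=0)

with tf.Session(graph=graph) as sess:
    boxes, scores, classes = sess.run(
        ['detection_boxes:0', 'detection_scores:0', 'detection_classes:0'],
        feed_dict={'image_tensor:0': image})

# Print the five highest-scoring detections (box coordinates are normalized).
for box, score, cls in zip(boxes[0][:5], scores[0][:5], classes[0][:5]):
    print(int(cls), float(score), box)
```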

By Jonathan Huang, Research Scientist and Vivek Rathod, Software Engineer

Acknowledgements
The release of the TensorFlow Object Detection API and the pre-trained model zoo has been the result of widespread collaboration among Google researchers with feedback and testing from product groups. In particular we want to highlight the contributions of the following individuals:

Core Contributors: Derek Chow, Chen Sun, Menglong Zhu, Matthew Tang, Anoop Korattikara, Alireza Fathi, Ian Fischer, Zbigniew Wojna, Yang Song, Sergio Guadarrama, Jasper Uijlings, Viacheslav Kovalevskyi, Kevin Murphy

Also special thanks to: Andrew Howard, Rahul Sukthankar, Vittorio Ferrari, Tom Duerig, Chuck Rosenberg, Hartwig Adam, Jing Jing Long, Victor Gomes, George Papandreou, Tyler Zhu

References
  1. Speed/accuracy trade-offs for modern convolutional object detectors, Huang et al., CVPR 2017 (paper describing this framework)
  2. Towards Accurate Multi-person Pose Estimation in the Wild, Papandreou et al., CVPR 2017
  3. YouTube-BoundingBoxes: A Large High-Precision Human-Annotated Data Set for Object Detection in Video, Real et al., CVPR 2017 (see also our blog post)
  4. Beyond Skip Connections: Top-Down Modulation for Object Detection, Shrivastava et al., arXiv preprint arXiv:1612.06851, 2016
  5. Spatially Adaptive Computation Time for Residual Networks, Figurnov et al., CVPR 2017
  6. AVA: A Video Dataset of Spatio-temporally Localized Atomic Visual Actions, Gu et al., arXiv preprint arXiv:1705.08421, 2017
  7. MobileNets: Efficient convolutional neural networks for mobile vision applications, Howard et al., arXiv preprint arXiv:1704.04861, 2017

MobileNets: Open Source Models for Efficient On-Device Vision

Crossposted on the Google Research Blog

Deep learning has fueled tremendous progress in the field of computer vision in recent years, with neural networks repeatedly pushing the frontier of visual recognition technology. While many of those technologies such as object, landmark, logo and text recognition are provided for internet-connected devices through the Cloud Vision API, we believe that the ever-increasing computational power of mobile devices can enable the delivery of these technologies into the hands of our users, anytime, anywhere, regardless of internet connection. However, visual recognition for on-device and embedded applications poses many challenges — models must run quickly with high accuracy in a resource-constrained environment, making use of limited computation, power and space.

Today we are pleased to announce the release of MobileNets, a family of mobile-first computer vision models for TensorFlow, designed to effectively maximize accuracy while being mindful of the restricted resources for an on-device or embedded application. MobileNets are small, low-latency, low-power models parameterized to meet the resource constraints of a variety of use cases. They can be built upon for classification, detection, embeddings and segmentation similar to how other popular large scale models, such as Inception, are used.
Example use cases include detection, fine-grained classification, attributes, and geo-localization.
This release contains the model definition for MobileNets in TensorFlow using TF-Slim, as well as 16 pre-trained ImageNet classification checkpoints for use in mobile projects of all sizes. The models can be run efficiently on mobile devices with TensorFlow Mobile.
Model Checkpoint          Million MACs    Million Parameters    Top-1 Accuracy    Top-5 Accuracy
MobileNet_v1_1.0_224      569             4.24                  70.7              89.5
MobileNet_v1_1.0_192      418             4.24                  69.3              88.9
MobileNet_v1_1.0_160      291             4.24                  67.2              87.5
MobileNet_v1_1.0_128      186             4.24                  64.1              85.3
MobileNet_v1_0.75_224     317             2.59                  68.4              88.2
MobileNet_v1_0.75_192     233             2.59                  67.4              87.3
MobileNet_v1_0.75_160     162             2.59                  65.2              86.1
MobileNet_v1_0.75_128     104             2.59                  61.8              83.6
MobileNet_v1_0.50_224     150             1.34                  64.0              85.4
MobileNet_v1_0.50_192     110             1.34                  62.1              84.0
MobileNet_v1_0.50_160      77             1.34                  59.9              82.5
MobileNet_v1_0.50_128      49             1.34                  56.2              79.6
MobileNet_v1_0.25_224      41             0.47                  50.6              75.0
MobileNet_v1_0.25_192      34             0.47                  49.0              73.6
MobileNet_v1_0.25_160      21             0.47                  46.0              70.7
MobileNet_v1_0.25_128      14             0.47                  41.3              66.2
Choose the right MobileNet model to fit your latency and size budget. The size of the network in memory and on disk is proportional to the number of parameters. The latency and power usage of the network scale with the number of Multiply-Accumulates (MACs), which measures the number of fused multiply-and-add operations. Top-1 and Top-5 accuracies are measured on the ILSVRC dataset.
We are excited to share MobileNets with the open source community. Information for getting started can be found at the TensorFlow-Slim Image Classification Library. To learn how to run models on-device please go to TensorFlow Mobile. You can read more about the technical details of MobileNets in our paper, MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications.
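For a rough idea of what using one of these checkpoints looks like in code, here is a minimal sketch that builds MobileNet with TF-Slim and restores pre-trained weights. It assumes the slim model library from tensorflow/models is on your Python path; the checkpoint file name is a placeholder for whichever one you download.

```python
# Minimal sketch: build MobileNet_v1 with TF-Slim and restore a pre-trained
# checkpoint. Assumes the slim model library (tensorflow/models) is importable
# as `nets`; the checkpoint path is a placeholder.
import tensorflow as tf
from nets import mobilenet_v1

slim = tf.contrib.slim

images = tf.placeholder(tf.float32, [None, 224, 224, 3])  # preprocessed RGB batch
with slim.arg_scope(mobilenet_v1.mobilenet_v1_arg_scope(is_training=False)):
    logits, end_points = mobilenet_v1.mobilenet_v1(
        images, num_classes=1001, depth_multiplier=1.0, is_training=False)

saver = tf.train.Saver()
with tf.Session() as sess:
    saver.restore(sess, 'mobilenet_v1_1.0_224.ckpt')  # placeholder checkpoint path
    # Feed a preprocessed 224x224 image and read end_points['Predictions']
    # to get class probabilities.
```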

By Andrew G. Howard, Senior Software Engineer and Menglong Zhu, Software Engineer

Acknowledgements
MobileNets were made possible with the hard work of many engineers and researchers throughout Google. Specifically we would like to thank:

Core Contributors: Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, Hartwig Adam

Special thanks to: Benoit Jacob, Skirmantas Kligys, George Papandreou, Liang-Chieh Chen, Derek Chow, Sergio Guadarrama, Jonathan Huang, Andre Hentz, Pete Warden

Open sourcing the Firebase SDKs

Today, at Google I/O 2017, we are pleased to announce that we are taking our first steps towards open sourcing our client libraries. By making our SDKs open, we’re aiming to show our commitment to greater transparency and to building a stronger developer community. To help further that goal, we’ll be using GitHub as a core part of our own toolchain to enable all of you to contribute as well. As you find issues in our code, from inconsistent style to bugs, you can file issues through the standard GitHub issue tracker. You can also find our project in the Google Open Source directory. We’re really looking forward to your pull requests!

What’s open?

We’re starting by open sourcing several products in our iOS, JavaScript, Java, Node.js and Python SDKs. We'll be looking at open sourcing our Android SDK as well. The SDKs are being licensed under Apache 2.0, the same flexible license as existing Firebase open source projects like FirebaseUI.

Let's take a look at each repo:

Firebase iOS SDK 4.0

https://github.com/firebase/firebase-ios-sdk

With the launch of the Firebase iOS 4.0 SDKs we have made several improvements to the developer experience, such as more idiomatic API names for our Swift users. By open sourcing our iOS SDKs we hope to provide an additional avenue for you to give us feedback on such features. For this first release we are open sourcing our Realtime Database, Auth, Cloud Storage and Cloud Messaging (FCM) SDKs, but going forward we intend to release more.

Because we aren't yet able to open source some of the Firebase components, the full product build process isn't available. While you can use this repo to build a FirebaseDev pod, our libraries distributed through CocoaPods will continue to be static frameworks for the time being. We are continually looking for ways to improve the developer experience, however you integrate.

Our GitHub README provides more details on how you build, test and contribute to our iOS SDKs.

Firebase JavaScript SDK 4.0

https://github.com/firebase/firebase-js-sdk

We are excited to announce that we are open sourcing our Realtime Database, Cloud Storage and Cloud Messaging (FCM) SDKs for JavaScript. We’ll have a couple of improvements hot on the heels of this initial release, including open sourcing Firebase Authentication. We are also in the process of releasing the source maps for our components, which we expect will really improve the debuggability of your app.

Our GitHub repo includes instructions on how you can build, test and contribute.

Firebase Admin SDKs

Node.js: https://github.com/firebase/firebase-admin-node
Java: https://github.com/firebase/firebase-admin-java
Python: https://github.com/firebase/firebase-admin-python

We are happy to announce that all three of our Admin SDKs for accessing Firebase in privileged environments are now fully open source, including our recently-launched Python SDK. While we continue to explore supporting more languages, we encourage you to use our source as inspiration to enable Firebase for your environment (and if you do, we'd love to hear about it!).
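As a taste of the Python Admin SDK, here is a small sketch that initializes the SDK with a service account and mints a custom auth token. The key file path and uid are placeholders for your own project's values; treat this as a sketch rather than a full quickstart.

```python
# Small sketch using the firebase-admin Python package (pip install firebase-admin).
# 'service-account.json' and 'some-uid' are placeholders for your own project.
import firebase_admin
from firebase_admin import auth, credentials

cred = credentials.Certificate('service-account.json')  # placeholder key file
firebase_admin.initialize_app(cred)

# Mint a custom token that a client app can use to sign in as this uid.
custom_token = auth.create_custom_token('some-uid')
print(custom_token)

# Verify an ID token sent from a client app (the token string would come
# from your client):
# decoded = auth.verify_id_token(id_token_from_client)
# print(decoded['uid'])
```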

We're really excited to see what you do with the updated SDKs. As always, reach out to us with feedback or questions in the Firebase-Talk Google Group, on Stack Overflow, via the Firebase Support team, and now on GitHub for SDK issues and pull requests! And to read about the other improvements to Firebase that launched at Google I/O, head over to the Firebase blog.

By Salman Qadri, Firebase Product Manager

Introducing tf-seq2seq: An Open Source Sequence-to-Sequence Framework in TensorFlow

Crossposted on the Google Research Blog

Last year, we announced Google Neural Machine Translation (GNMT), a sequence-to-sequence (“seq2seq”) model which is now used in Google Translate production systems. While GNMT achieved huge improvements in translation quality, its impact was limited by the fact that the framework for training these models was unavailable to external researchers.

Today, we are excited to introduce tf-seq2seq, an open source seq2seq framework in TensorFlow that makes it easy to experiment with seq2seq models and achieve state-of-the-art results. To that end, we made the tf-seq2seq codebase clean and modular, maintaining full test coverage and documenting all of its functionality.

Our framework supports various configurations of the standard seq2seq model, such as depth of the encoder/decoder, attention mechanism, RNN cell type, or beam size. This versatility allowed us to discover optimal hyperparameters and outperform other frameworks, as described in our paper, “Massive Exploration of Neural Machine Translation Architectures.”

A seq2seq model translating from Mandarin to English. At each time step, the encoder takes in one Chinese character and its own previous state (black arrow), and produces an output vector (blue arrow). The decoder then generates an English translation word-by-word, at each time step taking in the last word, the previous state, and a weighted combination of all the outputs of the encoder (aka attention [3], depicted in blue) and then producing the next English word. Please note that in our implementation we use wordpieces [4] to handle rare words.
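To make the encoder/attention/decoder loop in the figure a bit more concrete, here is a toy sketch written directly against TensorFlow's tf.contrib.seq2seq ops rather than the tf-seq2seq framework's own configuration files; every size, placeholder and name below is invented for illustration.

```python
# Toy encoder/attention/decoder sketch with tf.contrib.seq2seq (TF 1.x). This
# illustrates the general pattern only; it is not tf-seq2seq's own API, and all
# shapes and vocabulary sizes are made up.
import tensorflow as tf

batch, src_len, tgt_len, vocab, units = 32, 20, 20, 8000, 128

src = tf.placeholder(tf.int32, [batch, src_len])     # source token ids
tgt_in = tf.placeholder(tf.int32, [batch, tgt_len])  # shifted target token ids

src_emb = tf.contrib.layers.embed_sequence(src, vocab, units)
tgt_emb = tf.contrib.layers.embed_sequence(tgt_in, vocab, units)

# Encoder: consumes the source sequence and emits one output vector per position.
enc_out, _ = tf.nn.dynamic_rnn(
    tf.contrib.rnn.LSTMCell(units), src_emb, dtype=tf.float32)

# Decoder: at every step it attends over all encoder outputs.
attention = tf.contrib.seq2seq.LuongAttention(units, enc_out)
dec_cell = tf.contrib.seq2seq.AttentionWrapper(
    tf.contrib.rnn.LSTMCell(units), attention)
helper = tf.contrib.seq2seq.TrainingHelper(tgt_emb, tf.fill([batch], tgt_len))
decoder = tf.contrib.seq2seq.BasicDecoder(
    dec_cell, helper, dec_cell.zero_state(batch, tf.float32),
    output_layer=tf.layers.Dense(vocab))
outputs, _, _ = tf.contrib.seq2seq.dynamic_decode(decoder)
logits = outputs.rnn_output  # [batch, tgt_len, vocab], fed to a softmax loss
```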
In addition to machine translation, tf-seq2seq can also be applied to any other sequence-to-sequence task (i.e., learning to produce an output sequence given an input sequence), including summarization, image captioning, speech recognition, and conversational modeling. We carefully designed our framework to maintain this level of generality and provide tutorials, preprocessed data, and other utilities for machine translation.

We hope that you will use tf-seq2seq to accelerate (or kick off) your own deep learning research. We also welcome your contributions to our GitHub repository, where we have a variety of open issues that we would love to have your help with!

Acknowledgments:
We’d like to thank Eugene Brevdo, Melody Guan, Lukasz Kaiser, Quoc V. Le, Thang Luong, and Chris Olah for all their help. For a deeper dive into how seq2seq models work, please see the resources below.

References:
[1] Massive Exploration of Neural Machine Translation Architectures, Denny Britz, Anna Goldie, Minh-Thang Luong, Quoc Le
[2] Sequence to Sequence Learning with Neural Networks, Ilya Sutskever, Oriol Vinyals, Quoc V. Le. NIPS, 2014
[3] Neural Machine Translation by Jointly Learning to Align and Translate, Dzmitry Bahdanau, Kyunghyun Cho, Yoshua Bengio. ICLR, 2015
[4] Google’s Neural Machine Translation System: Bridging the Gap between Human and Machine Translation, Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Łukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, Jeffrey Dean. Technical Report, 2016
[5] Attention and Augmented Recurrent Neural Networks, Chris Olah, Shan Carter. Distill, 2016
[6] Neural Machine Translation and Sequence-to-sequence Models: A Tutorial, Graham Neubig
[7] Sequence-to-Sequence Models, TensorFlow.org

By Anna Goldie and Denny Britz, Research Software Engineer and Google Brain Resident, Google Brain Team

Noto Serif CJK is here!

Crossposted from the Google Developers Blog

Today, in collaboration with Adobe, we are responding to the call for Serif! We are pleased to announce Noto Serif CJK, the long-awaited companion to Noto Sans CJK released in 2014. Like Noto Sans CJK, Noto Serif CJK supports Simplified Chinese, Traditional Chinese, Japanese, and Korean, all in one font.

A serif-style CJK font goes by many names: Song (宋体) in Mainland China, Ming (明體) in Hong Kong, Macao and Taiwan, Minchō (明朝) in Japan, and Myeongjo (명조) or Batang (바탕) in Korea. The names and writing styles originated during the Song and Ming dynasties in China, when China's wood-block printing technique became popular. Characters were carved along the grain of the wood block. Horizontal strokes were easy to carve and vertical strokes were difficult; this resulted in thinner horizontal strokes and wider vertical ones. In addition, subtle triangular ornaments were added to the end of horizontal strokes to simulate Chinese Kai (楷体) calligraphy. This style continues today and has become a popular typeface style.

Serif fonts, which are considered more traditional with calligraphic aesthetics, are often used for long paragraphs of text such as body text of web pages or ebooks. Sans-serif fonts are often used for user interfaces of websites/apps and headings because of their simplicity and modern feeling.

Design of '永' ('eternity') in Noto Serif and Sans CJK. This ideograph is famous for having the most important elements of calligraphic strokes. It is often used to evaluate calligraphy or typeface design.

The Noto Serif CJK package offers the same features as Noto Sans CJK:

  • It has comprehensive character coverage for the four languages. This includes full coverage of CJK Ideographs with variation support for four regions, Kangxi radicals, Japanese Kana, Korean Hangul and other CJK symbols and letters in the Unicode Basic Multilingual Plane. It also provides limited coverage of CJK Ideographs in Plane 2 of Unicode, as necessary to support standards from China and Japan.


Simplified Chinese: Supports GB 18030 and China's latest standard, the Table of General Chinese Characters (通用规范汉字表), published in 2013.
Traditional Chinese: Supports BIG5; Traditional Chinese glyphs comply with the glyph standard of Taiwan's Ministry of Education (教育部國字標準字體).
Japanese: Supports all of the kanji in JIS X 0208, JIS X 0213, and JIS X 0212, covering all kanji in Adobe-Japan1-6.
Korean: Supports over 1.5 million archaic Hangul syllables and 11,172 modern syllables, as well as all CJK ideographs in KS X 1001 and KS X 1002; the best font for typesetting classic Korean documents in Hangul and Hanja, such as the Hunminjeongeum manuscript, a UNESCO World Heritage item.
Noto Serif CJK's support of character and glyph set standards for the four languages
  • It respects the diversity of regional writing conventions for the same character. The example below shows the four glyphs of '述' (describe), one for each language, with subtle differences among them.
From left to right are glyphs of '述' in S. Chinese, T. Chinese, Japanese and Korean. This character means "describe".
  • It is offered in seven weights: ExtraLight, Light, Regular, Medium, SemiBold, Bold, and Black. Noto Serif CJK supports 43,027 encoded characters and includes 65,535 glyphs, the maximum number of glyphs that can be included in a single font (a quick way to check a font's glyph count yourself is shown in the sketch after this list). The seven weights, when put together, have almost half a million glyphs. The weights are compatible with Google's Material Design standard fonts: Roboto, Noto Sans and Noto Serif (the Latin-Greek-Cyrillic fonts in the Noto family).
Seven weights of Noto Serif CJK
    • It supports vertical text layout and is compliant with the Unicode vertical text layout standard. The shape, orientation, and position of particular characters (e.g., brackets and kana letters) are changed when the writing direction of the text is vertical.
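If you are curious to verify the glyph counts on a downloaded font, a quick inspection with the third-party fontTools library looks something like the sketch below; the file name is a placeholder for whichever Noto Serif CJK file you grabbed, and .ttc collections hold several fonts selected via fontNumber.

```python
# Quick sketch using the third-party fontTools library (pip install fonttools).
# The file name is a placeholder; point it at the Noto Serif CJK file you downloaded.
from fontTools.ttLib import TTFont

font = TTFont('NotoSerifCJK-Regular.ttc', fontNumber=0)  # pick one font from the collection
print('full name  :', font['name'].getDebugName(4))      # the font's full name record
print('glyph count:', len(font.getGlyphOrder()))         # at most 65,535 glyphs per font
```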



    The sheer size of this project also required regional expertise! Glyph design would not have been possible without leading East Asian type foundries Changzhou SinoType Technology, Iwata Corporation, and Sandoll Communications.

Noto Serif CJK is open source under the SIL Open Font License, Version 1.1. We invite individual users to install and use these fonts in their favorite authoring apps, developers to bundle these fonts with their apps, and OEMs to embed them into their devices. The fonts are free for everyone to use!

Noto Serif CJK font download: https://www.google.com/get/noto
Noto Serif CJK on GitHub: https://github.com/googlei18n/noto-cjk
    Adobe's landing page for this release: http://adobe.ly/SourceHanSerif
    Source Han Serif on GitHub: https://github.com/adobe-fonts/source-han-serif/tree/release/

    By Xiangye Xiao and Jungshik Shin, Internationalization Engineering team

    A New Home for Google Open Source

    Free and open source software has been part of our technical and organizational foundation since Google’s early beginnings. From servers running the Linux kernel to an internal culture of being able to patch any other team's code, open source is part of everything we do. In return, we've released millions of lines of open source code, run programs like Google Summer of Code and Google Code-in, and sponsor open source projects and communities through organizations like Software Freedom Conservancy, the Apache Software Foundation, and many others.
    Today, we’re launching opensource.google.com, a new website for Google Open Source that ties together all of our initiatives with information on how we use, release, and support open source.

    This new site showcases the breadth and depth of our love for open source. It will contain the expected things: our programs, organizations we support, and a comprehensive list of open source projects we've released. But it also contains something unexpected: a look under the hood at how we "do" open source.

    Helping you find interesting open source

    One of the tenets of our philosophy towards releasing open source code is that "more is better." We don't know which projects will find an audience, so we help teams release code whenever possible. As a result, we have released thousands of projects under open source licenses ranging from larger products like TensorFlow, Go, and Kubernetes to smaller projects such as Light My Piano, Neuroglancer and Periph.io. Some are fully supported while others are experimental or just for fun. With so many projects spread across 100 GitHub organizations and our self-hosted Git service, it can be difficult to see the scope and scale of our open source footprint.

    To provide a more complete picture, we are launching a directory of our open source projects which we will expand over time. For many of these projects we are also adding information about how they are used inside Google. In the future, we hope to add more information about project lifecycle and maturity.

    How we do open source

    Open source is about more than just code; it's also about community and process. Participating in open source projects and communities as a large corporation comes with its own unique set of challenges. In 2014, we helped form the TODO Group, which provides a forum to collaborate and share best practices among companies that are deeply committed to open source. Inspired by many discussions we've had over the years, today we are publishing our internal documentation for how we do open source at Google.

These docs explain the process we follow for releasing new open source projects and submitting patches to others' projects, as well as how we manage the open source code that we bring into the company and use ourselves. But in addition to the how, they outline why we do things the way we do, such as why we only use code under certain licenses or why we require contributor license agreements for all patches we receive.

    Our policies and procedures are informed by many years of experience and lessons we've learned along the way. We know that our particular approach to open source might not be right for everyone—there's more than one way to do open source—and so these docs should not be read as a "how-to" guide. Similar to how it can be valuable to read another engineer's source code to see how they solved a problem, we hope that others find value in seeing how we approach and think about open source at Google.

    To hear a little more about the backstory of the new Google Open Source site, we invite you to listen to the latest episode from our friends at The Changelog. We hope you enjoy exploring the new site!

    By Will Norris, Open Source Programs Office

    An Upgrade to SyntaxNet, New Models and a Parsing Competition

    At Google, we continuously improve the language understanding capabilities used in applications ranging from generation of email responses to translation. Last summer, we open-sourced SyntaxNet, a neural-network framework for analyzing and understanding the grammatical structure of sentences. Included in our release was Parsey McParseface, a state-of-the-art model that we had trained for analyzing English, followed quickly by a collection of pre-trained models for 40 additional languages, which we dubbed Parsey's Cousins. While we were excited to share our research and to provide these resources to the broader community, building machine learning systems that work well for languages other than English remains an ongoing challenge. We are excited to announce a few new research resources, available now, that address this problem.

    SyntaxNet Upgrade
    We are releasing a major upgrade to SyntaxNet. This upgrade incorporates nearly a year’s worth of our research on multilingual language understanding, and is available to anyone interested in building systems for processing and understanding text. At the core of the upgrade is a new technology that enables learning of richly layered representations of input sentences. More specifically, the upgrade extends TensorFlow to allow joint modeling of multiple levels of linguistic structure, and to allow neural-network architectures to be created dynamically during processing of a sentence or document.

Our upgrade makes it easy, for example, to build character-based models that learn to compose individual characters into words (e.g. ‘c-a-t’ spells ‘cat’). By doing so, the models can learn that words can be related to each other because they share common parts (e.g. ‘cats’ is the plural of ‘cat’ and shares the same stem; ‘wildcat’ is a type of ‘cat’). Parsey and Parsey’s Cousins, on the other hand, operated over sequences of words. As a result, they were forced to memorize words seen during training and relied mostly on the context to determine the grammatical function of previously unseen words.
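To make the character-composition idea concrete, here is a toy sketch (not SyntaxNet's actual code) of composing character embeddings into word vectors with a small RNN; every name and size below is invented for illustration.

```python
# Toy sketch of the character-based idea: build each word's vector from its
# character embeddings with a small RNN, so that related spellings such as
# 'cat', 'cats' and 'wildcat' yield related representations. Not SyntaxNet's
# implementation; sizes and placeholders are made up.
import tensorflow as tf

chars = tf.placeholder(tf.int32, [None, None])  # [num_words, max_word_length], char ids
lengths = tf.placeholder(tf.int32, [None])      # true number of characters in each word

char_embeddings = tf.contrib.layers.embed_sequence(chars, vocab_size=256, embed_dim=32)
_, word_vectors = tf.nn.dynamic_rnn(
    tf.contrib.rnn.GRUCell(64), char_embeddings,
    sequence_length=lengths, dtype=tf.float32)
# word_vectors: one 64-dimensional vector per word, computed purely from spelling,
# which a parser could consume instead of (or alongside) a word-lookup embedding.
```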

As an example, consider the following (meaningless but grammatically correct) sentence: “The gostak distims the doshes.”
This sentence was originally coined by Andrew Ingraham, who explained: “You do not know what this means; nor do I. But if we assume that it is English, we know that the doshes are distimmed by the gostak. We know too that one distimmer of doshes is a gostak.” Systematic patterns in morphology and syntax allow us to guess the grammatical function of words even when they are completely novel: we understand that ‘doshes’ is the plural of the noun ‘dosh’ (similar to the ‘cats’ example above) or that ‘distims’ is the third person singular of the verb ‘distim’. Based on this analysis we can then derive the overall structure of this sentence even though we have never seen the words before.

    ParseySaurus
    To showcase the new capabilities provided by our upgrade to SyntaxNet, we are releasing a set of new pretrained models called ParseySaurus. These models use the character-based input representation mentioned above and are thus much better at predicting the meaning of new words based both on their spelling and how they are used in context. The ParseySaurus models are far more accurate than Parsey’s Cousins (reducing errors by as much as 25%), particularly for morphologically-rich languages like Russian, or agglutinative languages like Turkish and Hungarian. In those languages there can be dozens of forms for each word and many of these forms might never be observed during training - even in a very large corpus.

    Consider the following fictitious Russian sentence, where again the stems are meaningless, but the suffixes define an unambiguous interpretation of the sentence structure:
    Even though our Russian ParseySaurus model has never seen these words, it can correctly analyze the sentence by inspecting the character sequences which constitute each word. In doing so, the system can determine many properties of the words (notice how many more morphological features there are here than in the English example). To see the sentence as ParseySaurus does, here is a visualization of how the model analyzes this sentence:
    Each square represents one node in the neural network graph, and lines show the connections between them. The left-side “tail” of the graph shows the model consuming the input as one long string of characters. These are intermittently passed to the right side, where the rich web of connections shows the model composing words into phrases and producing a syntactic parse. Check out the full-size rendering here.

    A Competition
You might be wondering whether character-based modeling is all we need or whether there are other techniques that might be important. SyntaxNet has lots more to offer, like beam search and different training objectives, but there are of course also many other possibilities. To find out what works well in practice, we are helping to co-organize, together with Charles University and other colleagues, a multilingual parsing competition at this year’s Conference on Computational Natural Language Learning (CoNLL), with the goal of building syntactic parsing systems that work well in real-world settings and for 45 different languages.

    The competition is made possible by the Universal Dependencies (UD) initiative, whose goal is to develop cross-linguistically consistent treebanks. Because machine learned models can only be as good as the data that they have access to, we have been contributing data to UD since 2013. For the competition, we partnered with UD and DFKI to build a new multilingual evaluation set consisting of 1000 sentences that have been translated into 20+ different languages and annotated by linguists with parse trees. This evaluation set is the first of its kind (in the past, each language had its own independent evaluation set) and will enable more consistent cross-lingual comparisons. Because the sentences have the same meaning and have been annotated according to the same guidelines, we will be able to get closer to answering the question of which languages might be harder to parse.

We hope that the upgraded SyntaxNet framework and our pre-trained ParseySaurus models will inspire researchers to participate in the competition. We have additionally created a tutorial showing how to load a Docker image and train models on the Google Cloud Platform, to facilitate participation by smaller teams with limited resources. So, if you have an idea for making your own models with the SyntaxNet framework, sign up to compete! We believe that the configurations we are releasing are a good place to start, but we look forward to seeing how participants will be able to extend and improve these models or perhaps create better ones!

    Thanks to everyone involved who made this competition happen, including our collaborators at UD-Pipe, who provide another baseline implementation to make it easy to enter the competition. Happy parsing from the main developers, Chris Alberti, Daniel Andor, Ivan Bogatyy, Mark Omernick, Zora Tung and Ji Ma!

    By David Weiss and Slav Petrov, Research Scientists

    Announcing Google Radio PHY Test, aka “Graphyte”, as part of the Chromium Project

    With many different Chromebook models for sale from several different OEMs, the Chrome OS Factory team interfaces with many different Contract Manufacturers (CMs), Original Device Manufacturers (ODMs), and factory teams. The Google Chromium OS Factory Software Platform, a suite of factory tools provided to Chrome OS partners, allows any factory team to quickly bring up a Chrome OS manufacturing test line.

    Today, we are announcing that this platform has been extended to remove the friction of bringing up wireless verification test systems with a component called Google Radio PHY Test or “Graphyte.” Graphyte is an open source software framework that can be used and extended by the wireless ecosystem of chipset companies, test solution providers, and wireless device manufacturers, as opposed to the traditional approach of vendor-specific solutions. It is developed in Python and capable of running on Linux and Chrome OS test stations with an initial focus on Wi-Fi, Bluetooth, and 802.15.4 physical layer verification.

    Verifying that a wireless device is working properly requires chipset- and instrument-specific software which coordinate transmitting and measuring power and signal quality across channels, bandwidths, and data rates. Graphyte provides high-level API abstractions for controlling wireless chipsets and test instruments, allowing anyone to develop a “plugin” for a given chipset or instrument.

    Graphyte architecture.

    We’ve worked closely with industry leaders like Intel and LitePoint to ensure the Graphyte APIs have the right level of abstraction, and it is already being used in production on multiple manufacturing lines and several different products.

    To get started, use Git to clone our repository. You can learn more by reading the Graphyte User Manual and checking out the example of how to use Graphyte in a real test. You can get involved by joining our mailing list. If you’d like to contribute, please follow the Chromium OS Developer Guide.

    To get started with the LitePoint Graphyte plugin, please contact LitePoint directly. To get started with the Intel Graphyte plugin, please contact Intel directly.

    Happy testing!

    By Kurt Williams, Technical Program Manager

    Announcing Guetzli: A New Open Source JPEG Encoder

    Crossposted on the Google Research Blog

    At Google, we care about giving users the best possible online experience, both through our own services and products and by contributing new tools and industry standards for use by the online community. That’s why we’re excited to announce Guetzli, a new open source algorithm that creates high quality JPEG images with file sizes 35% smaller than currently available methods, enabling webmasters to create webpages that can load faster and use even less data.

    Guetzli [guɛtsli] — cookie in Swiss German — is a JPEG encoder for digital images and web graphics that can enable faster online experiences by producing smaller JPEG files while still maintaining compatibility with existing browsers, image processing applications and the JPEG standard. From the practical viewpoint this is very similar to our Zopfli algorithm, which produces smaller PNG and gzip files without needing to introduce a new format; and different than the techniques used in RNN-based image compression, RAISR, and WebP, which all need client and ecosystem changes for compression gains at internet scale.

The visual quality of a JPEG image is directly correlated to its multi-stage compression process: color space transform, discrete cosine transform, and quantization. Guetzli specifically targets the quantization stage, in which the more visual quality loss is introduced, the smaller the resulting file. Guetzli strikes a balance between minimal loss and file size by employing a search algorithm that tries to overcome the difference between the psychovisual modeling of JPEG's format and Guetzli’s psychovisual model, which approximates color perception and visual masking in a more thorough and detailed way than what is achievable by simpler color transforms and the discrete cosine transform. However, while Guetzli creates smaller image file sizes, the tradeoff is that these search algorithms take significantly longer to create compressed images than currently available methods.
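To see why the quantization stage is where the size/quality trade-off lives, here is a toy sketch of generic JPEG-style quantization of a single 8x8 block. This is not Guetzli's code (Guetzli's contribution is the psychovisual search over exactly these kinds of choices); it just illustrates that coarser quantization zeroes out more coefficients, which compresses better but discards more detail.

```python
# Toy demonstration of the quantization trade-off: larger quantization steps
# leave fewer nonzero DCT coefficients (smaller files, more quality loss).
import numpy as np
from scipy.fftpack import dct

rng = np.random.RandomState(0)
block = rng.randint(0, 256, (8, 8)).astype(float) - 128               # one 8x8 image block
coeffs = dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')  # 2-D DCT

for step in (2, 8, 32):                                               # quantization step sizes
    quantized = np.round(coeffs / step)
    print('step %2d -> %2d nonzero coefficients' % (step, np.count_nonzero(quantized)))
```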

Figure 1. 16x16 pixel synthetic example of a phone line hanging against a blue sky — traditionally a case where JPEG compression algorithms suffer from artifacts. The uncompressed original is on the left. Guetzli (right) shows fewer ringing artifacts than libjpeg (middle) and has a smaller file size.
And while Guetzli produces smaller image file sizes without sacrificing quality, we additionally found that in experiments where compressed image file sizes are kept constant, human raters consistently preferred the images Guetzli produced over libjpeg images, even when the libjpeg files were the same size or even slightly larger. We think this makes the slower compression a worthy tradeoff.

Figure 2. 20x24 pixel zoomed areas from a picture of a cat’s eye. The uncompressed original is on the left. Guetzli (right) shows fewer ringing artifacts than libjpeg (middle) without requiring a larger file size.
    It is our hope that webmasters and graphic designers will find Guetzli useful and apply it to their photographic content, making users’ experience smoother on image-heavy websites in addition to reducing load times and bandwidth costs for mobile users. Last, we hope that the new explicitly psychovisual approach in Guetzli will inspire further image and video compression research.

    By Robert Obryk and Jyrki Alakuijala, Software Engineers, Google Research Europe

    Another option for file sharing

    Originally posted on the Google Security Blog

    Existing mechanisms for file sharing are so fragmented that people waste time on multi-step copying and repackaging. With the new open source project Upspin, we aim to improve the situation by providing a global name space to name all your files. Given an Upspin name, a file can be shared securely, copied efficiently without "download" and "upload", and accessed by anyone with permission from anywhere with a network connection.

    Our target audience is personal users, families, or groups of friends. Although Upspin might have application in enterprise environments, we think that focusing on the consumer case enables easy-to-understand and easy-to-use sharing.

File names begin with the user's email address followed by a slash-separated Unix-like path name, for example:

ann@example.com/dir/file
    Any user with appropriate permission can access the contents of this file by using Upspin services to evaluate the full path name, typically via a FUSE filesystem so that unmodified applications just work. Upspin names usually identify regular static files and directories, but may point to dynamic content generated by devices such as sensors or services.

If the user wishes to share a directory (the unit at which sharing privileges are granted), she adds a file called Access to that directory. In that file she describes the rights she wishes to grant and the users she wishes to grant them to. For instance,

read: joe@example.com, mae@example.com

allows Joe and Mae to read any of the files in the directory holding the Access file, and also in its subdirectories. As well as limiting who can fetch bytes from the server, this access is enforced end-to-end cryptographically, so cleartext only resides on Upspin clients, and use of cloud storage does not extend the trust boundary.

Upspin looks a bit like a global file system, but its real contribution is a set of interfaces, protocols, and components from which an information management system can be built, with properties such as security and access control suited to a modern, networked world. Upspin is not an "app" or a web service, but rather a suite of software components, intended to run in the network and on devices connected to it, that together provide a secure, modern information storage and sharing network. Upspin is a layer of infrastructure that other software and services can build on to facilitate secure access and sharing.

This is an open source contribution, not a Google product. We have not yet integrated with the Key Transparency server, though we expect to eventually, and for now use a similar technique of securely publishing all key updates. File storage is inherently an archival medium without forward secrecy; loss of the user's encryption keys implies loss of content, though we do provide for key rotation.

    It’s early days, but we’re encouraged by the progress and look forward to feedback and contributions. To learn more, see the GitHub repository at Upspin.

    By Andrew Gerrand, Eric Grosse, Rob Pike, Eduardo Pinheiro and Dave Presotto, Google Software Engineers