Tag Archives: open source release

Introducing Cartographer

We are happy to announce the open source release of Cartographer, a real-time simultaneous localization and mapping (SLAM) library in 2D and 3D with ROS support.

SLAM algorithms combine data from various sensors (e.g. LIDAR, IMU and cameras) to simultaneously compute the position of the sensor and a map of the sensor’s surroundings. For example, consider this approach to drawing a floor plan of your living room:
  • Grab a laser rangefinder, stand in the middle of the room, and draw an X on a piece of paper.
  • Measure the distance from where you’re standing to any wall.
  • Draw a line on the paper where the wall is and write down the distance between the X (your position) and the wall.
  • Measure the distance from where you’re standing to another wall and add it to the drawing as well.
  • Now, move to another part of the room.
  • Since the walls (hopefully) haven’t moved, you can measure your distance to the same two walls to determine your new position (see the sketch below).
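To make that last step concrete, here is a toy sketch in Python (not Cartographer code) of recovering a new position from distances to two known, perpendicular walls. The wall placement and measurements are made up for illustration:

# Toy version of the floor-plan example above -- not Cartographer code.
# Assume two perpendicular walls: one along x = 0 (west), one along
# y = 0 (south). A rangefinder aimed squarely at each wall returns the
# perpendicular distance to it, which is exactly the (x, y) coordinate.

def locate(dist_to_west_wall, dist_to_south_wall):
    """Recover (x, y) from distances to two fixed, perpendicular walls."""
    return (dist_to_west_wall, dist_to_south_wall)

start = locate(2.0, 1.5)     # draw the X: 2 m from west, 1.5 m from south
new_pos = locate(3.5, 2.25)  # re-measure the same walls after moving

# Because the walls haven't moved, the displacement falls out directly.
dx, dy = new_pos[0] - start[0], new_pos[1] - start[1]
print(f"moved {dx:+.2f} m east and {dy:+.2f} m north")

Real SLAM systems like Cartographer solve a much harder version of this problem, with noisy sensors, unknown correspondences and a map that is being built at the same time.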


SLAM is an essential component of autonomous platforms such as self-driving cars, automated forklifts in warehouses, robotic vacuum cleaners, and UAVs.

Cartographer builds globally consistent maps in real-time across a broad range of sensor configurations common in academia and industry. The following video is a demonstration of Cartographer’s real-time loop closure:


A detailed description of Cartographer’s 2D algorithms can be found in our ICRA 2016 paper.

Thanks to ROS integration and support from external contributors, Cartographer is ready to use on several robot platforms with ROS support.
At Google, Cartographer has enabled a range of applications from mapping museums and transit hubs to enabling new visualizations of famous buildings.

We recognize the value of high quality datasets to the research community. That’s why, thanks to cooperation with the Deutsches Museum (the largest tech museum in the world), we are also releasing three years of LIDAR and IMU data collected using our 2D and 3D mapping backpack platforms during the development and testing of Cartographer.


Our focus is on advancing and democratizing SLAM as a technology. Currently, Cartographer is heavily focused on LIDAR SLAM. Through continued development and community contributions, we hope to add support for more sensors and platforms, as well as new features such as lifelong mapping and localizing in a pre-existing map.

By Damon Kohler, Wolfgang Hess, and Holger Rapp, Google Engineering

Introducing the Open Images Dataset

Originally posted on the Google Research Blog

In the last few years, advances in machine learning have enabled Computer Vision to progress rapidly, producing everything from systems that can automatically caption images to apps that can create natural language replies in response to shared photos. Much of this progress can be attributed to publicly available image datasets, such as ImageNet and COCO for supervised learning, and YFCC100M for unsupervised learning.

Today, we introduce Open Images, a dataset consisting of ~9 million URLs to images that have been annotated with labels spanning over 6000 categories. We tried to make the dataset as practical as possible: the labels cover more real-life entities than the 1000 ImageNet classes, there are enough images to train a deep neural network from scratch, and the images are listed as having a Creative Commons Attribution license*.

The image-level annotations have been populated automatically with a vision model similar to Google Cloud Vision API. For the validation set, we had human raters verify these automated labels to find and remove false positives. On average, each image has about 8 labels assigned. Here are some examples:
Annotated images from the Open Images dataset. Left: Ghost Arches by Kevin Krejci. Right: Some Silverware by J B. Both images used under CC BY 2.0 license
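As a rough sketch of how image-level annotations like these might be consumed, the Python snippet below aggregates labels per image from a CSV file. The file name and column names (image_id, label, confidence) are illustrative assumptions, not the dataset's documented schema:

# Sketch: collect the label set for each image from a CSV of
# image-level annotations. The file and column names here are
# assumptions for illustration, not the dataset's documented schema.
import csv
from collections import defaultdict

labels_by_image = defaultdict(list)

with open("machine_labels.csv", newline="") as f:
    for row in csv.DictReader(f):
        if float(row["confidence"]) >= 0.5:  # keep confident labels only
            labels_by_image[row["image_id"]].append(row["label"])

# On average, each image should come out with roughly 8 labels.
counts = [len(labels) for labels in labels_by_image.values()]
if counts:
    print(sum(counts) / len(counts))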
We have trained an Inception v3 model based on Open Images annotations alone, and the model is good enough to be used for fine-tuning applications as well as for other things, like DeepDream or artistic style transfer, which require a well-developed hierarchy of filters. We hope to improve the quality of the annotations in Open Images over the coming months, and thereby the quality of models which can be trained on them.

The dataset is a product of a collaboration between Google, CMU and Cornell universities, and there are a number of research papers built on top of the Open Images dataset in the works. It is our hope that datasets like Open Images and the recently released YouTube-8M will be useful tools for the machine learning community.

By Ivan Krasin and Tom Duerig, Software Engineers

* While we tried to identify images that are licensed under a Creative Commons Attribution license, we make no representations or warranties regarding the license status of each image and you should verify the license for each image yourself.

.NET and PowerShell tooling for the Google Cloud Platform

Last month Google made an announcement unveiling support for Visual Studio, C#, PowerShell, Microsoft SQL Server and more on the Google Cloud Platform. With so many new features, it is easy to gloss over some of the technical aspects of the announcement, especially the fact that all of the developer tooling and libraries are open source and available on GitHub.

This post will go into some of the details behind the new C# libraries, PowerShell cmdlets, and Visual Studio extension. All three products are open source, have an exciting roadmap for the future and are hungry for your feedback.

C# bindings for Google APIs

Source: https://github.com/googlecloudplatform/google-cloud-dotnet
Docs: https://cloud.google.com/dotnet/

For years, Google has had innovative technologies powering its data centers; unfortunately, Google’s internal APIs and technology couldn’t directly benefit you and your software. That changed when the Google Cloud Platform started exposing public APIs for things like machine learning, storage and logging. With these APIs publicly available, you can add powerful capabilities to your apps without needing to manage complex infrastructure.

There have been C# bindings for Google APIs for years. In fact, Google receives hundreds of millions of API calls from C# clients every day. But newer APIs, especially those from the Google Cloud Platform, require more advanced features like bidirectional streaming. That’s why, rather than using HTTP/REST, many newer Google APIs are built on top of gRPC, a high performance, open source, universal RPC framework.

But don’t worry, we have C# bindings for those gRPC-based APIs too; all of them open source and on GitHub.

In both cases, the client library is the result of a C# code generator. We take the API’s discovery document (analogous to a WSDL) and generate C# code. gRPC APIs require more careful design than other APIs, but the end product is the same. Once built, the API libraries are published to NuGet.
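To get a feel for the generator's input, here is a small sketch (in Python, purely for illustration; the real generator emits C#) that downloads an API's public discovery document and lists the methods a generated client would expose:

# Fetch a public discovery document and walk the methods it declares.
# Illustrative sketch only; the production code generator emits C#.
import json
from urllib.request import urlopen

URL = "https://www.googleapis.com/discovery/v1/apis/storage/v1/rest"
doc = json.load(urlopen(URL))

def walk(resources, prefix=""):
    for name, resource in resources.items():
        for method, desc in resource.get("methods", {}).items():
            print(f"{prefix}{name}.{method}: "
                  f"{desc['httpMethod']} {desc['path']}")
        walk(resource.get("resources", {}), f"{prefix}{name}.")  # nested

walk(doc.get("resources", {}))

Each method entry carries enough type and path information to emit a strongly typed client method.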

C# code generators for Google APIs aren’t the entire story.

Source code generated from tools can look foreign at times. So for libraries where the codegen isn’t good enough, we have hand-written wrappers to provide a better, more idiomatic experience. In some cases -- such as CRUD operations using the Datastore API -- the hand-written library cuts the required lines of code in half.

Finally, support for C# doesn’t just mean code. We are also working to ensure Google APIs are supported on different runtimes too. Most Google APIs work on the cross-platform .NET Core runtime and we are continuing to expand support.

PowerShell support

Source: https://github.com/googlecloudplatform/google-cloud-powershell
Docs: http://googlecloudplatform.github.io/google-cloud-powershell/

C# support is great when you are writing full applications, but for DevOps, scripting is more typical. The Cloud SDK provides command-line tools (gcloud, gsutil) for managing cloud resources, but when running on Windows, Windows PowerShell is a dramatically more productive environment. Google Cloud tools for PowerShell is a set of cmdlets that let you manage your Google Cloud resources. They are strongly typed, and integrate seamlessly with other PowerShell tools. For example, to learn more about a cmdlet, just use Get-Help.

In designing the PowerShell cmdlets, the main goal was to be idiomatic. We wanted to follow the best practices and guidelines so that PowerShell novices and pros alike could use our cmdlets. Of course, if we got anything wrong, please log an issue on the GitHub repository. Pull requests are also welcome.

Visual Studio

Source: https://github.com/googlecloudplatform/google-cloud-visualstudio
Docs: https://cloud.google.com/visual-studio/

The C# and PowerShell features should help developers using Google services. But the biggest impact on developer productivity comes from being inside the Visual Studio IDE.

From within Visual Studio you can search for new extensions and find the Google Cloud Platform Extension for Visual Studio. It provides tools for viewing/managing data stored in Google Cloud Storage and Google Cloud SQL. It also provides support for deploying ASP.NET 4.x applications to Google Compute Engine.

It is only the first release and we have some big plans for the future. You can see many of the short-term features we have planned by looking at the issues list on GitHub. Making Google APIs light up for the new .NET Core runtime, and being able to deploy ASP.NET Core applications to Google App Engine or Google Container Engine, will be huge. Stay tuned for a future blog post about how to run C# on the Google App Engine Flexible Environment, as well.

We’re just getting started

Hopefully you share my enthusiasm for Google’s ongoing development in .NET tooling. Not only is it exciting to be able to take advantage of Google Cloud Platform technologies, but also to see a future where .NET Core enables C# code to run cross-platform.

But to be successful we need your help.

If you have questions, be sure to ask on Stack Overflow (e.g. the google-cloud-visualstudio or google-cloud-powershell tags). If you have problems, please open issues on GitHub (libraries, VS, PowerShell). If you still have trouble, participate in the google-cloud-dev group.

The team here at Google is thrilled to be working with the .NET stack and your feedback is immensely helpful in prioritizing things.

By Chris Smith, Software Engineer

A sizzling open source release for the Australian Election site

Originally posted on the Geo Developers Blog

One of the best parts of my job at Google is 20 percent time. While I was hired to help developers use Google’s APIs, I value the time I'm afforded to be a student myself—to learn new technologies and solve real-world problems. A few weeks prior to the recent Australian election an opportunity presented itself. A small team in Sydney set their sights on helping the 15 million voters stay informed of how to participate, track real-time results, and (of course) find the closest election sausage sizzle!


Our team of designers, engineers and product managers didn't have an immediate sense of how to attack the problem. What we did have was the power of Google’s APIs, programming languages, and Cloud hosting with Firebase and Google Cloud Platform.



The result is a mish-mash of some technologies we'd been wanting to learn more about. We're open sourcing the ausvotes.withgoogle.com repository to give developers a sense of what happens when you get a handful of engineers in a room with a clear goal and an immovable deadline.

The Election AU 2016 repository uses:

  • Go, running on Google App Engine instances, to serve the appropriate level of detail for users' viewport queries from memory at very low latency, and
  • Dart to render the live result maps on top of Google Maps JavaScript API using Firebase real time database updates.

A product is only as good as the attention and usage it receives. Our team was really happy with the results of our work:

  • 406,000 people used our maps, including 217,000 on election day.
  • We had 139 stories in the media.
  • Our map was also embedded in major news websites, such as Sky News.

Complete setup and installation instructions are available in the GitHub README.

By Brett Morgan, Developer Programs Engineer

Angular, version 2: proprioception-reinforcement

Originally posted on the Angular Blog

Today, at a special meetup at Google HQ, we announced the final release version of Angular 2, the full-platform successor to Angular 1.

What does "final" mean? Stability that's been validated across a wide range of use cases, and a framework that's been optimized for developer productivity, small payload size, and performance. With ahead-of-time compilation and built-in lazy-loading, we’ve made sure that you can deploy the fastest, smallest applications across the browser, desktop, and mobile environments. This release also represents huge improvements to developer productivity with the Angular CLI and styleguide.

Angular 1 first solved the problem of how to develop for an emerging web. Six years later, the challenges faced by today’s application developers, and the sophistication of the devices that applications must support, have both changed immensely. With this release, and its more capable versions of the Router, Forms, and other core APIs, today you can build amazing apps for any platform. If you prefer your own approach, Angular is also modular and flexible, so you can use your favorite third-party library or write your own.

From the beginning, we built Angular in collaboration with the open source development community. We are grateful to the large number of contributors who dedicated time to submitting pull requests, issues, and repro cases, who discussed and debated design decisions, and validated (and pushed back on) our RCs. We wish we could have brought every one of you in person to our meetup so you could celebrate this milestone with us tonight!


What’s next?

Angular is now ready for the world, and we’re excited for you to join the thousands of developers already building with Angular 2.  But what’s coming next for Angular?

A few of the things you can expect in the near future from the Angular team:

  • Bug fixes and non-breaking features for APIs marked as stable
  • More guides and live examples specific to your use cases
  • More work on animations
  • Angular Material 2
  • Moving WebWorkers out of experimental
  • More features and more languages for Angular Universal
  • Even more speed and payload size improvements

Semantic Versioning

We heard loud and clear that our RC labeling was confusing. To make it easy to manage dependencies on stable Angular releases, starting today with Angular 2.0.0, we will move to semantic versioning. Angular versioning will then follow the MAJOR.MINOR.PATCH scheme as described by semver (a short sketch follows the list):

  1. the MAJOR version gets incremented when incompatible API changes are made to stable APIs,
  2. the MINOR version gets incremented when backwards-compatible functionality is added,
  3. the PATCH version gets incremented when backwards-compatible bugs are fixed.
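Under this scheme, dependency decisions become mechanical. Here is a small sketch of the compatibility rule (in Python, not Angular tooling):

# Sketch of the semver compatibility rule; not Angular tooling.

def parse(version):
    """Split 'MAJOR.MINOR.PATCH' into a comparable tuple of ints."""
    major, minor, patch = (int(part) for part in version.split("."))
    return (major, minor, patch)

def compatible(installed, required):
    """True if `installed` can stand in for `required`: same MAJOR
    (no breaking changes to stable APIs) and at least as new."""
    return (parse(installed)[0] == parse(required)[0]
            and parse(installed) >= parse(required))

assert compatible("2.1.3", "2.0.0")      # backwards-compatible additions
assert not compatible("3.0.0", "2.0.0")  # a MAJOR bump may break callers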

Moving Angular to semantic versioning ensures rapid access to the newest features for our component and tooling ecosystem, while preserving a consistent and reliable development environment for production applications that depend on stability between major releases, but still benefit from bug fixes and new APIs.

Contributors

Aaron Frost, Aaron (Ron) Tsui, Adam Bradley, Adil Mourahi, agpreynolds, Ajay Ambre, Alberto Santini, Alec Wiseman, Alejandro Caravaca Puchades, Alex Castillo, Alex Eagle, Alex Rickabaugh, Alex Wolfe, Alexander Bachmann, Alfonso Presa, Ali Johnson, Aliaksei Palkanau, Almero Steyn, Alyssa Nicoll, Alxandr, André Gil, Andreas Argelius, Andreas Wissel, Andrei Alecu, Andrei Tserakhau, Andrew, Andrii Nechytailov, Ansel Rosenberg, Anthony Zotti, Anton Moiseev, Artur Meyster, asukaleido, Aysegul Yonet, Aziz Abbas, Basarat Ali Syed, BeastCode, Ben Nadel, Bertrand Laporte, Blake La Pierre, Bo Guo, Bob Nystrom, Borys Semerenko, Bradley Heinz, Brandon Roberts, Brendan Wyse, Brian Clark, Brian Ford, Brian Hsu, dozingcat, Brian Yarger, Bryce Johnson, CJ Avilla, cjc343, Caitlin Potter, Cédric Exbrayat, Chirayu Krishnappa, Christian Weyer, Christoph Burgdorf, Christoph Guttandin, Christoph Hoeller, Christoffer Noring, Chuck Jazdzewski, Cindy, Ciro Nunes, Codebacca, Cody Lundquist, Cody-Nicholson, Cole R Lawrence, Constantin Gavrilete, Cory Bateman, Craig Doremus, crisbeto, Cuel, Cyril Balit, Cyrille Tuzi, Damien Cassan, Dan Grove, Dan Wahlin, Daniel Leib, Daniel Rasmuson, dapperAuteur, Daria Jung, David East, David Fuka, David Reher, David-Emmanuel Divernois, Davy Engone, Deborah Kurata, Derek Van Dyke, DevVersion, Dima Kuzmich, Dimitrios Loukadakis, Dmitriy Shekhovtsov, Dmitry Patsura, Dmitry Zamula, Dmytro Kulyk, Donald Spencer, Douglas Duteil, dozingcat, Drew Moore, Dylan Johnson, Edd Hannay, Edouard Coissy, eggers, elimach, Elliott Davis, Eric Jimenez, Eric Lee Carraway, Eric Martinez, Eric Mendes Dantas, Eric Tsang, Essam Al Joubori, Evan Martin, Fabian Raetz, Fahimnur Alam, Fatima Remtullah, Federico Caselli, Felipe Batista, Felix Itzenplitz, Felix Yan, Filip Bruun, Filipe Silva, Flavio Corpa, Florian Knop, Foxandxss, Gabe Johnson, Gabe Scholz, GabrielBico, Gautam krishna.R, Georgii Dolzhykov, Georgios Kalpakas, Gerd Jungbluth, Gerard Sans, Gion Kunz, Gonzalo Ruiz de Villa, Grégory Bataille, Günter Zöchbauer, Hank Duan, Hannah Howard, Hans Larsen, Harry Terkelsen, Harry Wolff, Henrique Limas, Henry Wong, Hiroto Fukui, Hongbo Miao, Huston Hedinger, Ian Riley, Idir Ouhab Meskine, Igor Minar, Ioannis Pinakoulakis, The Ionic Team, Isaac Park, Istvan Novak, Itay Radotzki, Ivan Gabriele, Ivey Padgett, Ivo Gabe de Wolff, J. 
Andrew Brassington, Jack Franklin, Jacob Eggers, Jacob MacDonald, Jacob Richman, Jake Garelick, James Blacklock, James Ward, Jason Choi, Jason Kurian, Jason Teplitz, Javier Ros, Jay Kan, Jay Phelps, Jay Traband, Jeff Cross, Jeff Whelpley, Jennifer Bland, jennyraj, Jeremy Attali, Jeremy Elbourn, Jeremy Wilken, Jerome Velociter, Jesper Rønn-Jensen, Jesse Palmer, Jesús Rodríguez, Jesús Rodríguez, Jimmy Gong, Joe Eames, Joel Brewer, John Arstingstall, John Jelinek IV, John Lindquist, John Papa, John-David Dalton, Jonathan Miles, Joost de Vries, Jorge Cruz, Josef Meier, Josh Brown, Josh Gerdes, Josh Kurz, Josh Olson, Josh Thomas, Joseph Perrott, Joshua Otis, Josu Guiterrez, Julian Motz, Julie Ralph, Jules Kremer, Justin DuJardin, Kai Ruhnau, Kapunahele Wong, Kara Erickson, Kathy Walrath, Keerti Parthasarathy, Kenneth Hahn, Kevin Huang, Kevin Kirsche, Kevin Merckx, Kevin Moore, Kevin Western, Konstantin Shcheglov, Kurt Hong, Levente Morva, laiso, Lina Lu, LongYinan, Lucas Mirelmann, Luka Pejovic, Lukas Ruebbelke, Marc Fisher, Marc Laval, Marcel Good, Marcy Sutton, Marcus Krahl, Marek Buko, Mark Ethan Trostler, Martin Gontovnikas, Martin Probst, Martin Staffa, Matan Lurey, Mathias Raacke, Matias Niemelä, Matt Follett, Matt Greenland, Matt Wheatley, Matteo Suppo, Matthew Hill, Matthew Schranz, Matthew Windwer, Max Sills, Maxim Salnikov, Melinda Sarnicki Bernardo, Michael Giambalvo, Michael Goderbauer, Michael Mrowetz, Michael-Rainabba Richardson, Michał Gołębiowski, Mikael Morlund, Mike Ryan, Minko Gechev, Miško Hevery, Mohamed Hegazy, Nan Schweiger, Naomi Black, Nathan Walker, The NativeScript Team, Nicholas Hydock, Nick Mann, Nick Raphael, Nick Van Dyck, Ning Xia, Olivier Chafik, Olivier Combe, Oto Dočkal, Pablo Villoslada Puigcerber, Pascal Precht, Patrice Chalin, Patrick Stapleton, Paul Gschwendtner, Pawel Kozlowski, Pengfei Yang, Pete Bacon Darwin, Pete Boere, Pete Mertz, Philip Harrison, Phillip Alexander, Phong Huynh, Polvista, Pouja, Pouria Alimirzaei, Prakal, Prayag Verma, Rado Kirov, Raul Jimenez, Razvan Moraru, Rene Weber, Rex Ye, Richard Harrington, Richard Kho, Richard Sentino, Rob Eisenberg, Rob Richardson, Rob Wormald, Robert Ferentz, Robert Messerle, Roberto Simonetti, Rodolfo Yabut, Sam Herrmann, Sam Julien, Sam Lin, Sam Rawlins, Sammy Jelin, Sander Elias, Scott Hatcher, Scott Hyndman, Scott Little, ScottSWu, Sebastian Hillig, Sebastian Müller, Sebastián Duque, Sekib Omazic, Shahar Talmi, Shai Reznik, Sharon DiOrio, Shannon Ayres, Shefali Sinha, Shlomi Assaf, Shuhei Kagawa, Sigmund Cherem, Simon Hürlimann (CyT), Simon Ramsay, Stacy Gay, Stephen Adams, Stephen Fluin, Steve Mao, Steve Schmitt, Suguru Inatomi, Tamas Csaba, Ted Sander, Tero Parviainen, Thierry Chatel, Thierry Templier, Thomas Burleson, Thomas Henley, Tim Blasi, Tim Ruffles, Timur Meyster, Tobias Bosch, Tony Childs, Tom Ingebretsen, Tom Schoener, Tommy Odom, Torgeir Helgevold, Travis Kaufman, Trotyl Yu, Tycho Grouwstra, The Typescript Team, Uli Köhler, Uri Shaked, Utsav Shah, Valter Júnior, Vamsi V, Vamsi Varikuti, Vanga Sasidhar, Veikko Karsikko, Victor Berchet, Victor Mejia, Victor Savkin, Vinci Rufus, Vijay Menon, Vikram Subramanian, Vivek Ghaisas, Vladislav Zarakovsky, Vojta Jina, Ward Bell, Wassim Chegham, Wenqian Guo, Wesley Cho, Will Ngo, William Johnson, William Welling, Wilson Mendes Neto, Wojciech Kwiatek, Yang Lin, Yegor Jbanov, Zach Bjornson, Zhicheng Wang, and many more...

With gratitude and appreciation, and anticipation to see what you'll build next, welcome to the next stage of Angular.

By Jules Kremer, Angular Team

Introducing OpenType Font Variations

Cześć and hello from the ATypI conference in Warsaw! Together with Microsoft, Apple and Adobe, we’re happy to announce the launch of variable fonts as part of OpenType 1.8, the newest version of the font standard. With variable fonts, your device can display text in myriad weights, widths, or other stylistic variations from a single font file, using less space and bandwidth.
OpenType variable fonts support OpenType Layout variation. To prevent the $ sign from becoming a black blob, the stroke disappears at a certain weight.


At Google, we started tinkering with variable fonts about two years ago. We were fascinated by the typographic opportunities, and we got really excited when we realized that variable fonts would also help to save space and bandwidth. We proposed reviving Apple’s TrueType GX variations in OpenType, and started experimenting with it in our tools. The folks at Microsoft then started a four-way collaboration between Microsoft, Apple, Adobe, and Google, together with experts from type foundries and tool makers. Microsoft did the spec work; Apple brought their existing technology and expertise; Adobe updated their CFF format into CFF2; and we brought the tools and testing we’d been developing.  After months of intense polishing, the specification is now finished.

On the Google end, we did a lot of work to build, edit and display variable fonts. As always, all our font tools are free and open source for everyone to use and contribute to.
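For a taste of what a single variable font file contains, here is a short Python sketch using the open source fontTools library. The font file name is a placeholder, and the instancer module shown is a later addition to fontTools, so treat the exact API as illustrative:

# Inspect a variable font's axes and pin a static instance.
# "MyFont-VF.ttf" is a placeholder for any variable font with a
# weight (wght) axis; the instancer API is a later fontTools addition.
from fontTools.ttLib import TTFont
from fontTools.varLib.instancer import instantiateVariableFont

font = TTFont("MyFont-VF.ttf")

# The fvar table declares the variation axes, e.g. wght (weight).
for axis in font["fvar"].axes:
    print(axis.axisTag, axis.minValue, axis.defaultValue, axis.maxValue)

# Pin the weight axis at 700 to carve a static Bold out of the
# single variable file.
bold = instantiateVariableFont(font, {"wght": 700})
bold.save("MyFont-Bold.ttf")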

Now that the spec is public, we can finish the work by merging the changes upstream so that our code will soon flow into products. We’ll also update Noto to support variations (for many writing systems, the sources are already there — the rest will follow). Much more work lies ahead, for example, implementing variations in Google Fonts. Together with other browser makers, we’re already working on a proposal to extend CSS fonts with variations. Once everyone agrees on the format, we’ll support it in Google Chrome. And there are many other challenges ahead, like incorporating font variations into other Google products—so it will be a busy time for us!  We are incredibly excited that an amazing technology from 23 years ago is coming back to life again today. Huge thanks to our friends at Adobe, Apple, and Microsoft for a great collaboration!

To learn more, read Introducing OpenType Variable Fonts, or talk to us at the FontTools group.

By Behdad Esfahbod and Sascha Brawer, Fonts and Text Rendering, Google Internationalization

Opening up Science Journal

Science Journal is an app that turns your Android phone into a mobile science tool, allowing you to use the sensors in your phone to explore the world around you. The Making & Science team launched Science Journal a few months ago at Bay Area Maker Faire 2016 and has been excited to see the different projects people have done with it all over the world!

Today we are happy to announce that we are releasing Science Journal 1.1 on the Google Play Store and also publishing the core source for the app. Open source software and hardware have been hugely beneficial to the science education ecosystem. By open sourcing, we’ll be able to improve the app faster and also provide the community with an example of a modern Android app built with Material Design principles.

One important feature in Science Journal is the ability to connect to external devices over Bluetooth LE. We already have open source firmware that runs on several Arduino microcontrollers. In the near future, we will provide alternate ways to get your sensor data into Science Journal: stay tuned (or follow along with our commits)!

We believe that anyone can be a scientist anywhere. Science doesn’t just happen in the classroom or lab. Tools like Science Journal let you see how the world works with just your phone and now you can explore how Science Journal itself works, too. Give it a try and let us know what you think!

By Justin Koh, Software Engineer

A Google Santa Tracker update from Santa’s Elves


Originally posted on the Google Developers Blog

By Sam Thorogood, Developer Programs Engineer


Today, we're announcing that the open source version of Google's Santa Tracker has been updated with the Android and web experiences that ran in December 2015. We extended, enhanced and upgraded our code, and you can see how we used our developer products - including Firebase and Polymer - to build a fun, educational and engaging experience.


To get started, you can check out the code on GitHub at google/santa-tracker-web and google/santa-tracker-android. Both repositories include instructions so you can build your own version.
Santa Tracker isn’t just about watching Santa’s progress as he delivers presents on December 24. Visitors can also have fun with the winter-inspired experiences, games and educational content by exploring Santa's Village while Santa prepares for his big journey throughout the holidays.
Below is a summary of what we’ve released as open source.

Android app

  • The Santa Tracker Android app is a single APK, supporting all devices, such as phones, tablets and TVs, running Ice Cream Sandwich (4.0) and up. The source code for the app can be found here.
  • Santa Tracker leverages Firebase features, including the Remote Config API, App Invites to invite your friends to play along, and Firebase Analytics to help our elves better understand users of the app.
  • Santa’s Village is a launcher for videos, games and the tracker that responds well to multiple devices such as phones and tablets. There's even an alternative launcher based on the Leanback user interface for Android TVs.


  • Games on Santa Tracker Android are built using many technologies, such as JBox2D (gumball game), the Android view hierarchy (memory match game) and OpenGL with a special rendering engine (jetpack game). We've also included a holiday-themed variation of Pie Noon, a fun game that works on Android TV, your phone, and inside Google Cardboard's VR.

Android Wear



  • The custom watch faces on Android Wear provide a personalized touch. Having Santa or one of his friendly elves tell the time brings a smile to all. Building custom watch faces is a lot of fun but providing a performant, battery friendly watch face requires certain considerations. The watch face source code can be found here.
  • Santa Tracker uses notifications to let users know when Santa has started his journey. The notifications are further enhanced to provide a great experience on wearables using custom backgrounds and actions that deep link into the app.

On the web



  • Santa Tracker is mobile-first: this year's experience was built for the mobile web, including an amazing, brand new, interactive yet fully responsive village, with three breakpoints, touch gesture support and support for the Web App Manifest.
  • To help us develop Santa at scale, we've upgraded to Polymer 1.0+. Santa Tracker's use of Polymer demonstrates how easy it is to package code into reusable components. Every house in Santa's Village is a custom element, only loaded when needed, minimizing the startup cost of Santa Tracker.


  • Many of the amazing new games (like Present Bounce) were built with the latest JavaScript standards (ES6) and are compiled to support older browsers via the Google Closure Compiler.
  • Santa Tracker's interactive and fun experience is enhanced using the Web Animations API, a standardized JavaScript API for unifying animated content.
  • We simplified the Chromecast support this year, focusing on a great screensaver that counts down to the big event on December 24th - and occasionally autoplays some of the great video content from around Santa's Village.
We hope that this update inspires you to make your own magical experiences based on all the interesting and exciting components that came together to make Santa Tracker!

Which languages convey the most information in the least space? Introducing the Unimorph dataset.

Several years ago a science journalist asked me which languages could pack the most information into a 140-character Tweet. Because Twitter defines a character roughly as a single Unicode code point, this turns out to be an easy question to answer. Chinese almost certainly rates as the most “compact” language from that point of view, because a single Chinese character represents a whole morpheme (in linguist terminology, a minimal unit of meaning) whereas an English letter only represents a part of a morpheme. The Chinese equivalent of I don’t eat meat, which in English takes 16 characters including spaces, is 我不吃肉, which takes just four.

But this question relates to a broader question that as a linguist I have often been asked: which languages are the most “efficient” at conveying information? Or, which languages can convey the same information in the smallest amount of space? Untethered by the idiosyncrasies of Twitter, this question becomes quite difficult to answer. What do you mean by “space”? Number of characters? Number of bytes? Number of syllables? Each of these has its own problems. And perhaps more crucially, what do you mean by “information”? The Shannon notion of information does not straightforwardly apply here.
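To see why the choice of measure matters, take the example above. A quick Python check of two of the candidate definitions of “space”:

# The ranking depends on how "space" is measured.
english = "I don't eat meat"
chinese = "我不吃肉"

for text in (english, chinese):
    print(f"{text!r}: {len(text)} characters, "
          f"{len(text.encode('utf-8'))} bytes in UTF-8")

# "I don't eat meat": 16 characters, 16 bytes in UTF-8
# '我不吃肉': 4 characters, 12 bytes in UTF-8

Chinese wins by a factor of four on characters (Twitter's measure) but only marginally on bytes, and counting syllables would give yet another ranking.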

A group of us at Google set out to answer this question, or at least to provide the form that an answer would have to take. We had the resources and experience needed to annotate data in multiple languages, and we were able to divert some of those resources to this task. The results were published in a paper presented at the 2014 International Conference on Language Resources and Evaluation in Reykjavík, Iceland.

We are now releasing the data on GitHub. The data consist of 85 sentences typical of the kinds of sentences generated by Google Now, translated into eight typologically diverse languages: English, French, Italian, German, Russian, Arabic, Korean and Chinese. These include both highly inflected and uninflected languages, and various types of morphology, including inflectional and agglutinative. The data were annotated by one to three annotators, depending on the language, with morphological information, counts of the marked features and other information. The main data file is in HTML, color coded by language, which makes it easy to browse but also easy to extract into other formats.

Since the basic information conveyed by each sentence can be assumed to be the same across languages, the main focus of the research was on the additional information that each language marks, and cannot avoid marking. For example, the English sentence:

Use my location for the search results and other services.

has the French translation:

Utilisez ma position pour les résultats de recherche et d'autres services.

The verb ending -ez in Utilisez above marks “addressee respect”, a bit of information that is missing from the English original. One could have used a different ending on the French verb, but that would not avoid this bit of information—it would be choosing to mark lack of respect, or familiarity, with the addressee.

In the paper we tried various ways of measuring the differing information content of the languages relative to various definitions of “space”. Considering all the factors together, we concluded that the languages that conveyed the most information in a given amount of space were highly inflected languages like Russian, with uninflected languages like Chinese actually being the “least efficient” at conveying information.

We don’t expect this to be the final answer, which is why we are releasing the data as open source in the hopes that others will find it useful and maybe can even extend it to more sentences or a wider variety of languages. Ultimately though, any answer to the question of which languages convey the most information in the smallest amount of space must seriously address what is meant by “information”, and must pay heed to the famous maxim by the Russian linguist Roman Jakobson (1959) that “languages differ essentially in what they must convey and not in what they may convey.”

By Richard Sproat, Research Scientist

Making Rubyists more comfortable on Google Cloud Platform

One of the many open source efforts at Google is the Google Cloud Platform (GCP) native libraries for our most popular languages. One of these libraries is the gcloud-ruby project on GitHub, which is released as the gcloud gem on rubygems.org. There are several gems for accessing Google Cloud Platform resources from Ruby, but this gem is different: it is hand coded by Rubyists, for Rubyists, and that has some distinct advantages.

Many of us have had experience working with libraries that are clearly ported from another language. I usually talk about them as Ruby with a Java accent, or Python with a Perl accent. Generally they work just fine, but you can run into some low-level friction; sometimes things just don’t feel right. Native gems written by members of the community solve this problem. In the case of gcloud-ruby there are some really concrete examples.

First, gcloud-ruby uses syntax that is similar to other popular Ruby libraries. For example, the syntax for specifying a table schema in BigQuery (Google Cloud Platform's very large scale data warehouse) looks like this:

table = dataset.create_table "baby_names" do |schema|
  schema.string "name"
  schema.string "sex"
  schema.integer "number"
end

Creating the same table in popular Ruby on Rails looks like this:

create_table "baby_names" do |schema|
  schema.string "name"
  schema.string "sex"
  schema.integer "number"
end

The two are nearly identical. That makes getting up to speed on BigQuery easier and quicker than it would be if the Ruby library didn't use patterns that are already known to the majority of Rubyists. 

The gcloud-ruby library also meets the community where it is, embracing the community's fondness for doing things several different ways. In Ruby there are often several correct ways to do a given task.

The gcloud-ruby library is no exception. There are a few different ways to authenticate and create the objects you use to interact with the API. Ruby also has many common methods with aliases: in the standard library, for example, Enumerable#map and Enumerable#collect actually run the same code path. In gcloud-ruby, the vision API uses aliases too. Google Cloud Vision provides a single endpoint, annotate, and gcloud-ruby has an annotate method, but it also aliases this method as mark and detect if those make more sense to you (detect is the one that makes the most sense to my brain, so that's the one I use). By providing a couple of different aliases, the first thing you try is more likely to work. This speeds up development time and makes learning the library easier.

The last way the gcloud-ruby gem makes Rubyists feel at home is by having comprehensive tests, a common value and popular discussion topic in the Ruby community. gcloud-ruby uses minitest-spec for testing, a popular choice that most Rubyists can easily read. When I was learning the storage API, I looked at the tests for storage to learn how to use the library. There is outstanding documentation as well for those who prefer learning that way, but I'm so used to looking at tests that I really appreciated that gcloud-ruby has well-written and easily accessible tests.

Above are three examples of how hand-coded libraries from within the community can improve the user experience when learning to use tools. Of course, doing all the development on GitHub in the open also helps. Users can easily see what bugs people have run into and what features are next up in the production queue. And if a user has a feature request (like the previously mentioned Cloud Vision support) they can create a GitHub issue.

If you’re a Rubyist, give gcloud-ruby a shot and let us know what you think!

By Aja Hammerly, Developer Advocate