Tag Archives: releases

An open source font system for everyone

Originally posted on the Google Developers Blog

A big challenge in sharing digital information around the world is “tofu”—the blank boxes that appear when a computer or website isn’t able to display text: ⯐. Tofu can create confusion, a breakdown in communication, and a poor user experience.

Five years ago we set out to address this problem via the Noto—aka “No more tofu”—font project. Today, Google’s open source Noto font family provides a beautiful and consistent digital type for every symbol in the Unicode standard, covering more than 800 languages and 110,000 characters.

A few samples of the 110,000+ characters covered by Noto fonts.
The Noto project started as a necessity for Google's Android and ChromeOS operating systems. When we began, we did not realize the enormity of the challenge. It required design and technical testing in hundreds of languages, and expertise from specialists in specific scripts. In Arabic, for example, each character has four glyphs (i.e., shapes a character can take) that change depending on the surrounding letters. In Indic languages, glyphs may be reordered or even split into two depending on the surrounding text.
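To make the Arabic example concrete: Unicode even encodes the four positional shapes of some letters as explicit “presentation form” code points. A quick illustration in Python (ours, not from the post); modern fonts like Noto select the correct contextual glyph automatically during shaping rather than relying on these legacy code points:

import unicodedata

# The Arabic letter BEH: one character, four contextual shapes.
for cp in (0xFE8F, 0xFE91, 0xFE92, 0xFE90):  # isolated, initial, medial, final
    print(f"U+{cp:04X} {chr(cp)} {unicodedata.name(chr(cp))}")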

The key to achieving this milestone has been partnering with experts in the field of type and font design, including Monotype, Adobe, and an amazing network of volunteer reviewers. Beyond “no more tofu” in the common languages used every day, Noto will be used to preserve the history and culture of rare languages through digitization. As new characters are introduced into the Unicode standard, Google will add these into the Noto font family.

Google has a deep commitment to openness and the accessibility and innovation that come with it. The full Noto font family, design source files, and the font building pipeline are available for free at the links below. In the spirit of sharing and communication across borders and cultures, please use and enjoy! 
By Xiangye Xiao and Bob Jung, Internationalization

Introducing Cartographer

We are happy to announce the open source release of Cartographer, a real-time simultaneous localization and mapping (SLAM) library in 2D and 3D with ROS support.

SLAM algorithms combine data from various sensors (e.g., LIDAR, IMU and cameras) to simultaneously compute the position of the sensor and a map of the sensor's surroundings. For example, consider this approach to drawing a floor plan of your living room:
  • Grab a laser rangefinder, stand in the middle of the room, and draw an X on a piece of paper.
  • Measure the distance from where you’re standing to any wall.
  • Draw a line on the paper where the wall is and write down the distance between the X (your position) and the wall.
  • Measure the distance from where you’re standing to another wall and add it to the drawing as well.
  • Now, move to another part of the room.
  • Since the walls (hopefully) haven’t moved, you can measure your distance to the same two walls to determine your new position (see the toy sketch just after these steps).
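A toy sketch of that last step (ours, not Cartographer's actual algorithm), assuming two fixed perpendicular walls as the entire map:

def locate(dist_west_wall, dist_south_wall):
    # Treat the west wall as the line x = 0 and the south wall as y = 0;
    # one range measurement to each wall then fixes the position exactly.
    return (dist_west_wall, dist_south_wall)

x0, y0 = locate(2.0, 3.0)   # first pose: 2 m from west wall, 3 m from south
x1, y1 = locate(3.5, 1.2)   # re-measure the same walls after moving
print("estimated motion:", (x1 - x0, y1 - y0))

Real SLAM also has to cope with noisy measurements, walls that aren't known in advance and drift over long trajectories, which is exactly the hard part Cartographer addresses.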

SLAM is an essential component of autonomous platforms such as self-driving cars, automated forklifts in warehouses, robotic vacuum cleaners, and UAVs.

Cartographer builds globally consistent maps in real-time across a broad range of sensor configurations common in academia and industry. The following video is a demonstration of Cartographer’s real-time loop closure:

A detailed description of Cartographer’s 2D algorithms can be found in our ICRA 2016 paper.

Thanks to ROS integration and support from external contributors, Cartographer is ready to use out of the box on several ROS-enabled robot platforms.

At Google, Cartographer has enabled a range of applications, from mapping museums and transit hubs to creating new visualizations of famous buildings.

We recognize the value of high quality datasets to the research community. That’s why, thanks to cooperation with the Deutsches Museum (the largest tech museum in the world), we are also releasing three years of LIDAR and IMU data collected using our 2D and 3D mapping backpack platforms during the development and testing of Cartographer.

Our focus is on advancing and democratizing SLAM as a technology. Currently, Cartographer is heavily focused on LIDAR SLAM. Through continued development and community contributions, we hope to add support for more sensors and platforms, as well as new features such as lifelong mapping and localizing in a pre-existing map.

By Damon Kohler, Wolfgang Hess, and Holger Rapp, Google Engineering

Introducing OpenType Font Variations

Cześć and hello from the ATypI conference in Warsaw! Together with Microsoft, Apple and Adobe, we’re happy to announce the launch of variable fonts as part of OpenType 1.8, the newest version of the font standard. With variable fonts, your device can display text in a myriad of weights, widths and other stylistic variations from a single font file, using less space and bandwidth.
OpenType variable fonts support OpenType Layout variation.
To keep the $ sign from becoming a black blob,
its stroke disappears at a certain weight.
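As a concrete, hedged illustration of the single-file idea, here is a sketch using the fontTools instancer module (an API that arrived in fontTools well after this post; the file names are placeholders):

from fontTools import ttLib
from fontTools.varLib import instancer

# Pin the weight axis of a variable font to 700, producing a static
# bold instance from the same single file.
varfont = ttLib.TTFont("MyFont-VF.ttf")  # placeholder variable font path
instancer.instantiateVariableFont(varfont, {"wght": 700}, inplace=True)
varfont.save("MyFont-Bold.ttf")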

At Google, we started tinkering with variable fonts about two years ago. We were fascinated by the typographic opportunities, and we got really excited when we realized that variable fonts would also help to save space and bandwidth. We proposed reviving Apple’s TrueType GX variations in OpenType, and started experimenting with it in our tools. The folks at Microsoft then started a four-way collaboration between Microsoft, Apple, Adobe, and Google, together with experts from type foundries and tool makers. Microsoft did the spec work; Apple brought their existing technology and expertise; Adobe updated their CFF format into CFF2; and we brought the tools and testing we’d been developing.  After months of intense polishing, the specification is now finished.

On the Google end, we did a lot of work to build, edit and display variable fonts. As always, all our font tools are free and open source for everyone to use and contribute to.

Now that the spec is public, we can finish the work by merging our changes upstream so that the code soon flows into products. We’ll also update Noto to support variations (for many writing systems, the sources are already there; the rest will follow). Much more work lies ahead, for example implementing variations in Google Fonts. Together with other browser makers, we’re already working on a proposal to extend CSS fonts with variations; once everyone agrees on the format, we’ll support it in Google Chrome. And there are many other challenges ahead, like incorporating font variations into other Google products, so it will be a busy time for us! We are incredibly excited that an amazing technology from 23 years ago is coming back to life today. Huge thanks to our friends at Adobe, Apple, and Microsoft for a great collaboration!

To learn more, read Introducing OpenType Variable Fonts, or talk to us at the FontTools group.

By Behdad Esfahbod and Sascha Brawer, Fonts and Text Rendering, Google Internationalization

Opening up Science Journal

Science Journal is an app that turns your Android phone into a mobile science tool, allowing you to use the sensors in your phone to explore the world around you. The Making & Science team launched Science Journal a few months ago at Bay Area Maker Faire 2016 and has been excited to see the different projects people have done with it all over the world!

Today we are happy to announce that we are releasing Science Journal 1.1 on the Google Play Store and also publishing the core source code for the app. Open source software and hardware have been hugely beneficial to the science education ecosystem. By open sourcing, we’ll be able to improve the app faster and also provide the community with an example of a modern Android app built with Material Design principles.

One important feature in Science Journal is the ability to connect to external devices over Bluetooth LE. We have already open sourced firmware that runs on several Arduino microcontrollers. In the near future, we will provide alternate ways to get your sensor data into Science Journal: stay tuned (or follow along with our commits)!

We believe that anyone can be a scientist anywhere. Science doesn’t just happen in the classroom or lab. Tools like Science Journal let you see how the world works with just your phone and now you can explore how Science Journal itself works, too. Give it a try and let us know what you think!

By Justin Koh, Software Engineer

A Google Santa Tracker update from Santa’s Elves

Originally posted on the Google Developers Blog

By Sam Thorogood, Developer Programs Engineer

Today, we're announcing that the open source version of Google's Santa Tracker has been updated with the Android and web experiences that ran in December 2015. We extended, enhanced and upgraded our code, and you can see how we used our developer products - including Firebase and Polymer - to build a fun, educational and engaging experience.

To get started, you can check out the code on GitHub at google/santa-tracker-web and google/santa-tracker-android. Both repositories include instructions so you can build your own version.
Santa Tracker isn’t just about watching Santa’s progress as he delivers presents on December 24. Visitors can also have fun with the winter-inspired experiences, games and educational content by exploring Santa's Village while Santa prepares for his big journey throughout the holidays.
Below is a summary of what we’ve released as open source.

Android app

  • The Santa Tracker Android app is a single APK, supporting all devices, such as phones, tablets and TVs, running Ice Cream Sandwich (4.0) and up. The source code for the app can be found here.
  • Santa Tracker leverages Firebase features, including the Remote Config API, App Invites to invite your friends to play along, and Firebase Analytics to help our elves better understand users of the app.
  • Santa’s Village is a launcher for videos, games and the tracker that responds well to multiple devices such as phones and tablets. There's even an alternative launcher based on the Leanback user interface for Android TVs.

  • Games on Santa Tracker Android are built using many technologies, such as JBox2D (gumball game), the Android view hierarchy (memory match game) and OpenGL with a special rendering engine (jetpack game). We've also included a holiday-themed variation of Pie Noon, a fun game that works on Android TV, your phone, and inside Google Cardboard's VR.

Android Wear

  • The custom watch faces on Android Wear provide a personalized touch. Having Santa or one of his friendly elves tell the time brings a smile to all. Building custom watch faces is a lot of fun, but providing a performant, battery-friendly watch face requires certain considerations. The watch face source code can be found here.
  • Santa Tracker uses notifications to let users know when Santa has started his journey. The notifications are further enhanced to provide a great experience on wearables using custom backgrounds and actions that deep link into the app.

On the web

  • Santa Tracker is mobile-first: this year's experience was built for the mobile web, including a brand-new, interactive and fully responsive village with three breakpoints, touch gesture support and support for the Web App Manifest.
  • To help us develop Santa at scale, we've upgraded to Polymer 1.0+. Santa Tracker's use of Polymer demonstrates how easy it is to package code into reusable components. Every house in Santa's Village is a custom element, only loaded when needed, minimizing the startup cost of Santa Tracker.

  • Many of the amazing new games (like Present Bounce) were built with the latest JavaScript standards (ES6) and are compiled to support older browsers via the Google Closure Compiler.
  • Santa Tracker's interactive and fun experience is enhanced using the Web Animations API, a standardized JavaScript APIfor unifying animated content.
  • We simplified the Chromecast support this year, focusing on a great screensaver that would count down to the big event on December 24th and occasionally autoplay some of the great video content from around Santa's Village.
We hope that this update inspires you to make your own magical experiences based on all the interesting and exciting components that came together to make Santa Tracker!

Which languages convey the most information in the least space? Introducing the Unimorph dataset.

Several years ago a science journalist asked me which languages could pack the most information into a 140-character Tweet. Because Twitter defines a character roughly as a single Unicode code point, this turns out to be an easy question to answer. Chinese almost certainly rates as the most “compact” language from that point of view, because a single Chinese character represents a whole morpheme (in linguistic terminology, a minimal unit of meaning) whereas an English letter only represents part of a morpheme. The Chinese equivalent of I don’t eat meat, which in English takes 16 characters including spaces, is 我不吃肉, which takes just four.

But this question relates to a broader question that as a linguist I have often been asked: which languages are the most “efficient” at conveying information? Or, which languages can convey the same information in the smallest amount of space? Untethered by the idiosyncrasies of Twitter, this question becomes quite difficult to answer. What do you mean by “space”? Number of characters? Number of bytes? Number of syllables? Each of these has its own problems. And perhaps more crucially, what do you mean by “information”? The Shannon notion of information does not straightforwardly apply here.
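Even the two simplest notions of “space” from the Twitter example come apart. A quick illustration (ours) in Python:

# Chinese wins decisively on code points, but by a smaller margin in
# UTF-8 bytes, since each of these Chinese characters costs three bytes.
for s in ["I don't eat meat", "我不吃肉"]:
    print(f"{s!r}: {len(s)} code points, {len(s.encode('utf-8'))} UTF-8 bytes")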

A group of us at Google set out to answer this question, or at least to provide the form that an answer would have to take. We had the resources and experience needed to annotate data in multiple languages, and we were able to divert some of those resources to this task. The results were published in a paper presented at the 2014 International Conference on Language Resources and Evaluation in Reykjavík, Iceland.

We are now releasing the data on GitHub. The data consist of 85 sentences typical of the kinds of sentences generated by Google Now, translated into eight typologically diverse languages: English, French, Italian, German, Russian, Arabic, Korean and Chinese. These include both highly inflected and uninflected languages, and a range of morphological types, including inflectional and agglutinative. The data were annotated by one to three annotators, depending on the language, with morphological information, counts of the marked features and other information. The main data file is in HTML, color-coded by language, which makes it easy to browse but also easy to extract into other formats.

Since the basic information conveyed by each sentence can be assumed to be the same across languages, the main focus of the research was on the additional information that each language marks, and cannot avoid marking. For example, the English sentence:

Use my location for the search results and other services.

has the French translation:

Utilisez ma position pour les résultats de recherche et d'autres services.

The verb ending -ez marks “addressee respect”, a bit of information that is missing from the English original. One could have used a different ending on the French verb, but that would not avoid conveying this kind of information: it would amount to choosing to mark lack of respect, or familiarity with the addressee.

In the paper we tried various ways of measuring the differing information content of the languages relative to various definitions of “space”. Considering all the factors together, we concluded that the languages that conveyed the most information in a given amount of space were highly inflected languages like Russian, with uninflected languages like Chinese actually being the “least efficient” at conveying information.

We don’t expect this to be the final answer, which is why we are releasing the data as open source in the hopes that others will find it useful and maybe can even extend it to more sentences or a wider variety of languages. Ultimately though, any answer to the question of which languages convey the most information in the smallest amount of space must seriously address what is meant by “information”, and must pay heed to the famous maxim by the Russian linguist Roman Jakobson (1959) that “languages differ essentially in what they must convey and not in what they may convey.”

By Richard Sproat, Research Scientist

Making Rubyists more comfortable on Google Cloud Platform

One of the many open source efforts at Google is the Google Cloud Platform (GCP) native libraries for our most popular languages. One of these libraries is the gcloud-ruby project on GitHub, which is released as the gcloud gem on rubygems.org. There are several gems for accessing Google Cloud Platform resources from Ruby, but this gem is different: it is hand-coded by Rubyists for Rubyists, and that has some distinct advantages.

Many of us have had experience working with libraries that are clearly ported from another language. I usually talk about them as Ruby with a Java accent, or Python with a Perl accent. Generally they work just fine, but you can run into some low-level friction: sometimes things just don’t feel right. Native gems written by members of the community solve this problem. In the case of gcloud-ruby there are some really concrete examples.

First, gcloud-ruby uses syntax that is similar to other popular Ruby libraries. For example, the syntax for specifying a table schema in BigQuery (Google Cloud Platform's very large scale data warehouse) looks like this:

table = dataset.create_table "baby_names" do |schema|
  schema.string "name"
  schema.string "sex"
  schema.integer "number"
end

Creating the same table in popular Ruby on Rails looks like this:

create_table "baby_names" do |schema|
  schema.string "name"
  schema.string "sex"
  schema.integer "number"
end

The two are nearly identical. That makes getting up to speed on BigQuery easier and quicker than it would be if the Ruby library didn't use patterns that are already known to the majority of Rubyists. 

Another way the gcloud-ruby library meets the community where it is at is by embracing the community's fondness for doing things several different ways. In Ruby there are often several correct ways to do a given task.

The gcloud-ruby library is no exception. There are a few different ways to authenticate and create the objects you use to interact with the API. Ruby also has many common methods with aliases; in the standard library, for example, Enumerable#map and Enumerable#collect run the same code path. In gcloud-ruby, the vision API uses aliases: Google Cloud Vision provides a single endpoint, annotate, and gcloud-ruby has an annotate method but also aliases it as mark and detect if those make more sense to you (detect is the method that makes the most sense to my brain, so that's the one I use). With a couple of different aliases available, the first thing you try is more likely to work, which speeds up development time and makes the library easier to learn.

The last way the gcloud-ruby gem makes Rubyists feel at home is by having comprehensive tests, a common value and popular discussion topic in the Ruby community. gcloud-ruby uses minitest-spec for testing, a popular choice that most Rubyists can easily read. When I was learning the storage API, I looked at the tests for storage to learn how to use the library. There is outstanding documentation as well for those who prefer learning that way, but I'm so used to looking at tests that I really appreciated that gcloud-ruby has well-written and easily accessible tests.

Above are three examples of how hand-coded libraries from within the community can improve the user experience when learning to use tools. Of course, doing all the development on GitHub in the open also helps. Users can easily see what bugs people have run into and what features are next up in the production queue. And if a user has a feature request (like the previously mentioned Cloud Vision support) they can create a GitHub issue.

If you’re a Rubyist, give gcloud-ruby a shot and let us know what you think!

By Aja Hammerly, Developer Advocate

Omnitone: Spatial audio on the web

Spatial audio is a key element of an immersive virtual reality (VR) experience. By bringing spatial audio to the web, the browser can be transformed into a complete VR media player with incredible reach and engagement. That’s why the Chrome WebAudio team has created and is releasing the Omnitone project, an open source spatial audio renderer with cross-browser support.

Our challenge was to introduce the audio spatialization technique called ambisonics so the user can hear full-sphere surround sound in the browser. To achieve this, we implemented ambisonic decoding with binaural rendering using web technology. There are several paths for introducing a new feature into the web platform, but we chose to use only the Web Audio API. In doing so, we can reach a larger audience with this cross-browser technology, and we can also avoid the lengthy standardization process for introducing a new Web Audio component. This is possible because the Web Audio API provides all the necessary building blocks for this audio spatialization technique.

Omnitone Audio Processing Diagram

The AmbiX format recording, which is the target of the Omnitone decoder, contains 4 channels of audio encoded using ambisonics, which can then be decoded into an arbitrary speaker setup. Instead of an actual speaker array, Omnitone uses 8 virtual speakers based on head-related transfer function (HRTF) convolution to render the final audio stream binaurally. This binaurally rendered audio can convey a sense of space when heard through headphones.

The beauty of this mechanism lies in the sound-field rotation applied to the incoming spatial audio stream. The orientation sensor of a VR headset or a smartphone can be linked to Omnitone’s decoder to seamlessly rotate the entire sound field. The rest of the spatialization process will be handled automatically by Omnitone. A live demo can be found at the project landing page.
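For intuition, here is a toy numpy sketch of the pipeline (ours, not Omnitone's actual Web Audio implementation; normalization details are deliberately simplified): decode the four ambiX channels (ACN order W, Y, Z, X) to virtual speaker feeds, rotate the sound field, and mix HRTF-filtered feeds into two ears.

import numpy as np

def decode_foa(ambix, speaker_dirs):
    # ambix: (4, n) samples in ACN order (W, Y, Z, X);
    # speaker_dirs: iterable of unit 3-vectors for the virtual speakers.
    # A basic projection decoder; production decoders also apply
    # normalization and psychoacoustic (e.g. max-rE) weighting.
    w, y, z, x = ambix
    return np.array([0.5 * (w + x * ux + y * uy + z * uz)
                     for ux, uy, uz in speaker_dirs])

def rotate_foa(ambix, rot):
    # Sound-field rotation: W is omnidirectional and unchanged, while the
    # first-order components (X, Y, Z) rotate together like a 3-vector.
    w, y, z, x = ambix
    x2, y2, z2 = rot @ np.vstack([x, y, z])
    return np.vstack([w, y2, z2, x2])

def binauralize(feeds, hrirs_left, hrirs_right):
    # Convolve each virtual speaker feed with its left/right head-related
    # impulse response (HRIR) and sum per ear.
    left = sum(np.convolve(f, h) for f, h in zip(feeds, hrirs_left))
    right = sum(np.convolve(f, h) for f, h in zip(feeds, hrirs_right))
    return left, right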

Throughout the project, we worked closely with the Google VR team for their VR audio expertise. Not only was their knowledge of spatial audio a tremendous help for the project, but the collaboration also ensured identical audio spatialization across all of Google’s VR applications, both on the web and on Android (e.g. Google VR SDK, YouTube Android app). The Spatial Media Specification and HRTF sets are great examples of the Google VR team’s efforts, and Omnitone is built on top of this specification and these HRTF sets.

With emerging web-based VR projects like WebVR, Omnitone’s audio spatialization can play a critical role in a more immersive VR experience on the web. Web-based VR applications will also benefit from high-quality streaming spatial audio, as the Chrome Media team has recently added first-order-ambisonics (FOA) compression to the open source audio codec Opus. More exciting things like VR view integration, higher-order ambisonics and mobile web support will also be coming soon to Omnitone.

We look forward to seeing what people do with Omnitone now that it's open source. Feel free to reach out to us or leave a comment with your thoughts and feedback on the issue tracker on GitHub.

By Hongchan Choi and Raymond Toy, Chrome Team

Due to the incomplete implementation of multichannel audio decoding on various browsers, Omnitone does not support mobile web at the time of writing.

Kubernetes 1.3 is here!

With all of the excitement being generated around the Kubernetes 1.3 release and the first anniversary of Kubernetes 1.0 (#k8sbday), now is a great time to point out some of the features that enterprise users should be taking note of.

If you’re not familiar with Kubernetes, let me get you up to speed.

Kubernetes is an open source container automation framework that builds upon 15 years of experience running production workloads at Google. Once you declare a desired state, Kubernetes works to drive your system toward that state. As a developer this means less time handling trivial tasks that a computer can automate and more time focusing on developing applications that provide value to users.
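To make “declare a desired state” concrete, here is a hedged sketch using the official Kubernetes Python client (the deployment name, labels and image are placeholders, not anything from the release): you submit a declaration of three replicas, and the control plane keeps converging the cluster toward it.

from kubernetes import client, config

config.load_kube_config()  # assumes a working kubeconfig

# Desired state: three replicas of a single nginx container. Kubernetes
# reconciles continuously, e.g. restarting pods if one dies.
deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="hello"),
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels={"app": "hello"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "hello"}),
            spec=client.V1PodSpec(
                containers=[client.V1Container(name="hello", image="nginx")],
            ),
        ),
    ),
)
client.AppsV1Api().create_namespaced_deployment("default", deployment)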

Additionally, Kubernetes aims to be a framework that you can operate at planetary scale, run anywhere, and never outgrow.

With the release of Kubernetes 1.3, Kubernetes is closer than ever to meeting those goals; the 1.3 release adds exciting features such as Minikube for local development, PetSets (alpha) for stateful applications, support for the rkt container runtime, and cross-cluster federated services.

Aside from features, the coolest part about working with Kubernetes is hearing user stories. I’ll soon be publishing an interview with Joseph Jacks, co-founder of Kismatic, the enterprise Kubernetes company, on the Kubernetes blog.

Joseph is very active in the Kubernetes community and has extensive experience with Kubernetes in production. In the interview I ask him why he bet his business on Kubernetes, what could be better, and how he sees Kubernetes growing in the near future.

Kubernetes has many, many features to offer that I didn’t get to cover in this short write-up. If you know anyone that needs to ramp up on Kubernetes, the easiest way is the free course I created with Kelsey Hightower, Scalable Microservices with Kubernetes. The course covers the basic features of Kubernetes. If you want an overview of what’s new in Kubernetes 1.3, feel free to look at the “What’s new in Kubernetes 1.3” video or slides.

Finally, for a more in-depth look at the 1.3 release, make sure to check out the 5 Days of Kubernetes 1.3 blog series.

Want to learn more about container orchestration and cloud native platforms? The Kubernetes blog and the resources above are great places to follow up.

By Carter Morgan, Developer Programs Engineer

GitHub on BigQuery: Analyze all the code

Google, in collaboration with GitHub, is releasing an incredible new open dataset on Google BigQuery. So far you've been able to monitor and analyze GitHub's pulse since 2011 (thanks, GitHub Archive project!) and today we're adding the perfect complement to this. What could you do if you had access to analyze all the open source software in the world, with just one SQL command?

The Google BigQuery Public Datasets program now offers a full snapshot of the content of more than 2.8 million open source GitHub repositories in BigQuery. Thanks to our new collaboration with GitHub, you'll have access to analyze the source code of almost 2 billion files with a simple (or complex) SQL query. This will open the doors to all kinds of new insights and advances that we're just beginning to envision.
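As a taste of what a single query can do, here is a hedged example using the BigQuery Python client against the dataset's smaller sample tables (which keeps the bytes scanned modest):

from google.cloud import bigquery

client = bigquery.Client()  # assumes application default credentials

# Count the most common file extensions across a sample of GitHub.
sql = """
    SELECT REGEXP_EXTRACT(path, r'\\.([^\\./]+)$') AS ext, COUNT(*) AS files
    FROM `bigquery-public-data.github_repos.sample_files`
    GROUP BY ext
    ORDER BY files DESC
    LIMIT 10
"""
for row in client.query(sql).result():
    print(row.ext, row.files)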

For example, let's say you're the author of a popular open source library. Now you'll be able to find every open source project on GitHub that's using it. Even more, you'll be able to guide the future of your project by analyzing how it's being used, and improve your APIs based on what your users are actually doing with it.
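A hedged sketch of that kind of lookup, again on the sample tables (the require('left-pad') pattern is purely a placeholder):

from google.cloud import bigquery

client = bigquery.Client()

# Find repos whose sampled files appear to require a given library.
sql = """
    SELECT f.repo_name, COUNT(*) AS hits
    FROM `bigquery-public-data.github_repos.sample_contents` AS c
    JOIN `bigquery-public-data.github_repos.sample_files` AS f
      ON c.id = f.id
    WHERE REGEXP_CONTAINS(c.content, r"require\\('left-pad'\\)")
    GROUP BY f.repo_name
    ORDER BY hits DESC
"""
for row in client.query(sql).result():
    print(row.repo_name, row.hits)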

On the security side, we've seen how the most popular open source projects benefit from having multiple eyes and hands working on them. This visibility helps projects get hardened and buggy code cleaned up. What if you could search for errors with similar patterns in every other open source project? Would you notify their authors and send them pull requests? Well, now you can. There are a few concepts to keep in mind while working with BigQuery and the GitHub contents dataset.

To learn more, read GitHub's announcement and try some sample queries. Share your queries and findings in our reddit.com/r/bigquery and Hacker News posts. The ideas are endless, and I'll start collecting tips and links to other articles on this post on Medium.

Stay curious!