Category Archives: Open Source Blog

News about Google’s open source projects and programs

A Google Santa Tracker update from Santa’s Elves


Originally posted on the Google Developers Blog

By Sam Thorogood, Developer Programs Engineer


Today, we're announcing that the open source version of Google's Santa Tracker has been updated with the Android and web experiences that ran in December 2015. We extended, enhanced and upgraded our code, and you can see how we used our developer products - including Firebase and Polymer - to build a fun, educational and engaging experience.


To get started, you can check out the code on GitHub at google/santa-tracker-web and google/santa-tracker-android. Both repositories include instructions so you can build your own version.
Santa Tracker isn’t just about watching Santa’s progress as he delivers presents on December 24. Visitors can also have fun with the winter-inspired experiences, games and educational content by exploring Santa's Village while Santa prepares for his big journey throughout the holidays.
Below is a summary of what we’ve released as open source.

Android app

  • The Santa Tracker Android app is a single APK, supporting all devices, such as phones, tablets and TVs, running Ice Cream Sandwich (4.0) and up. The source code for the app can be found here.
  • Santa Tracker leverages Firebase features, including the Remote Config API, App Invites to invite your friends to play along, and Firebase Analytics to help our elves better understand users of the app.
  • Santa’s Village is a launcher for videos, games and the tracker that responds well to multiple devices such as phones and tablets. There's even an alternative launcher based on the Leanback user interface for Android TVs.


  • Games on Santa Tracker Android are built using many technologies such as JBox2D (gumball game), the Android view hierarchy (memory match game) and OpenGL with a custom rendering engine (jetpack game). We've also included a holiday-themed variation of Pie Noon, a fun game that works on Android TV, on your phone, and in VR with Google Cardboard.

Android Wear



  • The custom watch faces on Android Wear provide a personalized touch. Having Santa or one of his friendly elves tell the time brings a smile to all. Building custom watch faces is a lot of fun, but providing a performant, battery-friendly watch face requires careful consideration. The watch face source code can be found here.
  • Santa Tracker uses notifications to let users know when Santa has started his journey. The notifications are further enhanced to provide a great experience on wearables using custom backgrounds and actions that deep link into the app.

On the web



  • Santa Tracker is mobile-first: this year's experience was built for the mobile web, including a brand new, interactive and fully responsive village with three breakpoints, touch gesture support and support for the Web App Manifest.
  • To help us develop Santa at scale, we've upgraded to Polymer 1.0+. Santa Tracker's use of Polymer demonstrates how easy it is to package code into reusable components. Every house in Santa's Village is a custom element, only loaded when needed, minimizing the startup cost of Santa Tracker.


  • Many of the amazing new games (like Present Bounce) were built with the latest JavaScript standards (ES6) and are compiled to support older browsers via the Google Closure Compiler.
  • Santa Tracker's interactive and fun experience is enhanced using the Web Animations API, a standardized JavaScript API for unifying animated content (see the sketch after this list).
  • We simplified the Chromecast support this year, focusing on a great screensaver that would count down to the big event on December 24th - and occasionally autoplay some of the great video content from around Santa's Village.
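
To give a flavor of the Web Animations API mentioned above, here is a minimal, hypothetical TypeScript sketch (not taken from the Santa Tracker codebase) that animates an element with a looping bounce and then controls the returned Animation object from script:

const sleigh = document.querySelector('.sleigh') as HTMLElement;  // hypothetical element

// Keyframes and timing are declared once; the browser runs the animation natively.
const bounce = sleigh.animate(
  [
    { transform: 'translateY(0)' },
    { transform: 'translateY(-20px)' }
  ],
  { duration: 400, iterations: Infinity, direction: 'alternate', easing: 'ease-in-out' }
);

// The returned Animation object can be driven from script.
bounce.playbackRate = 2;  // play twice as fast
bounce.pause();           // or pause it entirely
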
We hope that this update inspires you to make your own magical experiences based on all the interesting and exciting components that came together to make Santa Tracker!

From Google Summer of Code to Game of Thrones on the Back of a JavaScript Dragon (Part 2)

This guest post is a part of a short series about Tatyana Goldberg, Guy Yachdav and Christian Dallago and the journey that was inspired by their participation as Google Summer of Code mentors for the BioJS project. Don’t miss the first post in the series. Heads up, this post contains spoilers for Game of Thrones seasons 5 and 6!

We built on the Google Summer of Code (GSoC) philosophy and the lessons we learned from participating in 2014 by starting a JavaScript Technology class at the Technical University of Munich (TUM).

We began with two dozen students who worked on expanding the BioJS visualization library. Our class became popular quickly and the number of applicants doubled each semester (nearly 180 applicants for 40 seats in the 2016 summer term).

In 2016 our team grew to include Christian Dallago, who had joined as a GSoC mentor. Together we decided to break with the tradition of our course's previous semesters: instead of focusing on data visualization, we wanted to introduce students to data science with JavaScript. To get our students fully engaged, we decided the project would center on data from the hit TV show, Game of Thrones.

Our aim was to create an online portal for Game of Thrones fans which would:
  1. Provide the most comprehensive, structured and open data set about the Game of Thrones world accessible via API.
  2. Present an interactive map based on JavaScript.
  3. Listen to what people are saying on Twitter about each of the show’s characters.
  4. Use machine learning algorithms to predict the likelihood of each character’s death.
Our plan worked — the students were engaged. It was a beautiful sight to see: GitHub repos humming with activity as each dev team delved deeper into their projects. As a project manager, you know you’ve got something good when issues are being opened and closed at 4:00 AM!

The results were mind blowing. In 50 days of programming, 36 students opened over 1,200 issues and pull requests, pushed 3,300 commits, released four apps to NPM, and, of course, produced one absolutely amazing website.

The website amasses data from 2,028 characters. Our map shows 240 landmarks and the paths traveled by 28 characters. Our Twitter sentiment analysis tool analyzed over 3 million tweets. And we launched the first ever machine learning-based prediction algorithm that predicts the likelihood of dying for the 1,451 characters in the show that are still alive.

Visualization of Twitter sentiment analysis data for Jon Snow during season 5 of Game of Thrones. The X axis shows the timeline and the Y axis shows the number of positive (green) and negative (red) tweets. Each tweet is analyzed by an algorithm using a neural network to determine whether the tweet’s writer has a positive, negative or neutral attitude toward the character. 
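
As a rough illustration of the aggregation behind a chart like this, the hypothetical TypeScript sketch below (the types and function names are invented, and it assumes tweets have already been labeled by the sentiment classifier) tallies positive and negative tweets per day:

interface LabeledTweet {
  date: string;                                   // e.g. "2015-05-18"
  sentiment: 'positive' | 'negative' | 'neutral';
}

// Count positive and negative tweets per day; neutral tweets are ignored,
// matching the two series plotted in the chart above.
function dailySentiment(tweets: LabeledTweet[]): Map<string, { pos: number; neg: number }> {
  const counts = new Map<string, { pos: number; neg: number }>();
  for (const t of tweets) {
    const day = counts.get(t.date) ?? { pos: 0, neg: 0 };
    if (t.sentiment === 'positive') day.pos += 1;
    if (t.sentiment === 'negative') day.neg += 1;
    counts.set(t.date, day);
  }
  return counts;
}
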
Since launch, the site’s popularity has skyrocketed. Following our press release, we were covered by over 1,500 media outlets, most notably Time, The Guardian, Rolling Stone, Daily Mail, BBC, Reuters, The Telegraph, CNET and many more. HowStuffWorks, The Vulture and others produced videos about the site and Chris Hardwick’s Comedy Central show did a segment about us. We've also given countless interviews to TV, radio and newspapers.

Google Analytics for the website. The left chart shows the number of visitors to the website during the first week after launch, reaching over 73K visitors on April 25th. The right chart shows the number of visitors at a given point in time during the same week.
The most exciting part of the project was predicting the likelihood that any given character would die using machine learning. Machine learning algorithms find rules and patterns in data that humans cannot easily detect on their own. Once those rules and patterns are identified, they can be used to make inferences or predictions from novel, previously unseen data sets.

Warning: The next paragraphs contain spoilers for seasons 5 and 6 of Game of Thrones!

In order to predict the likelihood of a character’s death, we collected information about all of the characters that appeared in books 1 to 5 and analyzed over 30 features, including age, gender, marital status and others. Then we used a support vector machine (SVM) to statistically compare the features of characters, both dead and alive, to predict who would get the axe next. Our prediction was correct for 74% of all cases and surprised us by placing a number of characters thought to be relatively safe in grave danger.
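
To make the idea concrete, here is a toy TypeScript sketch of how a trained linear SVM scores a character's feature vector. The feature names, weights and bias are invented for illustration; the team's actual feature set and model (including any kernel it used) may differ:

// Each character is described by numeric features (e.g. 1 = married, 0 = not).
type Features = { [name: string]: number };

// Hypothetical weights learned during training; positive weights push the
// score toward "likely to die".
const weights: Features = { isNoble: 0.8, isMarried: -0.5, hasTitle: 0.3 };
const bias = -0.2;

// Linear SVM decision function: sign(w · x + b).
function predictDies(x: Features): boolean {
  let score = bias;
  for (const [name, w] of Object.entries(weights)) {
    score += w * (x[name] ?? 0);
  }
  return score > 0;
}

console.log(predictDies({ isNoble: 1, isMarried: 0, hasTitle: 1 }));  // true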

According to our predictions, Jon Snow, who was seemingly betrayed and murdered by fellow members of the Night’s Watch at the end of season 5, had only an 11% chance of dying. Indeed, Jon rose from the dead in the second episode of season 6! We also predicted that the rulers of Dorne, Doran and Trystane Martell, were at a high likelihood of death and, as predicted, they were taken out in the first episode of the new season.

Of course, as is always the case with predictions, there were also misses. We didn’t expect Roose Bolton to be killed off nor did we see Hodor’s departure coming.

This experience was an amazing ride for our team and it all started with Google Summer of Code! In the next post we’ll share what followed and where we see ourselves heading in the future.

By Guy Yachdav, Tatyana Goldberg and Christian Dallago, BioJS

Which languages convey the most information in the least space? Introducing the Unimorph dataset.

Several years ago a science journalist asked me which languages could pack the most information into a 140-character Tweet. Because Twitter defines a character roughly as a single Unicode code point, this turns out to be an easy question to answer. Chinese almost certainly rates as the most “compact” language from that point of view because a single Chinese character represents a whole morpheme (in linguist terminology, a minimal unit of meaning) whereas an English letter only represents a part of a morpheme. The English sentence I don’t eat meat takes 16 characters including spaces; its Chinese equivalent, 我不吃肉, takes just four.
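
Counting Unicode code points makes the comparison easy to reproduce. The short TypeScript sketch below uses the spread operator, which iterates over code points rather than UTF-16 code units:

const english = "I don't eat meat";
const chinese = "我不吃肉";

// Spreading a string iterates over code points, which is roughly how
// Twitter counted characters.
console.log([...english].length);  // 16
console.log([...chinese].length);  // 4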

But this question relates to a broader question that as a linguist I have often been asked: which languages are the most “efficient” at conveying information? Or, which languages can convey the same information in the smallest amount of space? Untethered by the idiosyncrasies of Twitter, this question becomes quite difficult to answer. What do you mean by “space”? Number of characters? Number of bytes? Number of syllables? Each of these has its own problems. And perhaps more crucially, what do you mean by “information”? The Shannon notion of information does not straightforwardly apply here.

A group of us at Google set out to answer this question, or at least to provide the form that an answer would have to take. We had the resources and experience needed to annotate data in multiple languages, and we were able to divert some of those resources to this task. The results were published in a paper presented at the 2014 International Conference on Language Resources and Evaluation in Reykjavík, Iceland.

We are now releasing the data on GitHub. The data consist of 85 sentences typical of the kinds of sentences generated by Google Now, translated into eight typologically diverse languages: English, French, Italian, German, Russian, Arabic, Korean and Chinese. These include both highly inflected and uninflected languages, and various types of morphology, including inflectional and agglutinative. The data were annotated by one to three annotators, depending on the language, with morphological information, counts of the marked features and other information. The main data file is in HTML, color-coded by language, which makes it easy to browse but also easy to extract into other formats.

Since the basic information conveyed by each sentence can be assumed to be the same across languages, the main focus of the research was on the additional information that each language marks, and cannot avoid marking. For example, the English sentence:

Use my location for the search results and other services.

has the French translation:

Utilisez ma position pour les résultats de recherche et d'autres services.

The verb ending -ez in the French translation marks “addressee respect”, a bit of information that is missing from the English original. One could have used a different ending on the French verb, but that would not avoid conveying this kind of information: it would instead mark lack of respect, or familiarity with the addressee.

In the paper we tried various ways of measuring the differing information content of the languages relative to various definitions of “space”. Considering all the factors together, we concluded that the languages that conveyed the most information in a given amount of space were highly inflected languages like Russian, with uninflected languages like Chinese actually being the “least efficient” at conveying information.
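
As a toy version of this kind of measurement (not the methodology used in the paper), one could divide the number of obligatorily marked features by the number of characters used, per language. The TypeScript sketch below is purely illustrative; the type and function names are invented:

interface AnnotatedSentence {
  text: string;            // the sentence in the target language
  markedFeatures: number;  // features the language must mark (e.g. addressee respect)
}

// Crude "information density": marked features per character, summed over a corpus.
// A real measure would need a far more careful definition of both quantities.
function markedFeaturesPerChar(corpus: AnnotatedSentence[]): number {
  const features = corpus.reduce((sum, s) => sum + s.markedFeatures, 0);
  const chars = corpus.reduce((sum, s) => sum + [...s.text].length, 0);
  return features / chars;
}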

We don’t expect this to be the final answer, which is why we are releasing the data as open source in the hopes that others will find it useful and maybe can even extend it to more sentences or a wider variety of languages. Ultimately though, any answer to the question of which languages convey the most information in the smallest amount of space must seriously address what is meant by “information”, and must pay heed to the famous maxim by the Russian linguist Roman Jakobson (1959) that “languages differ essentially in what they must convey and not in what they may convey.”

By Richard Sproat, Research Scientist

Making Rubyists more comfortable on Google Cloud Platform

One of the many open source efforts at Google is the Google Cloud Platform (GCP) native libraries for our most popular languages. One of these libraries is the gcloud-ruby project on GitHub, which is released as the gcloud gem on rubygems.org. There are several gems for accessing Google Cloud Platform resources from Ruby, but this gem is different: it is hand-coded by Rubyists for Rubyists, and that has some distinct advantages.

Many of us have had experience working with libraries that are clearly ported from another language. I usually talk about them as Ruby with a Java accent or Python with a Perl accent. Generally they work just fine but you can run into some low level friction — sometimes things just don’t feel right. Native gems written by members of the community solve this problem. In the case of gcloud-ruby there are some really concrete examples.

First, gcloud-ruby uses syntax that is similar to other popular Ruby libraries. For example, the syntax for specifying a table schema in BigQuery (Google Cloud Platform's very large scale data warehouse) looks like this:

table = dataset.create_table "baby_names" do |schema|
  schema.string "name"
  schema.string "sex"
  schema.integer "number"
end

Creating the same table in popular Ruby on Rails looks like this:

create_table "baby_names" do |schema|
schema.string "name"
schema.string "sex"
schema.integer "number
end

The two are nearly identical. That makes getting up to speed on BigQuery easier and quicker than it would be if the Ruby library didn't use patterns that are already known to the majority of Rubyists. 

The gcloud-ruby library also meets the community where it is by embracing Rubyists' fondness for doing things several different ways. In Ruby there are often several correct ways to do a given task.

The gcloud-ruby library is no exception. There are a few different ways to authenticate and create the objects you use to interact with the API. Ruby also has many common methods that have aliases; in the standard library, Enumerable#map and Enumerable#collect actually run the same code path, for example. In gcloud-ruby, the vision API uses aliases. Google Cloud Vision provides a single endpoint: annotate. gcloud-ruby has an annotate method but also aliases it as mark and detect if those make more sense to you (detect is the method that makes the most sense to my brain, so that's the one I use). Providing a couple of different aliases means the first thing you try is more likely to work, which speeds up development and makes learning the library easier.

The last way the gcloud-ruby gem makes Rubyists feel at home is by having comprehensive tests, a common value and popular discussion topic for the Ruby community. gcloud-ruby uses minitest-spec for testing, a popular choice that most Rubyists can easily read. When I was learning the storage API I looked at the tests for storage to learn how to use the library. There is outstanding documentation as well for those who prefer learning that way but I'm so used to looking at tests that I really appreciated that gcloud-ruby has well written and easily accessible tests.

Above are three examples of how hand-coded libraries from within the community can improve the user experience when learning to use tools. Of course, doing all the development on GitHub in the open also helps. Users can easily see what bugs people have run into and what features are next up in the production queue. And if a user has a feature request (like the previously mentioned Cloud Vision support) they can create a GitHub issue.

If you’re a Rubyist, give gcloud-ruby a shot and let us know what you think!

By Aja Hammerly, Developer Advocate

Stories from Google Code-in: KDE, MetaBrainz and Haiku

Google Code-in is our annual contest that gives students ages 13 to 17 experience in computer science through contributions to open source projects. This blog post is the second installment in our series reflecting on the experiences of Google Code-in 2015 grand prize winners. Be sure to check out the first post in the series.

This week we profile three more grand prize winners from Google Code-in 2015. These students came from all around the world to celebrate with us in June after successfully completing 692 tasks that resulted in significant contributions to the participating open source projects.

Google Code-in 2015 Grand Prize Winners and Mentors were treated to a cruise around San Francisco Bay.

Students were paired with mentors who guided them as they learned both new technologies and how to collaborate on real-world projects. While most students had some programming experience, many were new to open source. In the end, they learned new skills, connected with open source communities and many will continue to contribute to open source projects.

We’re proud of all of the participants and grateful to the mentors who helped them. We invited the contest winners to write about their experience and many took us up on the offer. Here are their stories:

First up today is Imran Tatriev, a student from Kazakhstan who decided to work on the KDE project because he loved their philosophy and had experience with C++ and Qt. He was a finalist in Google Code-in 2014 when he worked with the OpenMRS project.

Imran’s work on KDE included contributing to projects such as KDevelop, Marble and GCompris. His biggest challenge was working on the KDevelop IDE’s debugger, where he was tasked with highlighting crashed threads. Highlighting the crashed thread was trivial; finding the thread that had crashed was not. It took him five days to solve that problem and he credits his mentor with helping him work through it.

In the end, Imran learned a lot about regular expressions, the architecture of large software projects, C++ and unit testing. What did he like most about his Google Code-in experience? Imran writes: “The most valuable moments were meeting wonderful and smart people.” He plans to continue working with KDE and apply for Google Summer of Code.

Next is Caroline Gschwend, a student from the US who worked on the MetaBrainz project. Both of her parents are computer scientists and she credits them with spurring her interest.

A homeschool student with a unique approach to education, Caroline loves to learn and voraciously consumes free online resources. She had this to say: “I think that free, online learning is an amazing benefit to our society. With access to a computer and the internet, anyone, anywhere, can learn anything.”

Caroline discovered Google Code-in through her mother who had, in turn, discovered the contest through Google for Education. Caroline dug in and decided it was right up her alley. She loved that it embraced beginners with open arms and introduced new people to open source. Ultimately, she decided to work with MetaBrainz because, as a classically trained violinist, MusicBrainz piqued her interest. Their projects are primarily written in Perl and Python and, while Caroline was fluent in Java, it was too interesting to pass up.

As with most students, Caroline found collaboration to be a big part of the learning curve -- from GitHub to Git and IRC. Her mentors and other community contributors on IRC helped Caroline through the process and, looking back, she found that collaboration to be her favorite part of the whole experience. She loved that the mentors helped her to produce professional quality work rather than focusing on quantity.

Google Code-in gave Caroline a chance to learn about collaboration, Inkscape, icon design, web development and more. She has continued her work in open source and plans to apply for Google Summer of Code.

The last student we’re highlighting today is Vale Tolpegin, a student from the US who worked on the Haiku project, an open source operating system for personal computers. He also participated in Google Code-in 2014 but didn’t feel his skills were sharp enough to attack the more challenging tasks, like the ones he tackled this time around for Haiku.

Vale took on a wide range of tasks from documentation to application development, his favorite being the creation of the Haiku Hardware Repository. The repository is a Django website that lets people search and share hardware tests to determine if a given machine will work with Haiku.

He ran into a sticky issue early on, spending nearly a week finding a race condition within an application maintained by Haiku. Vale found it frustrating, but his mentors helped him see it through to the end. That wasn’t the only big challenge he ran into and, ultimately, bested: he spent another week debugging a Remote Desktop Application, software which had a very large code base.

Despite the two time consuming challenges, Vale managed to accomplish a lot more during the contest, including building a graph plotter and fixing bugs in the Haiku package manager. Vale had this to say:

“After finishing GCI, I have continued to work with Haiku and the experiences I have gained will continue to have an impact on me for years to come. Participating in GCI has truly been a life-changing experience!”

Thank you to Imran, Caroline and Vale for their contributions to open source and for sharing their Google Code-in experiences with us. Stay tuned, we’ve got two more posts coming in this series!

By Josh Simmons, Open Source Programs Office

Stories from Google Code-in: FOSSASIA and Haiku

Google Code-in is our annual contest to help pre-university students gain real-world computer science experience by taking on tasks of varying difficulty levels with the help of volunteer mentors. These tasks are created by open source projects so while learning, the students are contributing to the software many of us use on a daily basis.

The finalists and winners for our 2015/2016 season were announced in February and, in June, the grand prize winners joined us for four days of learning and celebration. Students and their guardians came from all around the world. One of my favorite things, as one of the Googler hosts, was seeing the light bulbs go on above parents’ heads as they came to understand open source and why it’s so important. These parents and guardians were even more proud of the students once they learned how much their teenagers had contributed to the world through participating in Google Code-in.

We’ve invited contest winners and organizations to write about their experience and will be sharing their stories in a series of blog posts. This marks the first post in the series.

Google Code-in 2015 Grand Prize Winners and Mentors

Let’s start with Jason Wong, a student from the US who worked with FOSSASIA. FOSSASIA supports open source developers in Asia through events and coding programs.

Jason got into computer science during middle school at a summer camp where he built a website describing the differences between Linux, OS X, and Windows.  He dove deeper into web development by learning PHP and JavaScript through YouTube videos. He enjoyed being able to build more complex and dynamic websites. Like many new developers, Jason became very confident but did not concern himself with important aspects of programming like testing.

He learned about Google Code-in when Stephanie Taylor, a fellow open source program manager who manages the GCI program here at Google, gave a talk at his school. Jason dove right in, picking FOSSASIA as the project he would contribute to.

FOSSASIA offered Jason a chance to learn a lot about development and open source. He worked on their event pages, integrated Loklak and added an RSS section to their website, gaining experience with version control, Docker, Pharo and Node.js in the process. Most importantly, Jason learned about collaboration. He had this to say:

“Collaboration is so important in the open source community as it allows everyone to come together to help the world. Google Code-in has persuaded me to contribute to open source in the future.”

Next up we have Hannah Pan, another US student. She chose to work on Haiku, an open source operating system built for personal computers, because it is written in C/C++, which she was already confident in.

Hannah got into computer science through a high school AP course and discovered Google Code-in through this blog (woohoo!). She decided to participate even though it had already been underway for two weeks. Aiming just to make the top 10 in order to have a chance at being a finalist (and earn a hoodie), Hannah finished as a grand prize winner! 

The learning curve was steep: *nix commands, build tools and GitHub all presented new challenges. She was surprised how much code she had to sift through sometimes just to isolate the cause of minor bugs.

Like all of the participants, Hannah found her mentors to be crucial in providing both technical guidance and moral support. She explained, “I was amazed at my mentors’ expertise, dedication, modesty, and high standards. They taught me to strive for excellence rather than settle for mediocrity.”

Among other things, Hannah added localization support to the Tipster app, fixed extractDebugInfo, and even wrote a how-to article relating to the work. Reflecting on her experience, Hannah wrote:

“On the technical side, not only have I learned a lot, but I have realized how much more I have yet to learn. In addition, it has taught me some important life skills that no doubt will benefit me in my future endeavors. I’d like to thank my mentors and other students who inspired me and pushed me to do my best.”

Thank you to Jason and Hannah both for contributing to open source and sharing their Google Code-in experiences with us. Stay tuned as we continue this series in our next blog post!

By Josh Simmons, Open Source Programs Office

From Google Summer of Code to Game of Thrones on the Back of a JavaScript Dragon (Part 1)

This guest post is a part of a short series about Tatyana Goldberg and Guy Yachdav, instructors at Technical University of Munich, and the journey that was inspired by their participation as Google Summer of Code mentors for the BioJS project.

Hello there! We are from the BioJavaScript (BioJS) project which first joined Google Summer of Code (GSoC) in 2014. Our experience in the program set us on a grand open source adventure that we’ll be sharing with you in a series of blog posts. We hope you enjoy our story and, more importantly, hope it inspires you to pursue your own open source adventure.

Tatyana Goldberg and Guy Yachdav, GSoC mentors and open source enthusiasts. Photo taken at the MorpheusCup competition in Luxembourg, May 2016.
We came together around the BioJS community, an open source project for creating beautiful, interactive visualizations of biological data on the web. BioJS visualizations are made up of components with a modular design. This modular design enables several things: components can be used by non-programmers, combined to make more complex visualizations, and easily integrated into existing web applications. Despite being a young community, BioJS already has traction in industry and academia.

In early 2014 we decided to apply for GSoC and we were fortunate to have our application accepted on our first try. The experience was extremely positive — the five students we accepted delivered great software and they had a big impact on the BioJS community:
  • The number of mailing list subscribers doubled in less than a month.
  • All five of our accepted students from 2014 became core developers.
  • Students were invited to six international conferences to share their work.
  • Students helped organize the first BioJS conference held July 2015.
  • Most importantly, the students independently designed BioJS version 2.0, which positioned BioJS as the leading open source visualization library for biological data. 
You can see three examples of the work GSoC students did on BioJS below:


MSAViewer is a visualization and analysis tool for multiple sequence alignments and was developed by Sebastian Wilzbach. Proteome Viewer is a multilevel visualization of proteomes in the UniProt database and was developed by Jose Villaveces. Genetic Variation Viewer is a visualization of the number and type of mutations at each position in a biological sequence and was developed by Saket Choudhary.

We learned a lot in the first year we participated in Google Summer of Code. Here are some of the takeaways that are especially relevant to mentors and organizations that are considering joining the program:
  1. GSoC is a great source of dedicated and enthusiastic young developers.
  2. Mentors need to carefully manage students, listen to them and let them lead initiatives when it makes sense.
  3. Org admins should leverage success in GSoC beyond the program.
  4. Orgs need to find the most motivated students and make sure their projects are feasible.
  5. People want to share in your success, so participation in GSoC can start a positive feedback loop attracting new contributors and users.
  6. Most importantly: the ideas behind GSoC - the love for open source and coding - are contagious and spread easily to larger audiences, especially to students and other people who work in academia. Just try it! 
Our positive experience spurred us to seek out and conquer new challenges. Stay tuned for our next post where we explain how GSoC inspired us to create a popular new class and how we applied data science to Game of Thrones.

By Tatyana Goldberg and Guy Yachdav, BioJS and TU Munich

Omnitone: Spatial audio on the web


Spatial audio is a key element of an immersive virtual reality (VR) experience. By bringing spatial audio to the web, the browser can be transformed into a complete VR media player with incredible reach and engagement. That’s why the Chrome WebAudio team has created and is releasing the Omnitone project, an open source spatial audio renderer with cross-browser support.

Our challenge was to introduce the audio spatialization technique called ambisonics so that users can hear full-sphere surround sound in the browser. To achieve this, we implemented ambisonic decoding with binaural rendering using web technology. There are several paths for introducing a new feature into the web platform, but we chose to use only the Web Audio API. In doing so, we can reach a larger audience with this cross-browser technology, and we can also avoid the lengthy standardization process for introducing a new Web Audio component. This is possible because the Web Audio API provides all the necessary building blocks for this audio spatialization technique.



Omnitone Audio Processing Diagram

The AmbiX format recording, which is the target format of the Omnitone decoder, contains four channels of audio encoded using ambisonics; such a recording can be decoded into an arbitrary speaker setup. Instead of an actual speaker array, Omnitone uses eight virtual speakers based on head-related transfer function (HRTF) convolution to render the final audio stream binaurally. This binaurally rendered audio can convey a sense of space when heard through headphones.
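
The sketch below, in TypeScript, is a conceptual illustration of that pipeline using only standard Web Audio API nodes. It is not Omnitone's actual implementation: the function name, decode gains and HRTF buffers are placeholders.

// Conceptual first-order-ambisonics renderer: split the 4-channel AmbiX
// stream, mix it into virtual-speaker feeds, convolve each feed with that
// speaker's stereo HRTF, and sum everything into a binaural output.
function renderFoaBinaurally(
  ctx: AudioContext,
  source: AudioNode,        // 4-channel AmbiX source (ACN order: W, Y, Z, X)
  decodeGains: number[][],  // [speaker][ambisonicChannel] gains (placeholder values)
  hrtfs: AudioBuffer[]      // one stereo impulse response per virtual speaker
): AudioNode {
  const splitter = ctx.createChannelSplitter(4);
  const output = ctx.createGain();
  source.connect(splitter);

  decodeGains.forEach((gains, speaker) => {
    // A weighted sum of the ambisonic channels forms this virtual speaker's feed.
    const feed = ctx.createGain();
    gains.forEach((g, channel) => {
      const weight = ctx.createGain();
      weight.gain.value = g;
      splitter.connect(weight, channel);  // pick one ambisonic channel
      weight.connect(feed);
    });

    // HRTF convolution turns the mono speaker feed into a binaural (stereo) signal.
    const convolver = ctx.createConvolver();
    convolver.buffer = hrtfs[speaker];
    feed.connect(convolver);
    convolver.connect(output);            // connections into one node are summed
  });

  return output;  // connect this to ctx.destination to hear the result
}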

The beauty of this mechanism lies in the sound-field rotation applied to the incoming spatial audio stream. The orientation sensor of a VR headset or a smartphone can be linked to Omnitone’s decoder to seamlessly rotate the entire sound field. The rest of the spatialization process will be handled automatically by Omnitone. A live demo can be found at the project landing page.

Throughout the project, we worked closely with the Google VR team for their VR audio expertise. Not only was their knowledge of spatial audio a tremendous help for the project, but the collaboration also ensured identical audio spatialization across all of Google’s VR applications - both on the web and Android (e.g. Google VR SDK, YouTube Android app). The Spatial Media Specification and HRTF sets are great examples of the Google VR team’s efforts, and Omnitone is built on top of this specification and these HRTF sets.

With emerging web-based VR projects like WebVR, Omnitone’s audio spatialization can play a critical role in a more immersive VR experience on the web. Web-based VR applications will also benefit from high-quality streaming spatial audio, as the Chrome Media team has recently added FOA compression to the open source audio codec Opus. More exciting things like VR view integration, higher-order ambisonics and mobile web support will also be coming soon to Omnitone.

We look forward to seeing what people do with Omnitone now that it's open source. Feel free to reach out to us or leave a comment with your thoughts and feedback on the issue tracker on GitHub.

By Hongchan Choi and Raymond Toy, Chrome Team

Due to the incomplete implementation of multichannel audio decoding on various browsers, Omnitone does not support mobile web at the time of writing.