
Open sourcing Seurat: bringing high-fidelity scenes to mobile VR

Crossposted from the Google Developers Blog

Great VR experiences make you feel like you’re really somewhere else. To create deeply immersive experiences, there are a lot of factors that need to come together: amazing graphics, spatialized audio, and the ability to move around and feel like the world is responding to you.

Last year at I/O, we announced Seurat as a powerful tool to help developers and creators bring high-fidelity graphics to standalone VR headsets with full positional tracking, like the Lenovo Mirage Solo with Daydream. Seurat is a scene simplification technology designed to process very complex 3D scenes into a representation that renders efficiently on mobile hardware. Here’s how ILMxLAB was able to use Seurat to bring an incredibly detailed ‘Rogue One: A Star Wars Story’ scene to a standalone VR experience.

Today, we’re open sourcing Seurat to the developer community. You can now use Seurat to bring visually stunning scenes to your own VR applications and have the flexibility to customize the tool for your own workflows.

Behind the scenes: how Seurat works

Seurat takes advantage of the fact that VR scenes are typically viewed from within a limited viewing region, and leverages this to optimize the geometry and textures in your scene. It takes RGBD images (color plus depth) as input and generates a textured mesh, targeting a configurable triangle count, texture size, and fill rate, to simplify scenes beyond what traditional methods can achieve.
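
To give a feel for the workflow: the pipeline consumes a manifest describing the RGBD captures of your scene and writes out the simplified, textured mesh. The binary and flag names below are a sketch based on our reading of the project documentation, not the definitive interface; check the GitHub page for the exact invocation:

./seurat-pipeline -input_path=capture_manifest.json -output_path=processed_scene

The output mesh can then be imported into your engine like any other asset.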


To demonstrate what Seurat can do, here’s a snippet from Blade Runner: Revelations, which launched today with the Lenovo Mirage Solo.

Blade Runner: Revelations by Alcon Interactive and Seismic Games
The Blade Runner universe is known for its stunning worlds, and in Revelations, you get to unravel a mystery around fugitive Replicants in the futuristic but gritty streets. To create the look and feel for Revelations, Seismic used Seurat to bring a scene of 46.6 million triangles down to only 307,000, improving performance by more than 100x with almost no loss in visual quality:

Original scene:

Seurat-processed scene: 

If you’re interested in learning more about Seurat or trying it out yourself, visit the Seurat GitHub page to access the documentation and source code. We’re looking forward to seeing what you build!

By Manfred Ernst, Software Engineer

Rolling out the red carpet for GSoC 2018 students!

Congratulations to our 2018 Google Summer of Code (GSoC) students and a big thank you to everyone who applied! Our 206 mentoring organizations have chosen the 1,264 students that they'll be working with during the 14th Google Summer of Code. This year’s students come from 64 different countries!

The next step for participating students is the Community Bonding period which runs from April 23rd through May 15th. During this time, students will get up to speed on the culture and code base of their new community. They’ll also get acquainted with their mentor(s) and learn more about the languages or tools they will need to complete their projects. Coding begins May 15th and will continue throughout the summer until August 14th.

To the more than 3,800 students who were not chosen this year: don’t be discouraged! Many students apply more than once before being accepted to GSoC. You can improve your odds for next time by contributing directly to the open source project of your choice; organizations are always eager for new contributors! Look around GitHub and elsewhere on the internet for a project that interests you and get started.

Happy coding, everyone!

By Stephanie Taylor, GSoC Program Lead

My first open source project and Google Code-in

This is a guest post from a mentor with coala, an open source tool for linting and fixing code in many different languages, which participated in Google Code-in 2017.

About two years ago, my friend Gyan and I built a small web app which checked whether or not a given username was available on a few popular social media websites. The idea was simple: judge availability of the username on the basis of an HTTP response. Here’s a simplified example of the idea in Python:

import requests

def form_website_url(website, username):
    # e.g. form_website_url('github', 'manu-chroma') returns 'https://github.com/manu-chroma'
    return 'https://{}.com/{}'.format(website, username)

response = requests.get(form_website_url('github', 'manu-chroma'))
if response.status_code == 404:
    print('username available')
else:
    print('username taken')
Much to our delight, it worked! Well, almost. It had a lot of bugs but we didn’t care much at the time. It was my first Python project and the first time I open sourced my work. I always look back on it as a cool idea, proud that I made it and learned a lot in the process.

But the project had been abandoned until John from coala approached me. John suggested we use it for Google Code-in because one of coala’s tasks for the students was to create accounts on a few common coding related websites. Students could use the username availability tool to find a good single username (people like their usernames to be consistent across websites) and coala could use it to verify that the accounts were created.

I had submitted a few patches to coala in the past, so this sounded good to me! The competition clashed with my vacation plans, but I wanted to get involved, so I took the opportunity to become a mentor.

Over the course of the program, students not only used the username availability tool but they also began making major improvements. We took the cue and began adding tasks specifically about the tool. Here are just a few of the things students added:
  • Regex to determine whether a given username was valid for any given website
  • More websites, bringing it to a total of 13
  • Tests (!)
The web app is online so you can check username availability too!

I had such a fun time working with students in Google Code-in; their enthusiasm and energy were amazing. Special thanks to students Andrew, Nalin, Joshua, and biscuitsnake for all the time and effort you put into the project. You did really useful work and I hope you learned from the experience!

I want to thank John for approaching me in the first place and suggesting we use and improve the project. He was an unstoppable force throughout the competition, helping both students and fellow mentors. John even helped me with code reviews to really refine the work students submitted, and helped them improve based on the feedback.

Kudos to the Google Open Source team for organizing it so well and lowering the barriers to entry into open source for high school students around the world.

By Manvendra Singh, coala mentor

A galactic experience in Google Code-in 2017

This is a guest post from Liquid Galaxy, one of the organizations that participated in both Google Summer of Code and Google Code-in 2017.

Liquid Galaxy, an open source project that powers panoramic views spanning multiple computers and displays, has been participating in Google Summer of Code (GSoC) since 2011. However, we never applied to participate in Google Code-in (GCI) because we heard stories from other projects about long hours and interrupted holidays in service of mentoring eager young students.

That changed in 2017! And, while the stories are true, we have to say it’s also an amazing and worthwhile experience.

It was hard for our small project to recruit the number of mentors needed. Thankfully, our GSoC mentors stepped up, as did many former GSoC students. We even had forward-thinking students who were interested in participating in GSoC 2018 volunteer to mentor! While it was challenging, our team of mentors helped us have a nearly flawless GCI experience.

The Google Open Source team only had to nudge us once, when a student’s task had been pending review for more than 36 hours. We’re pretty happy with that considering we had nearly 500 tasks completed over the 50 days of the contest.

More important than our experience, though, is the student experience. We learned a lot from seeing how they chose tasks, the attention to detail some of them put into their work, and the level of interaction between the students and the mentors. Considering these were young students, ranging in age from 13 to 17, they far exceeded our expectations.

There was one piece of advice the Google Open Source team gave us that we didn’t understand as GCI newbies: have a large number of tasks ready from day one, and leave some unpublished until the halfway point. That ended up being key: it ensured we had enough tasks for the initial flood of students and some in reserve for the second flood around the holidays. Our team of mentors worked hard from the moment we were accepted into GCI, creating over 150 tasks in five different categories. Students seemed to think we did a good job and told us they enjoyed the variety of tasks and level of difficulty.

We’re glad we finally participated in Google Code-in and we’ll definitely be applying next time! You can learn more about the project and the students who worked with us on our blog.

By Andreu Ibáñez, Liquid Galaxy org admin

Cloudprober: open source black-box monitoring software

Ever wonder if users can actually access your microservices? Seeing timeouts in your applications and not sure whether it’s the network or your servers that are too busy? Curious about the 99th percentile network latency between your on-premises data center and services running in the cloud?

Cloudprober, which we open sourced last year, answers questions like these and more. It’s black-box monitoring software that "probes" your systems and services and generates metrics based on probe results. This kind of monitoring strategy doesn’t make assumptions about how your service is implemented and it works at the same layer as your service’s users. You can make changes to your service’s implementation with peace of mind, knowing you’ll notice if a change prevents users from accessing the service.

A probe can be anything: a ping, an HTTP request, or even a custom program that mimics how your services are consumed (for example, creating and accessing a blog post). Cloudprober builds and exports standard metrics, and provides a way to easily integrate them with your existing monitoring stack, such as Prometheus-Grafana, Stackdriver and soon InfluxDB. Cloudprober is written in Go and works on all major platforms: Linux, Mac OS, and Windows. It's released as a static binary as well as a Docker image.

Here’s an example probe config that runs an HTTP probe against your forwarding rules and exports data to Stackdriver and Prometheus:
probe {
  name: "internal-web"
  type: HTTP
  # Probe all forwarding rules that contain web-fr in their name.
  targets {
    gce_targets {
      forwarding_rules {}
    }
    regex: "web-fr-.*"
  }
  interval_msec: 5000
  timeout_msec: 1000
  http_probe {
    port: 8080
  }
}

# Export data to Stackdriver.
surfacer {
  type: STACKDRIVER
}

# Export data to Prometheus.
surfacer {
  type: PROMETHEUS
}

The probe config is run like this from the command-line:
./cloudprober --config_file $HOME/cloudprober/cloudprober.cfg

This example probe config highlights two major features of Cloudprober: automatic, continuous discovery of cloud targets, and data export over multiple channels (Stackdriver and Prometheus in this case). Cloud deployments are dynamic and often change constantly. Cloudprober’s dynamic target discovery gives you one less thing to worry about when making minor infrastructure changes, and data export in various formats helps it integrate well with your existing monitoring setup.

Other features include:
  • Configuration based on Go text templates, which adds programming capability to configs, such as "for" loops and conditionals (see the sketch after this list)
  • Fast and efficient implementation of core probe types
  • Custom probes through the "external" probe type
  • The ability to read config through metadata
  • And cloud (Stackdriver) logging
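
As a sketch of the templating feature mentioned above, here’s what a config that stamps out one ping probe per host with a "for" loop might look like. This assumes the mkSlice template helper described in the Cloudprober documentation; verify the available template functions against the docs for your version:

{{ range mkSlice "web-a" "web-b" "web-c" }}
probe {
  name: "ping-{{.}}"
  type: PING
  targets {
    host_names: "{{.}}"
  }
  interval_msec: 10000
}
{{ end }}

At config load time, the template expands into three concrete probe stanzas, one per host.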
Though most of the cloud support is specific to Google Cloud Platform (GCP), it’s easy to add support for other providers. Cloudprober has an extensible architecture so you can add new types of targets, probes and monitoring backends.

Cloudprober was built by the Cloud Networking Site Reliability Engineering (SRE) team at Google to monitor network availability and associated features. Today, it's used by several other Google Cloud SRE teams as well.

We’re excited to share Cloudprober with the wider devops community! You can find more examples in the GitHub repository and more information on the project website.

By Manu Garg, Cloud Networking Team

Open sourcing GTXiLib, an accessibility test automation framework for iOS

Google believes everyone should be able to access and enjoy the web. We share guidance on building accessible tech over at Google Accessibility and we recently launched a dedicated disability support team. Today, we’re excited to announce that we’ve open sourced GTXiLib, an accessibility test automation framework for iOS, under the Apache license.

We want our products to be accessible, and automation with frameworks like GTXiLib is one of the ways we scale our accessibility testing. GTXiLib can automate the process of checking for some kinds of issues, such as missing labels, hints, or low contrast text.

GTXiLib is written in Objective-C and integrates with your existing XCTests to perform all the registered accessibility checks before the test tearDown. When a check fails, the test fails as well. Fixing your tests will thus lead to better accessibility, and your tests can catch new accessibility issues going forward.
  • Reuse your tests: GTXiLib integrates into your existing functional tests, enhancing the value of any tests that you have or any that you write.
  • Incremental accessibility testing: GTXiLib can be installed on a single test case, a test class, or a specific subset of tests, giving you the freedom to add accessibility testing incrementally. This helped drive GTXiLib adoption in large projects at Google.
  • Author your own checks: GTXiLib has a simple API to create custom checks based on the specific needs of your app. For example, you can ensure every button in your app has an accessibilityHint using a custom check (see the sketch below).
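
To make this concrete, here’s a short sketch of installing the built-in checks on a test class and authoring a custom hint check. The API names below reflect our reading of the project README at the time of writing and may change, so treat this as illustrative rather than definitive (MyAppTests is a hypothetical XCTestCase subclass):

// Install all built-in GTXiLib checks on every test in MyAppTests,
// so they run before each test's tearDown.
[GTXiLib installOnTestSuite:[GTXTestSuite suiteWithAllTestsInClass:[MyAppTests class]]
                     checks:[GTXChecksCollection allGTXChecks]
          elementBlacklists:@[]];

// A custom check that fails any accessibility element without a hint.
id<GTXChecking> hintCheck =
    [GTXiLib checkWithName:@"accessibilityHint is set"
                     block:^BOOL(id element, GTXErrorRefType errorOrNil) {
      return [[element accessibilityHint] length] > 0;
    }];
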
Do you also care about accessibility? Help us sharpen GTXiLib by suggesting a check or, better yet, writing one. You can add GTXiLib to your project using CocoaPods or by using its Xcode project file.

We hope you find this useful and look forward to feedback and contributions from the community! Please check out the README for more information.

By Siddartha Janga, Google Central Accessibility Team 

Celebrating open source mentorship with Joomla

Let’s marvel for a moment: as Google Summer of Code (GSoC) 2018 begins, 46 of the participating open source organizations are celebrating a decade or more with the program. There are 586 collective years of mentorship between them, and that’s just through GSoC.

Free and open source software projects have been doing outreach and community building since the beginning. The free software movement has been around for 35 years, and open source has been around for 20.

Bringing new people into open source is necessary for project health and sustainability, but it’s not easy. It takes time and effort to prepare onboarding materials and mentor people. It takes personal dedication, a welcoming culture, and a commitment to institutional knowledge. Sustained volunteerism at this scale is worthy of celebration!

Joomla is one open source project that exemplifies this, and Puneet Kala is one such person. Joomla, a web content management system (CMS) first released in 2005, is now in its 11th year of GSoC. More than 80 students have participated over the years. Most students are still actively contributing, and many have gone on to become mentors.

Puneet, now Joomla’s GSoC team lead, began with the project as a student five years ago. He sent along this article celebrating their 10th anniversary, which includes links to interviews with other students who have become mentors, and this panel discussion from Joomla World Conference.

It’s always great to hear from the people who have participated in Google Summer of Code. The stories are inspiring and educational. They know a thing or two about building open source communities, so we share what they have to say: you can find guest posts here.

We’d like to extend our heartfelt thanks to the 608 open source organizations and 12,000 organization administrators and mentors who have been a part of GSoC so far. We’d also like to applaud the 46 organizations that have 10+ years under their belts!

Your tireless investment in the future of people and open source is a testament to generosity.

By Josh Simmons, Google Open Source

Coding your way into cinemas

This is a guest post from apertus° and TimVideos.us, open source organizations that participated in Google Summer of Code last year and are back for 2018!

The apertus° AXIOM project is bringing the world’s first open hardware/free software digital motion picture production camera to life. The project has a rich history, exercises a steadfast adherence to the open source ethos, and all aspects of development have always revolved around supporting and utilising free technologies. The challenge of building a sophisticated digital cinema camera was perfect for Google Summer of Code 2017. But let’s start at the beginning: why did the team behind the project embark on their journey?

Modern Cinematography

For over a century film was dominated by analog cameras and celluloid, but in the late 2000s things changed radically with the adoption of digital projection in cinemas. It was a natural next step, then, for filmmakers to shoot and produce films digitally. Certain applications in science, large format photography and fine arts still hold onto 35mm film processing, but the reduction in costs and improved workflows associated with digital image capture have revolutionised how we create and consume visual content.

The DSLR revolution

Photo by Matthew Pearce, licensed CC BY-SA 2.0.
Filmmaking has long been considered an expensive discipline accessible only to a select few. This all changed with the adoption of movie recording capabilities in digital single-lens reflex (DSLR) cameras. For multinational corporations this “new” feature was a relatively straightforward addition to existing models as most compact digital photo cameras could already record video clips. This was the first time that a large diameter image sensor, a vital component for creating the typical shallow depth of field we consider cinematic, appeared in consumer cameras. In recent times, user groups have stepped up to contribute to the DSLR revolution first-hand, including groups like the Magic Lantern community.

Magic Lantern

Photo by Dave Dugdale licensed CC BY-SA 2.0.
Magic Lantern is a free and open source software add-on that runs from a camera’s SD/CF card. It adds a host of new features to Canon’s DSLRs that weren't included from the factory, such as allowing users to record high-dynamic range (HDR) video or 14-bit uncompressed RAW video. It’s a community project and many filmmakers simply wouldn’t have bought a Canon camera if it weren’t for the features that Magic Lantern pioneered. Because installing Magic Lantern doesn’t replace the stock Canon firmware or modify the read-only memory (ROM) but runs alongside it, it is both easy to remove and carries little risk. Originally developed for filmmaking, Magic Lantern’s feature base has expanded to include tools useful for still photography as well.

Starting the revolution for real 

Of course, Magic Lantern has been held back by the underlying proprietary hardware routines on existing camera models. So, in 2014 a team of developers and filmmakers around the apertus° project joined forces with the Magic Lantern team to lay the foundation for a totally independent, open hardware, free software, digital cinema camera. They ran a successful crowdfunding campaign for initial development, and they completed hardware development of the first developer kits in 2016. Unlike traditional cameras, the AXIOM is designed to be completely modular, and so continuously evolve, thereby preventing it from ever becoming obsolete. How the camera evolves is determined by its user community, with its design files and source code freely available and users encouraged to duplicate, modify and redistribute anything and everything related to the camera.

While the camera is primarily intended for motion picture production, there are many other applications where the AXIOM can be useful. Individuals in science, astronomy, medicine, aerial mapping, and industrial automation, as well as those who record events or talks at conferences, have expressed interest in the camera. A modular and open source device for digital imaging allows users to build a system that meets their unique requirements. One such company, Mavrx Inc, which uses aerial imagery to provide actionable insight for the agriculture industry, chose the camera because it not only let them process data more efficiently than comparable cameras, but also allowed them to reconfigure its form factor so that it could be installed alongside existing equipment.

Google Summer of Code 2017

Continuing their journey, apertus° participated in Google Summer of Code for the first time in 2017. They received about 30 applications from interested students, from which they needed to select three. Projects ranged from field programmable gate array (FPGA) centered video applications to creating Linux kernel drivers for specific camera hardware. Similarly TimVideos.us, an open hardware project for live event streaming and conference recording, is working on FPGA projects around video interfaces and processing.

After some preliminary work, the students came to grips with the camera’s operating processes and all three dove in enthusiastically. One student failed the first evaluation and another failed the second, but one student successfully completed their work.

That student, Vlad Niculescu, worked on defining control loops for a voltage controller using VHSIC Hardware Description Language (VHDL) for a potential future AXIOM Beta Power Board, an FPGA-driven smart switching regulator for increasing the power efficiency and improving flexibility around voltage regulation.
Left: The printed circuit board (PCB) for testing the switching regulator FPGA logic. Right: After final improvements, the ripple in the output voltage was reduced to around 30mV at a 2V target voltage.
Vlad had this to say about his experience:

“The knowledge I acquired during my work with this project and apertus° was very satisfying. Besides the electrical skills gained I also managed to obtain other, important universal skills. One of the things I learned was that the key to solving complex problems can often be found by dividing them into small blocks so that the greater whole can be easily observed by others. Writing better code and managing the stages of building a complex project have become lessons that will no doubt become valuable in the future. I will always be grateful to my mentor as he had the patience to explain everything carefully and teach me new things step by step, and also to apertus° and Google’s Summer of Code program, without which I may not have gained the experience of working on a project like this one.”

We are grateful for Vlad’s work and congratulate him for successfully completing the program. If you find open hardware and video production interesting, we encourage you to reach out and join the community; both apertus° and TimVideos.us are back for Google Summer of Code 2018.

By Sebastian Pichelhofer, apertus°, and Tim 'mithro' Ansell, TimVideos.us

Googlers on the road: FOSSASIA Summit 2018

In a week’s time, free and open source enthusiasts of all kinds will gather in Singapore for FOSSASIA Summit 2018. Established in 2009, the annual event attracts more than 3,000 attendees, running from March 22nd to 25th this year.

FOSSASIA logo licensed LGPL-2.1.
FOSSASIA Summit is organized by FOSSASIA, a nonprofit organization that focuses on Asia and brings people together around open technology both in-person and online. The organization is also home to many open source projects and is a regular participant in the Google Open Source team’s student programs, Google Summer of Code and Google Code-in.

Our team is excited to be among those attending and speaking at the conference this year, and we’re proud that Google Cloud is a sponsor. If you’re around, please come say hello. The highlight of our travel is meeting the students and mentors who have participated in our programs!

Here are the Googlers who will be giving presentations:

Thursday, March 22nd

2:00pm Real-world Machine Learning with TensorFlow and Cloud ML by Kaz Sato

Friday, March 23rd

9:30am BigQuery codelab by Jan Peuker
10:30am Working with Cloud DataPrep by KC Ayyagari
1:00pm Extract, analyze & translate Text from Images with Cloud ML APIs by Sara Robinson
2:00pm Bitcoin in BigQuery: blockchain analytics on public data by Allen Day
2:40pm What can we learn from 1.1 billion GitHub events and 42 TB of code? by Felipe Hoffa
2:40pm Engaging IoT solutions with Machine Learning by Markku Lepisto
2:45pm CloudML Engine: Qwik Start by Kaz Sato
3:20pm Systems as choreographed behavior with Kubernetes by Jan Peuker
4:00pm The Assistant by Manikantan Krishnamurthy

Saturday, March 24th

10:30am Google Summer of Code and Google Code-in by Stephanie Taylor
10:30am Zero to ML on Google Cloud Platform by Sara Robinson
11:00am Building a Sustainable Open Tech Community through Coding Programs, Contests and Hackathons panel including Stephanie Taylor
11:05am Codifying Security and Modern Secrets Management by Seth Vargo
1:00pm Open Source Education panel including Cat Allman
5:00pm Everything as Code by Seth Vargo

We look forward to seeing you there!

By Josh Simmons, Google Open Source

Semantic Image Segmentation with DeepLab in TensorFlow

Cross-posted on the Google Research Blog.

Semantic image segmentation, the task of assigning a semantic label such as “road”, “sky”, “person”, or “dog” to every pixel in an image, enables numerous new applications, such as the synthetic shallow depth-of-field effect shipped in the portrait mode of the Pixel 2 and Pixel 2 XL smartphones and mobile real-time video segmentation. Assigning these semantic labels requires pinpointing the outline of objects, and thus imposes much stricter localization accuracy requirements than other visual entity recognition tasks such as image-level classification or bounding box-level detection.


Today, we are excited to announce the open source release of our latest and best performing semantic image segmentation model, DeepLab-v3+ [1]*, implemented in TensorFlow. This release includes DeepLab-v3+ models built on top of a powerful convolutional neural network (CNN) backbone architecture [2, 3] for the most accurate results, intended for server-side deployment. As part of this release, we are additionally sharing our TensorFlow model training and evaluation code, as well as models already pre-trained on the Pascal VOC 2012 and Cityscapes benchmark semantic segmentation tasks.
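
If you’d like to try the released models, here is a minimal single-image inference sketch in Python, assuming TensorFlow 1.x and a frozen DeepLab-v3+ graph downloaded from the model zoo (the filename below is a placeholder; the input and output tensor names follow the demo code in the repository):

import numpy as np
import tensorflow as tf
from PIL import Image

# Load the frozen inference graph (placeholder filename; use a real
# checkpoint from the model zoo in the repository).
graph_def = tf.GraphDef()
with tf.gfile.GFile('deeplabv3_frozen_inference_graph.pb', 'rb') as f:
  graph_def.ParseFromString(f.read())

with tf.Graph().as_default() as graph:
  tf.import_graph_def(graph_def, name='')

# Run the model on one RGB image; the output assigns a semantic label
# to every pixel.
image = np.asarray(Image.open('input.jpg').convert('RGB'))
with tf.Session(graph=graph) as sess:
  seg_map = sess.run('SemanticPredictions:0',
                     feed_dict={'ImageTensor:0': image[np.newaxis, ...]})[0]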

Since the first incarnation of our DeepLab model [4] three years ago, improved CNN feature extractors, better object scale modeling, careful assimilation of contextual information, improved training procedures, and increasingly powerful hardware and software have led to improvements with DeepLab-v2 [5] and DeepLab-v3 [6]. With DeepLab-v3+, we extend DeepLab-v3 by adding a simple yet effective decoder module to refine the segmentation results especially along object boundaries. We further apply the depthwise separable convolution to both atrous spatial pyramid pooling [5, 6] and decoder modules, resulting in a faster and stronger encoder-decoder network for semantic segmentation.


Modern semantic image segmentation systems built on top of convolutional neural networks (CNNs) have reached accuracy levels that were hard to imagine even five years ago, thanks to advances in methods, hardware, and datasets. We hope that publicly sharing our system with the community will make it easier for other groups in academia and industry to reproduce and further improve upon state-of-art systems, train models on new datasets, and envision new applications for this technology.

By Liang-Chieh Chen and Yukun Zhu, Google Research

Acknowledgements
We would like to thank Iasonas Kokkinos, Kevin Murphy, and Alan L. Yuille (co-authors of DeepLab-v1 and -v2) for their support and valuable discussions, as well as Mark Sandler, Andrew Howard, Menglong Zhu, Chen Sun, Derek Chow, Andre Araujo, Haozhi Qi, Jifeng Dai, and the Google Mobile Vision team.

References
  1. Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation, Liang-Chieh Chen, Yukun Zhu, George Papandreou, Florian Schroff, and Hartwig Adam, arXiv: 1802.02611, 2018.
  2. Xception: Deep Learning with Depthwise Separable Convolutions, François Chollet, Proc. of CVPR, 2017.
  3. Deformable Convolutional Networks — COCO Detection and Segmentation Challenge 2017 Entry, Haozhi Qi, Zheng Zhang, Bin Xiao, Han Hu, Bowen Cheng, Yichen Wei, and Jifeng Dai, ICCV COCO Challenge Workshop, 2017.
  4. Semantic Image Segmentation with Deep Convolutional Nets and Fully Connected CRFs, Liang-Chieh Chen, George Papandreou, Iasonas Kokkinos, Kevin Murphy, and Alan L. Yuille, Proc. of ICLR, 2015.
  5. DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs, Liang-Chieh Chen, George Papandreou, Iasonas Kokkinos, Kevin Murphy, and Alan L. Yuille, TPAMI, 2017.
  6. Rethinking Atrous Convolution for Semantic Image Segmentation, Liang-Chieh Chen, George Papandreou, Florian Schroff, and Hartwig Adam, arXiv:1706.05587, 2017.


* DeepLab-v3+ is not used to power Pixel 2’s portrait mode or real-time video segmentation; these are mentioned in the post as examples of features this type of technology can enable.