Category Archives: Open Source Blog

News about Google’s open source projects and programs

OpenTelemetry’s First Release Candidates

OpenTelemetry has hit another milestone with the tracing specification reaching release candidate status.

With the specification now ready to go, expect to see tracing release candidates of the official APIs and SDKs over the next few weeks, along with updated exporters for Cloud Trace. In the coming months, the same will follow for the metrics specification, then metrics release candidates of the APIs and SDKs along with Cloud Monitoring exporters, and finally the project’s general availability. At that point we’ll switch our default application metrics and distributed tracing instrumentation from OpenCensus to OpenTelemetry.

This is exciting news for Google Cloud customers, as OpenTelemetry will enable even better observability experiences, whether you use Cloud Monitoring and Cloud Trace or the third-party monitoring and operations tools of your choice.

Originally posted on the OpenTelemetry blog.


As we’ve discussed in past announcements, we’re hard at work building OpenTelemetry’s first GA quality release. Today marks another milestone in this journey, with the freezing and first release candidate of the tracing specification.
Tracing Spec Release Candidate

The tracing specification is now considered to be a release candidate (RC) and is frozen, and the OpenTelemetry APIs and SDKs have a stable specification to build their own release candidates against. This means:
  • API, SDK, and Collector release candidates will appear within the next few weeks.
  • No breaking spec changes are allowed between now and the final GA specification, beyond any showstopper (P1) issues that are revealed in the RC period. We don’t expect any of these to appear, but the purpose of the RC period is for us to validate that we have a GA-worthy spec.
  • Some non-breaking changes will be allowed during the RC period. Most of these are clarifications of existing behavior or purely editorial updates.
The release candidate sections of the specification include all tracing-related dependencies, specifically the following sections: Trace, Baggage, Resource, Context Propagation, Environment Variables, and Exporters (for traces). You can view the progress of each OpenTelemetry component’s implementation in the project status matrix.

What’s Coming Next?

Achieving a release candidate of the tracing specification has been the top priority of OpenTelemetry since releasing our beta in March. With this completed, our focus now shifts to tracing release candidates of the APIs, SDKs, Collector, and auto instrumentation components, and producing a release candidate of the metrics specification.

RC Tracing Implementations

Most OpenTelemetry APIs and SDKs are close to completing their tracing RC implementations, and we expect the first wave of these to arrive within the next two weeks. Contributors who are looking to provide instrumentation (for various web frameworks, storage clients, etc.) can start building against release candidate APIs once they arrive. While the APIs may change in response to issues discovered during RC usage and testing (which will result in multiple pre-GA release candidates for these components), such changes will be extremely constrained.

Several SDKs will have two waves of release candidate milestones: the first will contain functionality from the tracing and context propagation sections of the specification, and the second will include release candidate implementations for baggage, exporters, resources, and environment variables.

Metrics

In parallel with the tracing RC component releases, we will apply the focus that we’ve had on tracing to the metrics specification. Starting this week, we will categorize which work items are required for GA, which can optionally be allowed in GA (non-breaking), and which will be shifted to post-GA. After completing this, we will track our burndown progress, lock the metrics specification, and publish a metrics specification release candidate once all P1 items are complete. Shortly after this, the APIs, SDKs, Collector, and other components will publish release candidates with RC-quality tracing and metrics functionality.

Productionization and GA Readiness Work

Once the metrics specification, SDKs, Collector, and other components reach release candidate status, we will focus on productionization tasks like writing documentation, producing a post-GA versioning strategy, building additional automated tests, etc. Once we are satisfied with each component’s adoptability and reliability, we will announce their general availability.

Overall Timeline

  1. Components (APIs, SDKs, Collector, auto instrumentation, etc.) issue release candidates with RC-quality tracing functionality.
  2. The metrics section of the specification achieves RC quality and is frozen.
  3. Components issue release candidates with RC-quality tracing and metrics functionality.
  4. Once we are satisfied with our metrics + tracing release candidates, OpenTelemetry goes GA.
  5. Logging enters beta, then issues an RC specification, followed by RC-quality logging functionality in each component, followed by a GA for logging.
We will have a better understanding of our GA release timeline in the coming weeks once outstanding work on the metrics specification is fully accounted for.

Tracking a Language’s Progress

As mentioned above, you can view the progress of a particular component (API, SDK, etc.) in the project status matrix. Each component’s implementation has its own timeline, though a core set (the JavaScript, Java, Go, Python, and .NET APIs + SDKs, the Collector, and Java auto instrumentation) are all tracking well. Each component has its own GA burndown board.

FAQ

I want to use OpenTelemetry on my production services; what’s the impact of today’s announcement?

SDKs with release candidate quality tracing support will be available in a few weeks. Release candidates are not recommended for critical production services; however, they are functional and are intended to offer APIs that are compatible with their upcoming GA counterparts.

I want to write instrumentation for OpenTelemetry; what’s the impact of today’s announcement?

APIs with release candidate quality tracing support will be available shortly (prior to the SDKs). You can bind against these to produce traces that will be picked up by the OpenTelemetry SDKs or any other implementation of the OpenTelemetry APIs.
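
As an illustrative sketch (not official guidance), binding against the tracing API in Go looks roughly like this; the otel package path matches the OpenTelemetry Go API, while mylib, HandleRequest, and doWork are hypothetical names:

package mylib

import (
	"context"

	"go.opentelemetry.io/otel"
)

// HandleRequest shows the core pattern: obtain a tracer from the globally
// registered provider, start a span, and end it when the work completes.
func HandleRequest(ctx context.Context) {
	tracer := otel.Tracer("example.com/mylib")
	ctx, span := tracer.Start(ctx, "HandleRequest")
	defer span.End()

	doWork(ctx) // child operations start child spans from ctx
}

func doWork(ctx context.Context) { /* ... */ }

Because this code depends only on the API, whichever SDK (or alternative implementation) the application installs at startup receives the spans.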

When will OpenTelemetry offer drop-in replacements for OpenCensus and OpenTracing?

Work is currently underway on bridge APIs that allow OpenTelemetry SDKs to seamlessly replace OpenCensus libraries or OpenTracing implementations. While the delivery date of this functionality is not tied to OpenTelemetry’s GA goals, we expect this to arrive between each API + SDK’s release candidate and GA milestones.

Wrapping Up

Producing a specification release candidate is an important milestone for the OpenTelemetry community, and it took significant effort on the part of our contributors to make this happen. We’d like to thank every person and every organization that was a part of this release, and to recognize that their contributions are laying the groundwork for the project's long term success.

If you haven’t been a part of the OpenTelemetry community but would like to join, now is the perfect time! OpenTelemetry is now in the top three CNCF projects by weekly and cumulative commits, and no matter your level of commitment (ha!) to the project, contributions are always welcome. If you have a particular area that you’re interested in (for example, the Python API + SDK), the best way to get involved is to join the relevant weekly SIG meetings or interact with other contributors on Gitter.

By Morgan McLean, Google Cloud

Fuzzing internships for open source software

Open source software is the foundation of many modern software products. Over the years, developers have increasingly relied on reusable open source components for their applications. It is paramount that these open source components are secure and reliable, as any weaknesses in them impact everything built on top of them.

Google cares deeply about the security of the open source ecosystem and recently launched the Open Source Security Foundation with other industry partners. Fuzzing is an automated testing technique to find bugs by feeding unexpected inputs to a target program. At Google, we leverage fuzzing at scale to find tens of thousands of security vulnerabilities and stability bugs. This summer, as part of Google’s OSS internship initiative, we hosted 50 interns to improve the state of fuzz testing in the open source ecosystem.
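
To make the technique concrete, here is a minimal sketch of a fuzz target in Go’s native fuzzing style (OSS-Fuzz targets are more commonly libFuzzer entry points in C/C++; Parse here is a hypothetical function under test):

// In a _test.go file of the package under test.
package parser

import "testing"

// Parse stands in for the library entry point being tested.
func Parse(data []byte) error { return nil }

// FuzzParse asks the fuzzing engine to call Parse with endlessly mutated
// inputs derived from the seed corpus, flagging any input that causes a
// panic, crash, or hang.
func FuzzParse(f *testing.F) {
	f.Add([]byte(`{"name": "example"}`)) // seed corpus entry
	f.Fuzz(func(t *testing.T, data []byte) {
		_ = Parse(data)
	})
}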

The fuzzing interns worked toward integrating new projects into, and improving existing ones in, OSS-Fuzz, our continuous fuzzing service for the open source community (which hosts 350+ projects and has found 22,700 bugs, 89% of which have been fixed). Several widely used open source libraries, including but not limited to nginx, postgresql, usrsctp, and openexr, now have continuous fuzzing coverage as a result of these efforts.

Another group of interns focused on improving the security of the Linux kernel. syzkaller, a kernel fuzzing tool from Google, has been instrumental in finding kernel vulnerabilities in various operating systems. The interns were tasked with improving fuzzing coverage by adding new syzkaller descriptions (for example, for ip tunnels, io_uring, and bpf_lsm), refining the interface description language, and advancing kernel fault injection capabilities.

Some interns chose to write fuzzers for Android and Chrome, which are open source projects that billions of internet users rely on. For Android, the interns contributed several new fuzzers for previously uncovered areas: network protocols such as pppd and dns, audio codecs like monoblend and g722, and the Android framework. On the Chrome side, interns improved existing blackbox fuzzers, particularly in the areas of DOM, IPC, media, and extensions, and added new libprotobuf-based fuzzers for Mojo.

Our last set of interns researched several under-explored areas of fuzzing, including fuzzer benchmarking, ML-based fuzzing, differential fuzzing, and Bazel rules for build simplification, and made useful contributions in each.

Over the course of the internship, our interns have reported over 150 security vulnerabilities and 750 functional bugs. Given the overall success of these efforts, we plan to continue hosting fuzzing internships every year to help secure the open source ecosystem and teach incoming open source contributors about the importance of fuzzing. For more information on the Google internship program and other student opportunities, check out careers.google.com/students. We encourage you to apply.

By: Abhishek Arya, Google Chrome Security

Announcing the latest Google Open Source Peer Bonus winners!

We are very pleased to announce the latest Google Open Source Peer Bonus winners!

The Google Open Source Peer Bonus program rewards external open source contributors nominated by Googlers for their exceptional contributions to open source. Historically, the program was primarily focused on rewarding developers. Over the years the program has evolved, rewarding not just software engineers but contributors from every part of open source: technical writers, user experience and graphic designers, community managers and marketers, mentors and educators, and ops and security experts.


This time around we have 90 winners from an impressive 24 countries spread across five continents: Australia, Austria, Canada, China, Costa Rica, Finland, France, Germany, Ghana, India, Italy, Japan, Mozambique, New Zealand, Nigeria, Poland, Portugal, Singapore, Spain, Sweden, Switzerland, Uganda, the United Kingdom, and the United States.

Although the majority of recipients in this round were recognized for their code contributions, more than 40% of the successful nominations included tooling work, community work, and documentation. (Some contributors were recognized for their work in more than one area.)

Below is the list of current winners who gave us permission to thank them publicly:
Winner – Project
Xihan Li – A Concise Handbook of TensorFlow 2
Alain Schlesser – AMP Plugin for WordPress
Pierre Gordon – AMP Plugin for WordPress
Catherine Houle – AMP Project
Quyen Le Hoang – ANGLE
Kamil Bregula – Apache Airflow
László Kiss Kollár – auditwheel/manylinux
Jack Neus – Chrome OS Release Branching tool
Fabian Henneke – chromium
Matt Godbolt – Compiler Explorer
Sumeet Pawnikar – coreboot
Hal Seki – covid19
Derek Parker – Delve
Alessandro Arzilli – Delve
Matthias Sohn – Eclipse Foundation
Luca Milanesio – Eclipse Foundation
João Távora – eglot
Brad Cowie – faucetsdn
Harri Hohteri – Firebase
Rosário Pereira Fernandes – Firebase
Peter Steinberger – Firebase iOS, CocoaPods
Eduardo Silva – Fluent Bit
Matthias Sohn – Gerrit Code Review
Marco Miller – Gerrit Code Review
Camilla Löwy – GLFW
Akim Demaille – GNU Bison
Josh Bleecher Snyder – Go
Alex Brainman – Go
Richard Musiol – Go
Roger Peppe – Go, CUE, gohack
Daniel Martí – Go, CUE, many individual repos
Juan Linietsky – Godot Engine
Maddy Myers – Google Research Open-COVID-19-Data
Pontus Leitzler – govim, gopls
Paul Jolly – govim, gopls
Parul Raheja – Ground
Pau Freixes – gRPC
Marius Brehler – IREE
George Nachman – iterm2
Kenji Urushima – jsrsasign
Jacques Chester – KNative
Markus Thömmes – Knative Serving
Savitha Raghunathan – Kubernetes
David Anderson – libdwarf
Florian Westphal – Linux kernel
Jonas Bernoulli – magit
Hugo van Kemenade – Many open-source Python projects
Jeff Lockhart – Maps SDK for Android Utility Library
Claude Vervoort – Moodle
Jared McNeill – NetBSD
Nao Yonashiro – nginx-sxg-module
Geoffrey Booth – Node.js
Gus Caplan – Node.js
Guy Bedford – Node.js
Samson Goddy – Open Source Community Africa
Daniel Dyla – OpenTelemetry
Leighton Chen – OpenTelemetry
Shivkanya Andhare – OpenTelemetry
Bartlomiej Obecny – OpenTelemetry
Philipp Wagner – OpenTitan, Ibex, CocoTB
Srijan Reddy – Oppia
Chris S – Oppia
Bastien Guerry – Org mode
Gary Kramlich – Pidgin (Lead Developer)
Hassan Kibirige – plotnine
Abigail Dogbe – PyLadies Ghana
David Hewitt – PyO3
Yuji Kanagawa – PyO3
Mannie Young – Python Ghana
Alex Bradbury – RISC-V LLVM, Ibex, OpenTitan
Lukas Taegert-Atkinson – Rollup.js
Sanil Raut – Shaka Packager
Richard Hallows – stylelint
Luke Edwards – Svelte and Node Libraries
Zoe Carver – Swift Programming Language
Nick Lockwood – SwiftFormat
Priti Desai – Tekton
Sayak Paul – TensorFlow
Lukas Geiger – TensorFlow
Margaret Maynard-Reid – TensorFlow
Gabriel de Marmiesse – TensorFlow Addons
Jared Morgan – The Good Docs Project
Jo Cook – The Good Docs Project, GeoNetwork, Portable GIS, Various Open Source Geospatial Foundation communities
Ricky Mulyawan Suryadi – Tink JNI Examples
Nicholas Marriott – Tmux
Michael Tüxen – usrsctp
Seth Brenith – V8
Ramya Rao – VS Code Go
Philipp Hancke – WebRTC
Jason Donenfeld – WireGuard
Congratulations to our winners! We look forward to your continued support and contributions to open source!

By Maria Tabak and Erin McKean, Program Managers – Google Open Source Programs Office

Kubernetes Ingress Goes GA

The Kubernetes Ingress API, first introduced in late 2015 as an experimental beta feature, has finally graduated as a stable API and is included in the recent 1.19 release of Kubernetes.

The goal of the Ingress API is to provide a simple, uniform means of describing the routing of HTTP or HTTPS traffic from outside a cluster to backend services within a cluster, independent of the Ingress Controller being used. An Ingress Controller is a third-party application, such as NGINX, or an external service, such as the Google Cloud Load Balancer (GCLB), that performs the actual routing of the HTTP(S) traffic. This uniform API, supported by the Ingress Controllers, made it easy to create simple HTTP(S) load balancers; however, most use cases required something more complex.

By early 2019, the Ingress API had remained in beta for close to four years. Beta APIs are not intended to be relied upon for business-critical production use, yet many users were using the Ingress API in some level of production capacity. After much discussion, the Kubernetes Networking Special Interest Group (SIG) proposed a path forward to bring the Ingress API to GA primarily by introducing two changes in Kubernetes 1.18. These were: a new field, pathType, to the Ingress API; and a new Ingress resource type, IngressClass. Combined, they provide a means of guaranteeing a base level of compatibility between different path prefix matching implementations, along with opening the door to further extension by the Ingress Controller developers in a uniform and consistent pattern.

What does this mean for you? You can be assured that the path prefixes you use will be evaluated the same way across Ingress Controller implementations, and the Ingress configuration sprawl across Annotations, ConfigMaps, and CustomResourceDefinitions (CRDs) will be consolidated into a single IngressClass resource type.

pathType

The pathType field specifies one of three ways that an Ingress Object’s path should be interpreted:
  • ImplementationSpecific: Path prefix matching is delegated to the Ingress Controller (IngressClass).
  • Exact: Matches the URL path exactly (case sensitive).
  • Prefix: Matches based on a URL path prefix split by /. Matching is case sensitive and done on an element-by-element basis (e.g., /foo matches /foo and /foo/bar, but not /foobar).

NOTE: ImplementationSpecific was configured as the default pathType in the 1.18 release. In 1.19 the defaulting behavior was removed and it MUST be specified. Paths that do not include an explicit pathType will fail validation.

Pre Kubernetes 1.18:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: minimal-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - http:
      paths:
      - path: /testpath
        backend:
          serviceName: test
          servicePort: 80

Kubernetes 1.19+:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minimal-ingress
spec:
  ingressClassName: external-lb
  rules:
  - http:
      paths:
      - path: /testpath
        pathType: Prefix
        backend:
          service:
            name: test
            port:
              number: 80



These changes not only make room for backwards-compatible configurations with the ImplementationSpecific pathType, but also enable more portable workloads between Ingress Controllers with the Exact or Prefix pathTypes.

IngressClass

The new IngressClass resource takes the place of the various Annotations, ConfigMaps, environment variables, and command-line parameters that you would previously pass to an Ingress Controller directly. Instead, it has a generic parameters field that can be used to reference controller-specific configuration.


Example IngressClass Resource

apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: external-lb
spec:
  controller: example.com/ingress-controller
  parameters:
    apiGroup: k8s.example.com
    kind: IngressParameters
    name: external-lb

Source: https://kubernetes.io/docs/concepts/services-networking/ingress/#ingress-class

In this example, the parameters resource would include configuration options implemented by the example.com/ingress-controller ingress controller. These items would not need to be passed as Annotations or a ConfigMap as they would in versions prior to Kubernetes 1.18.

How do you use IngressClass with an Ingress Object? You may have caught it in the earlier example, but the Ingress resource’s spec has been updated to include an ingressClassName field. This field is similar to the previous kubernetes.io/ingress.class annotation but refers to the name of the corresponding IngressClass resource.

Other Changes

Several other small changes went into effect with the graduation of Ingress to GA in 1.19. A few fields have been remapped/renamed and support for resource backends has been added.

Remapped Ingress Fields

Two of these renames are visible in the side-by-side example above: the backend serviceName field is now service.name, and servicePort is now service.port.name (for named ports) or service.port.number. In addition, the spec-level default backend field spec.backend is now spec.defaultBackend.

Resource Backend

A Resource Backend is essentially a pointer or ObjectRef (apiGroup, kind, name) to another resource in the same namespace. Why would you want to do this? Well, it opens the door to all sorts of future possibilities, such as routing to static object storage hosted in GCS or S3, or another internal form of storage.

NOTE: Resource Backend and Service Backends are mutually exclusive. Only one field can be specified at a time.
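
As a sketch, an Ingress that routes its default backend to a custom resource might look like the following; StorageBucket is a hypothetical custom resource type, while the defaultBackend.resource structure follows the v1 API:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-resource-backend
spec:
  defaultBackend:
    resource:
      apiGroup: k8s.example.com
      kind: StorageBucket
      name: static-assets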

Deprecation Notice

With the graduation of Ingress in the 1.19 release, the older iterations of the API (extensions/v1beta1 and networking.k8s.io/v1beta1) are officially on a clock. Following the Kubernetes Deprecation Policy, the older APIs are slated to be removed in Kubernetes 1.22.

Should you migrate right now (September 2020)? Not yet. The majority of Ingress Controllers have not added support for the new GA Ingress API. Ingress-GCE, the Ingress Controller for Google Kubernetes Engine (GKE), should be updated to support the Ingress GA API in Q4 2020. Keep your eyes on the GKE rapid release channel to stay up to date on it and on Kubernetes 1.19’s availability.

What’s Next for Ingress?

The Ingress API has had a rough road getting to GA. It is an essential resource for many, and the changes that have been introduced help manage that complexity while keeping it relatively lightweight. However, even with the added flexibility, it doesn’t cover a variety of complex use cases.

SIG Network has been working on a new API referred to as “Service APIs” that takes into account the lessons learned from the previous efforts of working on Ingress. These Service APIs are not intended to replace Ingress, but instead complement it by providing several new resources that could enable more complex workflows.


By Bob Killen, Program Manager, Google Open Source Programs Office

MySQL to Cloud Spanner via HarbourBridge



Today we’re announcing that HarbourBridge—an open source toolkit that automates much of the manual work of evaluating and assessing Cloud Spanner—supports migrations from MySQL, in addition to existing support for PostgreSQL. This provides a zero-configuration path for MySQL users to try out Cloud Spanner. HarbourBridge bootstraps early stages of migration, and helps get you to the meaty issues as quickly as possible.

Core capabilities

At its core, HarbourBridge provides an automated workflow for loading the contents of an existing MySQL or PostgreSQL database into Spanner. It requires zero configuration—no manifests or data maps to write. Instead, it imports the source database, builds a Spanner schema, creates a new Spanner database populated with data from the source database, and generates a detailed assessment report. HarbourBridge can either import dump files (from mysqldump or pg_dump) or directly connect to the source database. It is intended for loading databases up to a few tens of GB for evaluation purposes, not full-scale migrations.

Bootstrap early-stage migration

HarbourBridge bootstraps early-stage migration to Spanner by using an existing MySQL or PostgreSQL source database to quickly get you running on Spanner. It generates an assessment report with an overall migration-fitness score for Spanner, a table-by-table analysis of type mappings and a list of features used in the source database that aren't supported by Spanner.

View HarbourBridge as a way to get up and running quickly, so you can focus on critical things like tuning performance and getting the most out of Spanner. You will need to tweak and enhance what HarbourBridge produces—more on that later.

Getting started

HarbourBridge can be used with the Cloud Spanner Emulator, or directly with a Cloud Spanner instance. The Emulator is a local, in-memory emulation of Spanner that implements the same APIs as Cloud Spanner’s production service, and allows you to try out Spanner’s functionality without creating a GCP Project. The HarbourBridge README contains a step-by-step quick-start guide for using the tool with a Cloud Spanner instance.

Together, HarbourBridge and the Cloud Spanner Emulator provide a lightweight, open source toolchain to experiment with Cloud Spanner. Moreover, when you want to proceed to performance testing and tuning, switching to a production Cloud Spanner instance is a simple configuration change.

To get started on using HarbourBridge with the Emulator, follow the Emulator instructions. In particular, start the Emulator using Docker and configure the SPANNER_EMULATOR_HOST environment variable (this tells the Cloud Spanner Client libraries to use the Emulator).
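
Assuming Docker is installed, those two steps look roughly like this (the image name and ports are the ones documented for the Emulator; check the Emulator instructions for current values):

docker run -d -p 9010:9010 -p 9020:9020 gcr.io/cloud-spanner-emulator/emulator

export SPANNER_EMULATOR_HOST=localhost:9010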

Next, install Go and configure the GOPATH environment variable if they are not already part of your environment. Now you can download and install HarbourBridge using:
 
GO111MODULE=on \
go get github.com/cloudspannerecosystem/harbourbridge

It should be installed as $GOPATH/bin/harbourbridge. To use HarbourBridge on a MySQL database, run mysqldump and pipe its output to HarbourBridge:

mysqldump <opts> db | $GOPATH/bin/harbourbridge -driver=mysqldump

where <opts> are the standard options you pass to mysqldump or mysql to specify host, port, etc., and db is the name of the database to dump.
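
For example, to dump a local MySQL database named mydb (a hypothetical name) and convert it in one step:

mysqldump --user=root --password --host=localhost mydb | $GOPATH/bin/harbourbridge -driver=mysqldump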
Similarly, to use HarbourBridge on a PostgreSQL database, run:

 pg_dump <opts> db | $GOPATH/bin/harbourbridge -driver=pg_dump

See the Troubleshooting guide if you run into any issues. In addition to creating a new Spanner database with data from the source database, HarbourBridge also generates a schema file, the assessment report, and a bad data file (if any data is dropped). See Files generated by HarbourBridge.

Sample dump files

If you don’t have ready access to a MySQL or PostgreSQL database, the HarbourBridge GitHub repository has some samples. The files cart.mysqldump and cart.pg_dump contain mysqldump and pg_dump output for a very basic shopping cart application (just two tables, one for products and one for user carts). The files singers.mysqldump and singers.pg_dump contain mysqldump and pg_dump output for a version of the Cloud Spanner singers example. To use HarbourBridge on cart.mysqldump, download the file locally and run:

$GOPATH/bin/harbourbridge -driver=mysqldump < cart.mysqldump

Next steps

The schema created by HarbourBridge provides a starting point for evaluation of Spanner. While it preserves much of the core structure of your MySQL or PostgreSQL schema, data types will be mapped based on the types supported by Spanner, and unsupported features (e.g., functions, sequences, procedures, triggers, and views) will be dropped. See the assessment report as well as HarbourBridge’s Schema conversion documentation for details.

To test Spanner’s performance, you will need to switch from the Emulator to a Cloud Spanner instance. The HarbourBridge quick-start guide provides details of how to set up a Cloud Spanner instance. To have HarbourBridge use your Cloud Spanner instance instead of the Emulator, simply unset the SPANNER_EMULATOR_HOST environment variable (see the Emulator documentation for context).

To optimize your Spanner performance, carefully review choices of primary keys and indexes—see Keys and indexes. Note that HarbourBridge preserves primary keys from the source database but drops all other indexes. This means that the out-of-the-box performance you get from the schema created by HarbourBridge can be significantly impacted. If this is the case, add appropriate Secondary indexes. In addition, consider using Interleaved tables to optimize table layout and improve the performance of joins.
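
As a sketch of what this tuning can look like in Spanner DDL (using table and column names in the spirit of the singers example mentioned above; HarbourBridge itself does not generate these statements):

-- Recreate a secondary index that was dropped during conversion.
CREATE INDEX SingersByName ON Singers(LastName, FirstName);

-- Interleave albums with their parent singer rows so the join is cheap;
-- interleaving must be declared when the table is created.
CREATE TABLE Albums (
  SingerId INT64 NOT NULL,
  AlbumId INT64 NOT NULL,
  Title STRING(MAX)
) PRIMARY KEY (SingerId, AlbumId),
  INTERLEAVE IN PARENT Singers ON DELETE CASCADE;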

Recap

HarbourBridge is an open source toolkit for evaluating and assessing Cloud Spanner using an existing MySQL or PostgreSQL database. It automates many of the manual steps so that you can quickly get to important design, evaluation, and performance issues, such as refining the choice of primary keys, tuning indexes, and other optimizations.

We encourage you to try out HarbourBridge, send feedback, file issues, fork and modify the codebase, and send PRs for fixes and new functionality. We have big plans for HarbourBridge, including the addition of user-guided schema conversion (to customize type mappings and provide a guided exploration of indexing, primary key choices, and use of interleaved tables), as well as support for more databases. HarbourBridge is part of the Cloud Spanner Ecosystem, owned and maintained by the Cloud Spanner user community. It is not officially supported by Google as part of Cloud Spanner.

By Nevin Heintze, Cloud Spanner

Science Journal graduates from Google to Arduino

Science Journal is an open source mobile app that enables students in K-12 classrooms to conduct fun, hands-on science experiments using smart devices. Since its launch in May 2016, Science Journal has gone through quite a journey, from collaborating with rock stars to supporting classrooms through an integration with Google Drive. Now we're pleased to share that Science Journal has graduated from Google and moved over to Arduino, the makers of the popular open source Arduino microcontrollers for students, hobbyists, and professionals around the world. Arduino Science Journal is free, open source, and available for download today on Android and iOS.

We're thrilled to be handing the project to a partner with a long history of supporting open source and education. The Arduino Science Kit for middle school students was developed in 2019 in partnership with Science Journal. The Arduino Science Journal Android source code and iOS source code are already available on GitHub, along with the Science Journal Arduino firmware. We've put a lot of time and energy into making Science Journal a great app for students and science enthusiasts everywhere, and we're confident that it will continue to thrive in its new home.

Although Google's Science Journal apps are still available on the App Store and Play Store today, these apps will no longer be supported after December 11, 2020, at which point Google Drive syncing will stop working and Google's versions of the apps will no longer be available for download. However, existing Science Journal experiments can be exported from Google's Science Journal apps and imported into Arduino Science Journal at any time. You can find more information about this handoff in our Help Center article.

We see this change as a win for Google, Arduino, Science Journal, and for open source overall. Since Science Journal is an app for kids and schools, we wanted to be particularly careful with this transition. By supporting Arduino in releasing their own version of Science Journal and forking our code on GitHub, we were able to effectively hand off the project without transferring any user data or Intellectual Property. We hope this approach can serve as an effective model for future projects that need to reallocate their resources but don't want to let down their users (as we like to say: focus on the user, and all else will follow).

Moving forward, all future updates will be happening through Arduino's versions of the apps. You can stay up-to-date on the Arduino Science Journal website and experiment with their new hands-on activities, and if you have any questions, you can contact them on the Arduino Science Journal Forum.

Although the Science Journal project is moving on from Google, we still think data and scientific literacy are critically important for present and future generations, now more than ever. With the ubiquity of smart devices in classrooms and at home, we think Science Journal remains the perfect solution for parents and teachers looking to provide students with hands-on learning opportunities during this time period. We hope you enjoy using Science Journal as much as we have, and we're excited to see how the project will continue to evolve moving forward.

By Maia Deutsch and the Science Journal Team

Recreating historical streetscapes using deep learning and crowdsourcing

For many, gazing at an old photo of a city can evoke feelings of both nostalgia and wonder. We have Google Street View for places in the present day, but what about places in the past? What was it like to walk through Manhattan in the 1940s? To create a rewarding “time travel” experience for both research and entertainment purposes, Google Research is launching Kartta Labs, an open source, scalable system on Google Cloud and Kubernetes that tackles the difficult problem of reconstructing what cities looked like in the past from scarce historical maps and photos.

Kartta Labs consists of three main parts:
  • A temporal map server, which shows how maps change over time;
  • A crowdsourcing platform, which allows users to upload historical maps of cities, georectify them (i.e., match them to real-world coordinates), and vectorize them;
  • And an upcoming 3D experience platform, which runs on top of the maps, creating the 3D experience by using deep learning to reconstruct buildings in 3D from limited historical image and map data.

Maps & Crowdsourcing

Kartta Labs is a growing suite of open source tools that work together to create a map server with a time dimension, allowing users to populate the service with historically accurate data.
(Animation: the Editor in use)

Warper

The entry point to crowdsourcing is Warper, an open source web app based on MapWarper that allows users to upload historical images of maps and georectify them by finding control points on the historical map and corresponding points on a base map.

Once a user uploads a scanned historical map, Warper makes a best guess of the map’s geolocation by extracting textual information from the map. This initial guess is used to place the map roughly in its location and allow the user to georeference the map pixels by placing pairs of control points on the historical map and a reference map. Given the georeferenced points, the application warps the image such that it aligns well with the reference map.

Warper runs as a Ruby on Rails application using a number of open source geospatial libraries and technologies, including but not limited to PostGIS and GDAL. The resulting maps can be exported in PNG, GeoTIFF, and other open formats. Warper also runs a raster tiles server that serves each georectified map at a tile URL. This raster tile server is used to load the georectified map as a background in the Editor application that is described next.

Editor

Editor is an open source web application, a customized version of the OpenStreetMap editor; customizations include support for a time dimension and integration with the other tools in the Kartta Labs suite. Editor allows users to load the georectified historical maps and trace their geographic features (e.g., building footprints, roads, etc.). This traced data is stored in vector format.

Extracted geometries in vector format, as well as metadata (e.g., address, name, and start or end dates), are stored in a geospatial database that can be queried, edited, styled, and rendered into new maps.

Kartta

Finally, the temporal map front end, Kartta (based on Tegola), visualizes the vector tiles, allowing users to navigate historical maps in space and time. Kartta works like any familiar map application (such as Google Maps), but also has a time slider so the user can choose the year at which they want to see the map. By moving the time slider, the user is able to see how features in the map, such as buildings and roads, change over time.

3D Experience

To actually create the “time traveling” 3D experience, the forthcoming 3D Models module aims to reconstruct the detailed full 3D structures of historical buildings. The module will associate images with maps data, organize these 3D models properly in one repository, and render them on the historical maps with a time dimension.

Preliminary Results

Figure 2 – Bird’s eye view of 3D-reconstructed Chelsea, Manhattan with a time slider
Figure 3 – Street level view of 3D-reconstructed Chelsea, Manhattan

Conclusion

We developed the tools outlined above to facilitate crowdsourcing and tackle the main challenge of insufficient historical data. We hope Kartta Labs acts as a nexus for an active community of developers, map enthusiasts, and casual users that not only utilizes our historical datasets and open source code, but actively contributes to both. The launch of our implementation of the Kartta Labs suite is imminent—keep an eye out on the Google AI blog for that announcement!

By Raimondas Kiveris – Google Research

Google Summer of Code 2020: Learning Together


In the program's 16th year, we are pleased to announce that 1,106 students from 65 countries have successfully completed Google Summer of Code (GSoC) 2020! These student projects are the result of three months of collaboration between students, 198 open source organizations, and over 2,000 mentors from 67 countries.

Over the course of the program, we learned that what mattered most to the students was the ability to learn, mentorship, and community building. From the student evaluations completed at the end of the program, we collected additional statistics and found some common themes. The word cloud below shows what mattered most to our students; the larger the word in the cloud, the more frequently it was used to describe mentors and open source.

Valuable insights collected from the students:
  • 94% of students think that GSoC helped their programming
  • 96% of students would recommend their GSoC mentors
  • 94% of students will continue working with their GSoC organization
  • 97% of students will continue working on open source
  • 27% of students said GSoC has already helped them get a job or internship
The GSoC program has been an invaluable learning journey for students. In tackling real world, real time implementations, they've grown their skills and confidence by leaps and bounds. With the support and guidance from mentors, they’ve also discovered that the value of their work isn’t just for the project at hand, but for the community at large. As newfound contributors, they leave the GSoC program enriched and eager to continue their open source journey.

Throughout its 16 years, GSoC continues to ignite students to carry on their work and dedication to open source, even after their time with the program has ended. In the years to come, we look forward to many of this year’s students paying it forward by mentoring new contributors to their communities or even starting their own open source project. Such lasting impact cannot be achieved without the inspiring work of mentors and organization administrators. Thank you all and congratulations on such a memorable year!

By Romina Vicente, Project Coordinator for the Google Open Source Programs Office

New Case Studies About Google’s Use of Go

Go started in September 2007 when Robert Griesemer, Ken Thompson, and I began discussing a new language to address the engineering challenges we and our colleagues at Google were facing in our daily work. The software we were writing was typically a networked server—a single program interacting with hundreds of other servers—and over its lifetime thousands of programmers might be involved in writing and maintaining it. But the existing languages we were using didn't seem to offer the right tools to solve the problems we faced in this complex environment.

So, we sat down one afternoon and started talking about a different approach.

When we first released Go to the public in November 2009, we didn’t know if the language would be widely adopted or if it might influence future languages. Looking back from 2020, Go has succeeded in both ways: it is widely used both inside and outside Google, and its approaches to network concurrency and software engineering have had a noticeable effect on other languages and their tools.

Go has turned out to have a much broader reach than we had ever expected. Its growth in the industry has been phenomenal, and it has powered many projects at Google.
Credit to Renee French for the gopher illustration.

The earliest production uses of Go inside Google appeared in 2011, the year we launched Go on App Engine and started serving YouTube database traffic with Vitess. At the time, Vitess’s authors told us that Go was exactly the combination of easy network programming, efficient execution, and speedy development that they needed, and that if not for Go, they likely wouldn’t have been able to build the system at all.

The next year, Go replaced Sawzall for Google’s search quality analysis. And of course, Go also powered Google’s development and launch of Kubernetes in 2014.

In the past year, we’ve posted sixteen case studies from end users around the world talking about how they use Go to build fast, reliable, and efficient software at scale. Today, we are adding three new case studies from teams inside Google:
  • Core Data Solutions: Google’s Core Data team replaced a monolithic indexing pipeline written in C++ with a more flexible system of microservices, the majority of them written in Go, that help support Google Search.
  • Google Chrome: Mobile users of Google Chrome in lite mode rely on the Chrome Optimization Guide server to deliver hints for optimizing page loads of well-known sites in their geographic area. That server, written in Go, helps deliver faster page loads and lowered data usage to millions of users daily.
  • Firebase: Google Cloud customers turn to Firebase as their mobile and web hosting platform of choice. After joining Google, the team completely migrated its backend servers from Node.js to Go, for the easy concurrency and efficient execution.
We hope these stories provide the Go developer community with deeper insight into the reasons why teams at Google choose Go, what they use Go for, and the different paths teams took to those decisions.

If you’d like to share your own story about how your team or organization uses Go, please contact us.

By Rob Pike, Distinguished Engineer

Recapping major improvements in Go 1.15 and bringing the Go community together

The Latest Version of Go is Released

In August, the Go team released Go 1.15, marking another milestone of continuous improvements to the language. As always, many of the updates were supported by our community of contributors in collaboration with the engineering team here at Google.

Following our earlier release in February, the latest Go build brings a slew of performance improvements. We’ve made significant changes behind the scenes to the compiler, reducing binary sizes by about 5%, making builds of Go applications around 20% faster, and cutting their memory requirements by about 30% on average.

Go 1.15 also includes several updates to the core library, a few security improvements, and much more; you can dive into the full release notes here. We’re really excited to see how developers like you, ranging from those working on indie projects all the way to enterprise devs, will incorporate these updates into your projects.

A few users have been working with the release candidates ahead of the latest build and were kind enough to share their experience.

Wayne Ashley Berry, a Senior Engineer at Over, shared that “...seeing significant performance improvements in the new releases is incredible!” and, speaking of compiler improvements, showed “one of [their] services compiling ~1.3x faster” after upgrading to Go 1.15.

This mirrored our experience within Google compiling larger Go applications like Kubernetes, which saw 30% memory reductions and 20% faster builds.

These are just a couple of examples of how users have already seen the benefits of Go 1.15. We’re looking forward to what the rest of the gopher community will do with it!

A Better Experience For Go Developers

Over the last few months we’ve also been hard at work improving a few things in the Go ecosystem. In July, the VS Code extension for Go officially joined the Go project and more recently, we rolled out a few updates for our online resources.

We brought a few important changes to pkg.go.dev, a central source of information for Go packages and modules. With these changes came functional improvements to make the browsing experience better and minor tweaks across the site (including a cute new gopher). We also made some changes to go.dev—our hub for Go developers—making it easier to navigate the site and find examples of Go’s use in the enterprise.
The new home page on pkg.go.dev. Credit to Renee French for the gopher illustration.
We’ll be bringing even more improvements to the Go ecosystem in the coming months, so stay tuned!

Our Commitment to Open Source and Google Open Source Live

Most of these changes wouldn’t be possible without contributions from our open source community: submitting CLs to our release process, organizing community meetups, and engaging in discussions about future changes (like generics).

Being part of the open source community is something that the Go team embraces, and Google as a whole works to support every year. It’s through this community that we’re able to iterate on our work with a constant feedback loop and bring new gophers into the Go ecosystem. We’re lucky to have the support of passionate Go advocates, and even get to celebrate the occasional community gopher design!

That being said, this has been a challenging year to gather in person for meetups or larger conferences. However, the gopher community has been incredibly resilient, with many meetups taking place virtually, several of which Go team members have been able to attend.

We’d like to help the entire open source community stay connected. In that vein, we’re excited to announce that Google will host a series of free virtual events, Google Open Source Live, every month through next year! As part of the series, on November 7th, members of the Go team will be sharing community updates, some things we’ve been up to, and a few best practices around getting started with Go.

Visit the official site for the Go Day on Google Open Source Live to learn more about registration and speakers. To keep up to date with the Go team, make sure to follow the official Go Twitter account and visit go.dev, our hub for Go developers.

By Steve Francia, Product Lead, Go Team