Calling all teens: join the latest round of Google Code-in

Yesterday marked the start of the 7th year of Google Code-in (GCI), our pre-university contest introducing students to open source development. GCI takes place entirely online and is open to students between the ages of 13 and 17 around the globe.

Open source software makes up the backbone of the internet, from servers and routers to the phone in your pocket, but it’s a community-driven effort. Google Code-in serves a dual purpose of encouraging young developers and ensuring that open source communities continue to grow.

The concept is simple: students complete bite-sized tasks created by 17 participating open source organizations on topic areas of their choice, including:

  • Coding

  • Documentation/Training

  • Outreach/Research

  • Quality Assurance

  • User Interface

Tasks take an average of 3-5 hours to complete and include the guidance of a mentor to help along the way. Complete one task? Get a digital certificate. Three tasks? Get a Google t-shirt. Mentor organizations pick finalists and grand prize winners from among the 10 students who contributed most to that organization. Finalists get a hoodie, and grand prize winners get a trip to Google headquarters in California where they meet Googlers, mentors and fellow winners.

Google Code-in began with 361 students from 45 countries and has grown to include, in 2015, 980 students from 65 countries. You can read about the experiences of past participants on the Google Open Source blog. Over the last 6 years, more than 3,000 students from 99 countries have successfully completed tasks in GCI.

Student Ahmed Sabie had this to say, “Overall, Google Code-in was the experience of a lifetime. It set me up for the future by teaching me relevant and critical skills necessary in software development.”

Know of a student who might be interested? Learn more about GCI by checking out our rules and FAQs. And please visit our contest site and read the Getting Started Guide. Teachers, you can find additional resources here to help get your students started.



OpenCensus’s journey ahead: enhanced feature set

This is the second half of a series of blog posts about what’s coming next for OpenCensus. The OpenCensus Roadmap is composed of two pillars: increased language, framework, and platform coverage, and the addition of more powerful features.

In this blog post we’re going to discuss the second pillar: new functionality that makes OpenCensus more powerful. This includes dramatically improved sampling capabilities and new types of telemetry that we’re looking to capture.

More Power

Intelligent Sampling

In addition to expanding the list of languages and frameworks that OpenCensus supports out of the box, we’ll also be increasing the usefulness of existing functionality.

Services instrumented with OpenCensus currently sample new requests at random, at a configurable rate; a new request is one that arrives without trace context, usually directly from a client. While this provides an effective view into application latency, developers are mostly interested in traces of particularly slow requests, or of requests that also capture a useful event, such as an exception.
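For a concrete reference point, this is roughly how today's head-based sampling is configured with the Go library (go.opencensus.io/trace); the 1% rate is an arbitrary example value:

```
package main

import (
	"context"

	"go.opencensus.io/trace"
)

func main() {
	// Sample roughly 1% of requests that arrive without trace context.
	trace.ApplyConfig(trace.Config{
		DefaultSampler: trace.ProbabilitySampler(0.01),
	})

	// A span started with no incoming context is subject to the
	// default sampler configured above.
	_, span := trace.StartSpan(context.Background(), "handleRequest")
	defer span.End()
}
```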

We’re adding support for OpenCensus to make deferred sampling decisions - that is, to sample requests after they’ve propagated through several systems, while still preserving the full critical path of the trace. Though the feature is just starting development, we’re focusing on making sampling more intelligent - for example, by triggering traces based on accumulated latency, errors, and debugging events. Expect to hear more about this soon.
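Since that API is still being designed, any code here is necessarily speculative. The sketch below only illustrates the general tail-based technique described above, with every name and threshold invented for illustration: buffer a finished trace briefly, then keep it if it was slow, errored, or recorded a debugging event.

```
package sampling

import "time"

// finishedTrace is a hypothetical summary of a completed, buffered trace;
// OpenCensus does not expose this type today.
type finishedTrace struct {
	latency  time.Duration // end-to-end critical-path latency
	hadError bool          // some span recorded an error
	debugged bool          // some span recorded a debugging event
}

// keepTrace makes a deferred (tail-based) sampling decision: instead of
// deciding when the first span starts, it inspects the whole trace after
// the request has propagated through the system. Thresholds are arbitrary.
func keepTrace(t finishedTrace) bool {
	return t.latency > 500*time.Millisecond || t.hadError || t.debugged
}
```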

New Telemetry, Including Logs and Errors

As we mentioned in our last blog post, our ambition is for OpenCensus to become a ubiquitous observability framework, meaning that collecting traces and stats alone won’t be enough. Correlating traces and tags with logs and errors represents an obvious next step, and we’re currently working through what this might look like. Longer term, this list could grow to include profiles and other signals.

The question of which signals will come next is worthy of its own blog post, and you can expect us to start talking about this more in the coming months.

Server-provided Traces and Metrics

Distributed applications can gain observability into their own performance by instrumenting themselves with OpenCensus; however, visibility into the performance of the external services or APIs that they call is still limited. For example, imagine a web service that calls into Google Cloud Platform's Cloud Bigtable service: the application developer has visibility into their client-side traces, but cannot tell how much of the time was spent inside Cloud Bigtable versus on the network. We're working on adding server-side traces and metrics: essentially a way for service providers to expose summarized server-side performance data to the clients that call them.

Cluster-wide z-pages

Today, OpenCensus provides a stand-alone facility called z-pages, which includes an embedded web server and displays configuration parameters and trace information in real time, as captured from any OpenCensus libraries running on the same host. By accessing a z-page, developers can configure the sampling rate of the local instance, or view traces, tags, and stats as they are processed in real time.
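For reference, this is roughly how the current per-host z-pages are served with the Go library's go.opencensus.io/zpages package; the port and path prefix below are arbitrary example values:

```
package main

import (
	"log"
	"net/http"

	"go.opencensus.io/zpages"
)

func main() {
	mux := http.NewServeMux()
	// Registers handlers such as /debug/tracez and /debug/rpcz, which
	// display live trace and stats data for this process only.
	zpages.Handle(mux, "/debug")
	log.Fatal(http.ListenAndServe("127.0.0.1:8081", mux))
}
```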

Longer term, we wish to extend this functionality into cluster-wide z-pages, which could provide the same experience as today's z-pages, aggregated over all instances of a particular service. We're still discussing different implementation options, and whether we can tie this into other aggregation-related workstreams that we're already pursuing.

Wrapping up

Does the strategy and roadmap above resonate with what you'd want from the OpenCensus libraries? We'd love to hear your ideas and what you'd like to see prioritized.

As we mentioned in our last post, none of this is possible without the support and participation from the community. Check out our repo and start contributing. No contribution or idea is too small. Join other developers and users on the OpenCensus Gitter channel. We’d love to hear from you.

By Pritam Shah and Morgan McLean, Census team

Introducing Git protocol version 2


Today we announce Git protocol version 2, a major update of Git's wire protocol (how clones, fetches and pushes are communicated between clients and servers). This update removes one of the most inefficient parts of the Git protocol and fixes an extensibility bottleneck, unblocking the path to more wire protocol improvements in the future.

The protocol version 2 spec can be found here. The main improvements are server-side filtering of references and a more extensible request format.

The main motivation for the new protocol was to enable server-side filtering of references (branches and tags). Prior to protocol v2, servers responded to all fetch commands with an initial reference advertisement listing every reference in the repository. This complete listing is sent even when a client only cares about updating a single branch, e.g. `git fetch origin master`. For repositories that contain hundreds of thousands of references (the Chromium repository has over 500,000 branches and tags), the server could end up sending tens of megabytes of data that get ignored. This typically dominates both time and bandwidth during a fetch, especially when you are updating a branch that's only a few commits behind the remote, or when you are only checking whether you are up to date, resulting in a no-op fetch.

We recently rolled out support for protocol version 2 at Google and have seen a 3x performance improvement for no-op fetches of a single branch on repositories containing 500,000 references. Protocol v2 has also enabled an 8x reduction in overhead (non-packfile) bytes sent from googlesource.com servers. The majority of this improvement comes from filtering the references advertised by the server down to the refs the client has expressed interest in.

Getting over the hurdles

The Git project has tried on a number of occasions over the years to either limit the initial ref advertisement or move to a new protocol altogether, but kept running into two problems: (1) the initial request is rigid and does not include a field that could be used to request that new servers modify their response without breaking compatibility with existing servers, and (2) error handling is not well enough defined to allow safely using a new protocol that existing servers do not understand, with a quick fallback to the old protocol. To migrate to a new protocol version, we needed to find a side channel which existing servers would ignore, but which could be used to safely communicate with newer servers.

There are three main transports used to speak Git's wire protocol (git://, ssh://, and https://), and the side channel used to request v2 must communicate in such a way that an older server ignores the additional data and doesn't crash. The http transport was the easiest: we can simply include an additional http header in the request ("Git-Protocol: version=2"). The ssh transport is a bit more difficult, as it requires an environment variable ("GIT_PROTOCOL=version=2") to be set on the remote end. This is more challenging because it requires server administrators to configure sshd to accept the new environment variable on their server. The most difficult transport is the anonymous Git transport (git://).
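As a sketch of the https side channel, the snippet below sets the header by hand against the standard smart-HTTP info/refs endpoint; real clients let Git do this for them, and the repository URL is just the example used later in this post:

```
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// The smart-HTTP endpoint that serves the ref advertisement (or,
	// under v2, a capability advertisement instead).
	url := "https://chromium.googlesource.com/chromium/src.git" +
		"/info/refs?service=git-upload-pack"

	req, err := http.NewRequest("GET", url, nil)
	if err != nil {
		panic(err)
	}
	// The side channel: older servers ignore the unknown header, while
	// v2-aware servers switch to the new protocol.
	req.Header.Set("Git-Protocol", "version=2")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	if len(body) > 200 {
		body = body[:200]
	}
	fmt.Printf("%s\n", body) // start of the (much shorter) v2 response
}
```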

Initial requests made to a server using the anonymous Git transport take the form of a single packet-line which includes the requested service (git-upload-pack for fetches and git-receive-pack for pushes) and the repository, followed by a NUL byte. Later, virtualization support was added: a hostname parameter could be tacked on, terminated by a NUL byte: `0033git-upload-pack /project.git\0host=myserver.com\0`. Ideally we'd be able to request v2 by adding a new parameter in the same manner as the hostname: `003dgit-upload-pack /project.git\0host=myserver.com\0version=2\0`. Unfortunately, due to a bug introduced in 2006, we aren't able to place any extra arguments (separated by NULs) other than the host, because the parsing of those arguments would enter an infinite loop. When this bug was fixed in 2009, a check was put in place to disallow extra arguments, so that new clients wouldn't trigger this bug in older servers.

Fortunately, that check doesn't notice if we send additional request arguments hidden behind a second NUL byte, which was pointed out back in 2009. This allows requests structured like `003egit-upload-pack /project.git\0host=myserver.com\0\0version=2\0`. By placing the version information behind a second NUL byte, we can skirt around both the infinite-loop bug and the explicit disallowance of extra arguments beyond the hostname. Only newer servers know to look for additional information hidden behind two NUL bytes, and older servers won't croak.
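To make those length prefixes concrete, here is a minimal pkt-line encoder sketch; fed the two payloads from the text, it reproduces the `0033` and `003e` prefixes shown above:

```
package main

import "fmt"

// pktLine prefixes a payload with its total length (the payload plus the
// four hex digits of the prefix itself) as four lowercase hex digits.
func pktLine(payload string) string {
	return fmt.Sprintf("%04x%s", len(payload)+4, payload)
}

func main() {
	// Pre-v2 request: service and repository, then host, NUL-terminated.
	v0 := pktLine("git-upload-pack /project.git\x00host=myserver.com\x00")
	// v2 request: extra arguments hidden behind a second NUL byte.
	v2 := pktLine("git-upload-pack /project.git\x00host=myserver.com\x00\x00version=2\x00")
	fmt.Printf("%q\n%q\n", v0, v2) // prints "0033..." then "003e..."
}
```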

Now, in every case, a client can issue a request to use v2 through a transport-specific side channel: v2 servers respond using the new protocol, while older servers ignore the side channel and simply respond with a ref advertisement.

Try it for yourself

To try out protocol version 2 for yourself, you'll need an up-to-date version of Git (support for v2 was recently merged to Git's master branch and is expected to be part of Git 2.18) and a v2-enabled server (repositories on googlesource.com and Cloud Source Repositories are v2 enabled). If you enable packet tracing and run the `ls-remote` command querying for a single branch, you can see that the server sends a much smaller set of references when using protocol version 2:

```
# Using the original wire protocol
GIT_TRACE_PACKET=1 git -c protocol.version=0 ls-remote https://chromium.googlesource.com/chromium/src.git master

# Using protocol version 2
GIT_TRACE_PACKET=1 git -c protocol.version=2 ls-remote https://chromium.googlesource.com/chromium/src.git master
```
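If the results look good, you can opt in persistently by setting the same `protocol.version` configuration variable shown above, e.g. `git config --global protocol.version 2`, and Git will request v2 through the appropriate side channel on every fetch.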

By Brandon Williams, Git-core Team

Open-sourcing Google Earth Enterprise

(originally posted on the Geo Developers blog)

We are excited to announce that we are open-sourcing Google Earth Enterprise (GEE), the enterprise product that allows developers to build and host their own private maps and 3D globes. With this release, the GEE Fusion, GEE Server, and GEE Portable Server source code (all 470,000+ lines of it!) will be published on GitHub under the Apache 2.0 license in March.
Originally launched in 2006, Google Earth Enterprise provides customers the ability to build and host private, on-premises versions of Google Earth and Google Maps. In March 2015, we announced the deprecation of the product and the end of all sales. To provide ample time for customers to transition, we have provided a two-year maintenance period ending on March 22, 2017. During this maintenance period, product updates have shipped regularly and technical support has been available to licensed customers.

Feedback is important to us, and we've heard from our customers that GEE remains in use in mission-critical applications; many customers have not transitioned to other technologies. Open-sourcing GEE allows our customer community to continue to improve and evolve the project in perpetuity. Note that the implementations for the Google Earth Enterprise Client, the Google Maps JavaScript API V3, and the Google Earth API will not be open sourced. The Enterprise Client will continue to be made available and updated. However, since GEE Fusion and GEE Server are being open-sourced, the imagery and terrain quadtree implementations used in these products will allow third-party developers to build viewers that can consume GEE Server databases.

We're thankful for the help of our GEE partners in preparing the codebase to be migrated to GitHub. It's a lot of work and we cannot do it without them. It is our hope that their passion for GEE and GEE customers will serve to lead the project into its next chapter.

Looking forward, GEE customers can use Google Cloud Platform (GCP) instead of legacy on-premises enterprise servers to run their GEE instances. For many customers, GCP provides scalable and affordable infrastructure as a service where they can securely run GEE. Other GEE customers will be able to continue operating the software in disconnected environments. However, we believe the advantages of moving even some workloads to GCP will become apparent, such as processing large imagery or terrain assets on GCP that can be downloaded and brought to internal networks, or standing up user-facing Portable Globe Factories.

Moreover, GCP is increasingly used as a source for geospatial data. Google's Earth Engine has made available over a petabyte of raster datasets which are readily accessible and available to the public on Google Cloud Storage. Additionally, Google uses Cloud Storage to provide data to customers who purchase Google Imagery today. Having access to massive amounts of geospatial data, on the same platform as your flexible compute and storage, makes generating high-quality Google Earth Enterprise Databases and Portables easier and faster than ever.

We will be sharing a series of white papers and other technical resources to make it as frictionless as possible to get open source GEE up and running on Google Cloud Platform. We are excited about the possibilities that open-sourcing enables, and we trust this is good news for our community. We will be sharing more information when we launch the code in March on GitHub. For general product information, visit the Google Earth Enterprise Help Center. Review the essential and advanced training for how to use Google Earth Enterprise, or learn more about the benefits of Google Cloud Platform.