Author Archives: Open Source Programs Office

The Apache Beam Community in 2019

2019 has already been a busy time for Apache Beam. The ASF blog featured our approach to community building, and we've had more Beam meetups around the world. Apache Beam also received the Technology of the Year Award from InfoWorld.
As these events happened, we were building up to the 20th anniversary of the Apache Software Foundation. The contributions of the Beam community were part of Maximilian Michels' blog post on the success of the ASF's open source development model:

As the founder of the first Beam meetup in London back in 2017, seeing the community flourish on a larger and worldwide scale is something that makes me happy. And we have come quite a long way since 2017, both in terms of geographical spread:



As well as in numbers:



All of this culminates in two Beam Summits this year: one already held a few weeks ago in Berlin, and another that will take place in a few weeks in Las Vegas, where we worked together with Apache and the ApacheCon team.

In that spirit, let's have a more detailed overview of the things that have happened, what the next few months look like, and how we can foster even more community growth.

Meetups

We've had a flurry of activity, with several meetups in the planning process and more popping up globally over time. As diversity of contributors is a core ASF value, this geographic spread is exciting for the community. Here's a picture from the latest Apache Beam meetup organized at Lyft in San Francisco:



We have more Bay Area meetups coming soon, and the community is looking into kicking off meetups in Toronto and New York! In Europe, London had its first meetup of 2019 at the start of April, as did Stockholm at the start of May.
Meetup groups are becoming active in Berlin and New York also, so stay tuned for events there and more meetups internationally! If you are interested in starting your own meetup, feel free to reach out! Good places to start include our Slack channel, the dev and user mailing lists, or the Apache Beam Twitter. Even if you can’t travel to these meetups, you can stay informed on the happenings of the community. The talks and sessions from previous conferences and meetups are archived on the Apache Beam YouTube channel. If you want your session added to the channel, don’t hesitate to get in touch!

Summits

The first summit of the year was held in Berlin this past June. You can read about the inaugural edition of the Beam Summit Europe here. At these summits, you have the opportunity to meet with other Apache Beam creators and users, get expert advice, learn from the speaker sessions, and participate in workshops. We are proud to say that the Summit doubled in size this year with attendees from 24 countries across 4 continents.

You can find resources from this year’s Summit here:
  • 📹 The recordings can be found on our YouTube channel.
  • 📊 The presentations from the Summit are available via the website and in this folder.
  • We strongly encourage you to get involved again this year! You can still sign up for the upcoming summit in North America.
  • 🎫 If you want to secure your ticket to attend the Beam Summit North America 2019, check out the ApacheCon website.
  • 🔈 In case you want to get involved in speaking at events, do not hesitate to contact us via email or Twitter.

Why community engagement matters

Why we need a strong Apache Beam community:
  • We’re gaining lots of code contributions and need committers to review them.
  • We want people to feel a sense of ownership of the project. By fostering this level of engagement, we make the work even more exciting.
  • A healthy community has a further reach and leads to more growth. More hours can be contributed to the project as we spread the work and ownership.
Why we are organizing these summits:
  • We’d like to give folks a place to meet and share ideas.
  • We know that offline interactions often change the nature of the online ones in a positive manner.
  • Building an active and diverse community is part of the Apache Way. These summits provide an opportunity for us to engage people from different locations, companies, and backgrounds.
By Matthias Baetens, Google Developer Expert for Cloud, Apache Beam committer and community organiser

Announcing Docsy: A Website Theme for Technical Documentation

Have you ever struggled with the process of creating documentation for an open source project? Do you have an open source project that's outgrown its README? Open source projects need great docs to succeed, but great open source doc sites aren't always easy to produce and share.

Google supports over 2000 open source projects, and there has been growing demand from these projects for tooling and guidance to help them write and publish their documentation. To meet this need we created Docsy: a documentation website theme with templates and guidance, which we’re open sourcing so that anyone can use it and help improve the tool.

Docsy builds on existing open source tools, like Hugo, and our experience with open source docs, providing a fast and easy way to stand up an OSS documentation website with features specifically designed to support technical documentation. Special features include everything from site navigation to multi-language support – with easy site deployment options provided by Hugo. We also created guidance on how to add additional pages, structure your documentation, and accept community contributions, all with the goal of letting you focus on creating great content.

Who’s using it?

The Kubeflow, Knative, and Agones websites were built using the Docsy theme, with more projects in the pipeline. We’ve also created an example site that uses lots of Docsy features for you to explore and copy.

Ready to get started?

Visit the Docsy site to find out how to create your first site with Docsy! You can either use Docsy like a regular Hugo theme, or clone our example site. Docsy is an open source project—of course—and we welcome your issues, suggestions, and contributions!

Come Meet the Google Open Source Team at OSCON!

Google Cloud is proud to be a Diamond Sponsor at OSCON, and we’re excited for another year of connecting, learning, and sharing with the open source community! Google is deeply grateful for all of your amazing open source efforts, so to celebrate, our booth will have an Open Gratitude wall where we will acknowledge your contributions, and where we encourage you to express your gratitude to those who have helped you in open source!

Once you’ve recognized your open source heroes on the Open Gratitude wall, stick around at the Google Open Source booth to learn about topics such as open source governance, documentation, open source in ML and gaming, encouraging non-code contributions, and about Google’s open source outreach programs in general. At our booth sessions you can also explore open source projects such as Kubernetes, Istio, Go, and Beam (as well as other Apache projects). Booth office hours run from 10:15am to 7pm Wednesday, July 17, and from 10:15am to 4:10pm on Thursday, July 18. The full schedule will be posted at the booth—please come by and check it out!

In addition to the events at the booth, the Google open source team has two workshops on Tuesday, July 16:
This half-day workshop kicks off with an overview of research-backed documentation best practices. Andrew Chen, Erin McKean, and Aizhamal Nurmamat kyzy lead you through a hands-on exercise in which you'll create the skeleton of a ready-to-deploy documentation website for your open source project.
Paris Pittman takes you through the ins and outs of the Kubernetes contributor community so you can land your first PR. You'll learn about SIGs, the GitHub workflow, its automation and continuous integration (CI), setting up your dev environment, and much more. Stick around until the end, and you'll have time to work on your first PR with the help of current contributors.
We also hope you attend the main conference sessions presented by Googlers, especially the keynotes on Wednesday (Built to last: What Google and Microsoft have learned growing open source communities) and Thursday (Be a Docs Star), as well as the Googler-led sessions on Wednesday and Thursday.
As part of our commitment to creating a diverse and inclusive community, we’ve redirected our conference swag budget into diversity scholarships. (We believe you’d prefer to have more interesting conversations with a wider range of people over another pair of socks!) But if you are looking for a souvenir of your time in Portland there will be a special Portland-themed sticker featuring Pancakes, the (extremely adorable) gRPC mascot, and we encourage projects to take and leave stickers in our sticker-swap space!

OSCON is one of the highlights of the year for those of us who love open source—we’re thrilled to be able to share what we’ve learned with you, and to learn what you’re interested in and excited about (and also what you think could improve). See you in Portland!

Truth 1.0: Fluent Assertions for Java and Android Tests

Software testing is important—and sometimes frustrating. The frustration can come from working on innately hard domains, like concurrency, but too often it comes from a thousand small cuts:
assertEquals("Message has been sent", getString(notification, EXTRA_BIG_TEXT));
assertTrue(
    getString(notification, EXTRA_TEXT)
        .contains("Kurt Kluever <kak@google.com>"));
The two assertions above test almost the same thing, but they are structured differently. The difference in structure makes it hard to identify the difference in what's being tested.
A better way to structure these assertions is to use a fluent API:
assertThat(getString(notification, EXTRA_BIG_TEXT))
    .isEqualTo("Message has been sent");
assertThat(getString(notification, EXTRA_TEXT))
    .contains("Kurt Kluever <kak@google.com>");
A fluent API naturally leads to other advantages:
  • IDE autocompletion can suggest assertions that fit the value under test, including rich operations like containsExactly(permission.SEND_SMS, permission.READ_SMS).
  • Failure messages can include the value under test and the expected result. Contrast this with the assertTrue call above, which lacks a failure message entirely.
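The richer failure messages come from the fluent structure itself: because the entry point captures the value under test first, every assertion method has that value available when it builds its message. Here is a toy Python sketch of the idea (illustrative only, not Truth's actual implementation):

```python
# Toy fluent assertion API. Because assert_that() captures the actual
# value up front, every assertion can report it in the failure message
# automatically -- no hand-written message needed at each call site.
class Subject:
    def __init__(self, actual):
        self.actual = actual

    def is_equal_to(self, expected):
        if self.actual != expected:
            raise AssertionError(
                f"expected: {expected!r}\nbut was: {self.actual!r}")
        return self

    def contains(self, part):
        if part not in self.actual:
            raise AssertionError(
                f"expected to contain: {part!r}\nbut was: {self.actual!r}")
        return self

def assert_that(actual):
    return Subject(actual)

assert_that("Message has been sent").is_equal_to("Message has been sent")
assert_that("Kurt Kluever <kak@google.com>").contains("kak@google.com")
```

Returning `self` from each assertion is what makes the API chainable, and the single `assert_that` entry point is what lets an IDE suggest only the assertions that fit the captured value.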
Google's fluent assertion library for Java and Android is Truth. We're happy to announce that we've released Truth 1.0, which stabilizes our API after years of fine-tuning.



Truth started in 2011 as a Googler's personal open source project. Later, it was donated back to Google and cultivated by the Java Core Libraries team, the people who bring you Guava.
You might already be familiar with assertion libraries like Hamcrest and AssertJ, which provide similar features. We've designed Truth to have a simpler API and more readable failure messages. For example, here's a failure message from AssertJ:
java.lang.AssertionError:
Expecting:
  <[year: 2019
month: 7
day: 15
]>
to contain exactly in any order:
  <[year: 2019
month: 6
day: 30
]>
elements not found:
  <[year: 2019
month: 6
day: 30
]>
and elements not expected:
  <[year: 2019
month: 7
day: 15
]>
And here's the equivalent message from Truth:
value of:
    iterable.onlyElement()
expected:
    year: 2019
    month: 6
    day: 30

but was:
    year: 2019
    month: 7
    day: 15
For more details, read our comparison of the libraries, and try Truth for yourself.

Also, if you're developing for Android, try AndroidX Test. It includes Truth extensions that make assertions even easier to write and failure messages even clearer:
assertThat(notification).extras().string(EXTRA_BIG_TEXT)
    .isEqualTo("Message has been sent");
assertThat(notification).extras().string(EXTRA_TEXT)
    .contains("Kurt Kluever <kak@google.com>");
Coming soon: Kotlin users of Truth can look forward to Kotlin-specific enhancements.
By Chris Povirk, Java Core Libraries

Google’s robots.txt Parser is Now Open Source

Originally posted on the Google Webmaster Central Blog

For 25 years, the Robots Exclusion Protocol (REP) was only a de-facto standard. This sometimes had frustrating implications. On one hand, for webmasters, it meant uncertainty in corner cases, like when their text editor included BOM characters in their robots.txt files. On the other hand, for crawler and tool developers, it brought uncertainty as well; for example, how should they deal with robots.txt files that are hundreds of megabytes in size?

Today, we announced that we're spearheading the effort to make the REP an internet standard. While this is an important step, it means extra work for developers who parse robots.txt files.

We're here to help: we open sourced the C++ library that our production systems use for parsing and matching rules in robots.txt files. This library has been around for 20 years and contains pieces of code that were written in the '90s. The library has evolved since then; we learned a lot about how webmasters write robots.txt files and the corner cases we had to cover, and we added what we learned over the years to the internet draft when it made sense.

We also included a testing tool in the open source package to help you test a few rules. Once built, the usage is very straightforward:

robots_main <robots.txt content> <user_agent> <url>

If you want to check out the library, head over to our GitHub repository for the robots.txt parser. We'd love to see what you can build using it! If you built something using the library, drop us a comment on Twitter, and if you have comments or questions about the library, find us on GitHub.
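To give a feel for the matching semantics the draft describes, here is a simplified toy matcher in Python. This is an illustrative sketch of the REP's longest-match rule, not the open sourced C++ library's actual logic:

```python
# Toy robots.txt rule matcher illustrating the REP's longest-match rule:
# when multiple allow/disallow rules match a URL path, the most specific
# (longest) rule wins, and no match defaults to allowed.
def parse_rules(robots_txt, user_agent):
    """Collect (directive, path) rules from groups matching user_agent."""
    rules, active = [], False
    for line in robots_txt.splitlines():
        line = line.split('#', 1)[0].strip()   # strip comments
        if not line or ':' not in line:
            continue
        key, _, value = line.partition(':')
        key, value = key.strip().lower(), value.strip()
        if key == 'user-agent':
            active = value == '*' or value.lower() in user_agent.lower()
        elif active and key in ('allow', 'disallow'):
            rules.append((key, value))
    return rules

def is_allowed(rules, path):
    """Longest matching rule wins; no match defaults to allowed."""
    verdict, best_len = True, -1
    for directive, rule_path in rules:
        if rule_path and path.startswith(rule_path) and len(rule_path) > best_len:
            verdict, best_len = directive == 'allow', len(rule_path)
    return verdict

robots = """
User-agent: *
Disallow: /private/
Allow: /private/public-page.html
"""
rules = parse_rules(robots, "Googlebot")
print(is_allowed(rules, "/private/secret.html"))       # False
print(is_allowed(rules, "/private/public-page.html"))  # True
```

The second URL is allowed because the Allow rule matches with a longer prefix than the Disallow rule, which is exactly the kind of corner case a shared, well-tested parser helps everyone handle consistently.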

Posted by Edu Pereda, Lode Vandevenne, and Gary, Search Open Sourcing team

Security Crawl Maze: An Open Source Tool to Test Web Security Crawlers

Scanning modern web applications for security vulnerabilities can be a difficult task, especially if they are built with JavaScript frameworks, which is why crawlers have to use a multi-stage crawling approach to discover all the resources on modern websites.

Living in times of dynamically changing specifications and the constant appearance of new frameworks, we often have to adjust our crawlers so that they can discover the new ways in which developers link resources from their applications. The issue we face in such situations is measuring whether changes to the crawling logic improve effectiveness. While working on replacing a crawler for a web security scanner that has been in use for a number of years, we found we needed a universal test bed, both to test our current capabilities and to discover cases that are currently missed. Inspired by Firing Range, today we’re announcing the open-source release of Security Crawl Maze – a universal test bed for web security crawlers.

Security Crawl Maze is a simple Python application built with the Flask framework that contains a wide variety of cases for the ways in which a web-based application can link other resources on the Web. We also provide a Dockerfile which allows you to build a Docker image and deploy it to an environment of your choice. While the initial release covers the most important cases for HTTP crawling, it’s a subset of what we want to achieve in the near future. You’ll soon be able to test whether your crawler is able to discover known files (robots.txt, sitemap.xml, etc.) or crawl modern single-page applications written with the most popular JS frameworks (Angular, Polymer, etc.).
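To illustrate the shape of such a test bed, here is a minimal Flask sketch of how link-discovery cases can be served. The routes and payloads below are hypothetical examples for illustration, not the project's actual test cases:

```python
# Minimal sketch of a crawler test bed: each endpoint links a target
# resource in a different way, so a crawler's coverage can be measured
# by which targets it manages to request. Hypothetical example routes.
from flask import Flask

app = Flask(__name__)

@app.route("/html/anchor")
def html_anchor():
    # The simplest case: a resource linked via a plain <a href> tag,
    # discoverable by any static HTML crawler.
    return '<a href="/test/anchor-found">link</a>'

@app.route("/javascript/location")
def javascript_location():
    # A harder case: the link only appears after executing JavaScript,
    # so a purely static crawler will miss /test/js-found.
    return '<script>window.location = "/test/js-found";</script>'

if __name__ == "__main__":
    app.run(port=8080)
```

A crawler run against such an app yields a simple scorecard: the set of `/test/...` targets it requested versus the full set the test bed links.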

Security crawlers are mostly interested in code coverage, not in content coverage, which means the deduplication logic has to be different. This is why we plan to add cases which allow for testing if your crawler deduplicates URLs correctly (e.g. blog posts, e-commerce). If you believe there is something else, feel free to add a test case for it – it’s super simple! Code is available on GitHub and through a public deployed version.

We hope that others will find it helpful in evaluating the capabilities of their crawlers, and we certainly welcome any contributions and feedback from the broader security research community.

By Maciej Trzos, Information Security Engineer

Introducing TensorNetwork, an Open Source Library for Efficient Tensor Calculations

Originally posted on the Google AI Blog.

Many of the world's toughest scientific challenges, like developing high-temperature superconductors and understanding the true nature of space and time, involve dealing with the complexity of quantum systems. What makes these challenges difficult is that the number of quantum states in these systems is exponentially large, making brute-force computation infeasible. To deal with this, data structures called tensor networks are used. Tensor networks let one focus on the quantum states that are most relevant for real-world problems—the states of low energy, say—while ignoring other states that aren't relevant. Tensor networks are also increasingly finding applications in machine learning (ML). However, there remain difficulties that prohibit them from widespread use in the ML community: 1) a production-level tensor network library for accelerated hardware has not been available to run tensor network algorithms at scale, and 2) most of the tensor network literature is geared toward physics applications and creates the false impression that expertise in quantum mechanics is required to understand the algorithms.

In order to address these issues, we are releasing TensorNetwork, a brand new open source library to improve the efficiency of tensor calculations, developed in collaboration with the Perimeter Institute for Theoretical Physics and X. TensorNetwork uses TensorFlow as a backend and is optimized for GPU processing, which can enable speedups of up to 100x when compared to work on a CPU. We introduce TensorNetwork in a series of papers, the first of which presents the new library and its API, and provides an overview of tensor networks for a non-physics audience. In our second paper we focus on a particular use case in physics, demonstrating the speedup that one gets using GPUs.

How are Tensor Networks Useful?

Tensors are multidimensional arrays, categorized in a hierarchy according to their order: e.g., an ordinary number is a tensor of order zero (also known as a scalar), a vector is an order-one tensor, a matrix is an order-two tensor, and so on. While low-order tensors can easily be represented by an explicit array of numbers or with a mathematical symbol such as T_ijnklm (where the number of indices represents the order of the tensor), that notation becomes very cumbersome once we start talking about high-order tensors. At that point it's useful to start using diagrammatic notation, where one simply draws a circle (or some other shape) with a number of lines, or legs, coming out of it—the number of legs being the same as the order of the tensor. In this notation, a scalar is just a circle, a vector has a single leg, a matrix has two legs, etc. Each leg of the tensor also has a dimension, which is the size of that leg. For example, a vector representing an object's velocity through space would be a three-dimensional, order-one tensor.
Diagrammatic notation for tensors.
The benefit of representing tensors in this way is to succinctly encode mathematical operations, e.g., multiplying a matrix by a vector to produce another vector, or multiplying two vectors to make a scalar. These are all examples of a more general concept called tensor contraction.
Diagrammatic notation for tensor contraction. Vector and matrix multiplication, as well as the matrix trace (i.e., the sum of the diagonal elements of a matrix), are all examples.
These are also simple examples of tensor networks, which are graphical ways of encoding the pattern of tensor contractions of several constituent tensors to form a new one. Each constituent tensor has an order determined by its own number of legs. Legs that are connected, forming an edge in the diagram, represent contraction, while the number of remaining dangling legs determines the order of the resultant tensor.
Left: The trace of the product of four matrices, tr(ABCD), which is a scalar. You can see that it has no dangling legs. Right: Three order-three tensors being contracted with three legs dangling, resulting in a new order-three tensor.
While these examples are very simple, the tensor networks of interest often represent hundreds of tensors contracted in a variety of ways. Describing such a thing would be very obscure using traditional notation, which is why the diagrammatic notation was invented by Roger Penrose in 1971.
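The contractions in these diagrams map directly onto code. As an illustrative sketch in plain NumPy (not the TensorNetwork API itself), each repeated index letter in an einsum expression is a connected leg that gets summed over, and each remaining letter is a dangling leg of the result:

```python
# Tensor contractions from the diagrams above, written with NumPy's einsum.
import numpy as np

rng = np.random.default_rng(0)
A, B, C, D = (rng.standard_normal((3, 3)) for _ in range(4))
v = rng.standard_normal(3)

# Matrix-vector product: one leg of A contracts with v, one leg dangles,
# so the result is an order-one tensor (a vector).
mv = np.einsum('ij,j->i', A, v)

# Inner product of two vectors: all legs contracted, a scalar remains.
s = np.einsum('i,i->', v, v)

# Trace of a product of four matrices, tr(ABCD): a closed loop of legs
# with nothing dangling, so the result is a scalar.
t = np.einsum('ij,jk,kl,li->', A, B, C, D)

assert np.allclose(mv, A @ v)
assert np.isclose(t, np.trace(A @ B @ C @ D))
```

The einsum strings are a one-dimensional spelling of the diagrams: shared letters are edges, and the letters after `->` are the dangling legs.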

Tensor Networks in Practice

Consider a collection of black-and-white images, each of which can be thought of as a list of N pixel values. A single pixel of a single image can be one-hot-encoded into a two-dimensional vector, and by combining these pixel encodings together we can make a 2^N-dimensional one-hot encoding of the entire image. We can reshape that high-dimensional vector into an order-N tensor, and then add up all of the tensors in our collection of images to get a total tensor T_{i1,i2,...,iN} encapsulating the collection.
This sounds like a very wasteful thing to do: encoding images with about 50 pixels in this way would already take petabytes of memory. That's where tensor networks come in. Rather than storing or manipulating the tensor T directly, we instead represent T as the contraction of many smaller constituent tensors in the shape of a tensor network. That turns out to be much more efficient. For instance, the popular matrix product state (MPS) network would write T in terms of N much smaller tensors, so that the total number of parameters is only linear in N, rather than exponential.
The high-order tensor T is represented in terms of many low-order tensors in a matrix product state tensor network.
It's not obvious that large tensor networks can be efficiently created or manipulated while consistently avoiding the need for a huge amount of memory. But it turns out that this is possible in many cases, which is why tensor networks have been used extensively in quantum physics and, now, in machine learning. Stoudenmire and Schwab used the encoding just described to make an image classification model, demonstrating a new use for tensor networks. The TensorNetwork library is designed to facilitate exactly that kind of work, and our first paper describes how the library functions for general tensor network manipulations.
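The parameter-count savings can be made concrete with a small sketch in plain NumPy (not the TensorNetwork library); the MPS core shapes and the toy contraction below are illustrative assumptions:

```python
# Why a matrix product state (MPS) is cheaper than the full tensor:
# the full order-N tensor has d**N entries (exponential in N), while an
# MPS uses N cores of shape (chi, d, chi) (linear in N).
import numpy as np

N, d, chi = 30, 2, 4   # number of legs, leg dimension, bond dimension

full_tensor_params = d ** N            # exponential in N
mps_params = N * d * chi * chi         # linear in N
print(full_tensor_params)  # 1073741824
print(mps_params)          # 960

# Contracting a small periodic MPS back into the full tensor (feasible
# only for small N): neighbouring bond legs are summed over.
def mps_to_full(cores):
    result = cores[0]                  # shape (chi, d, chi)
    for core in cores[1:]:
        # Contract the right bond leg of `result` with the left bond
        # leg of the next core.
        result = np.tensordot(result, core, axes=([-1], [0]))
    # Trace over the outermost bond legs to close the ring.
    return np.trace(result, axis1=0, axis2=-1)

rng = np.random.default_rng(0)
cores = [rng.standard_normal((chi, d, chi)) for _ in range(4)]
T = mps_to_full(cores)
print(T.shape)  # (2, 2, 2, 2)
```

For N = 30 binary pixels, the full tensor already needs over a billion entries, while the MPS representation needs under a thousand parameters.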

Performance in Physics Use-Cases

TensorNetwork is a general-purpose library for tensor network algorithms, and so it should prove useful for physicists as well. Approximating quantum states is a typical use-case for tensor networks in physics, and is well-suited to illustrate the capabilities of the TensorNetwork library. In our second paper, we describe a tree tensor network (TTN) algorithm for approximating the ground state of either a periodic quantum spin chain (1D) or a lattice model on a thin torus (2D), and implement the algorithm using TensorNetwork. We compare the use of CPUs with GPUs and observe significant computational speed-ups, up to a factor of 100, when using a GPU and the TensorNetwork library.
Computational time as a function of the bond dimension, χ. The bond dimension determines the size of the constituent tensors of the tensor network. A larger bond dimension means the tensor network is more powerful, but requires more computational resources to manipulate.

Conclusion and Future Work

These are the first in a series of planned papers to illustrate the power of TensorNetwork in real-world applications. In our next paper we will use TensorNetwork to classify images in the MNIST and Fashion-MNIST datasets. Future plans include time series analysis on the ML side, and quantum circuit simulation on the physics side. With the open source community, we are also always adding new features to TensorNetwork itself. We hope that TensorNetwork will become a valuable tool for physicists and machine learning practitioners.

Acknowledgements

The TensorNetwork library was developed by Chase Roberts, Adam Zalcman, and Bruce Fontaine of Google AI; Ashley Milsted, Martin Ganahl, and Guifre Vidal of the Perimeter Institute; and Jack Hidary and Stefan Leichenauer of X. We'd also like to thank Stavros Efthymiou at X for valuable contributions.

by Chase Roberts, Research Engineer, Google AI and Stefan Leichenauer, Research Scientist, X 

Software Community Growth Through “first-timers-only” Issues

“first-timers-only issues are those which are written in a very engaging, welcoming way, far different than the usual ‘just report the bug’ type of GitHub issue. To read more about these, check out firsttimersonly.com, which really captures how and why this works and is beginning to be a movement in open source coding outreach! Beyond the extra welcome, this also includes getting such well-formatted issues out in front of lots of people who may be contributing to open source software for the very first time. 
It takes a LOT of work to make a good issue of this type, and we often walk through each step required to actually make the requested changes – the point is to help newcomers understand that a) they're welcome, and b) what the collaboration workflow looks like. Read more at https://publiclab.org/software-outreach!”
Since early 2016, we at Public Lab have been working to make our open source software projects more welcoming and inclusive and to grow our software contributor community in diversity and size. Our adoption of the strategy of posting first-timers-only (FTO) issues also started at Public Lab near the end of 2016: https://publiclab.org/notes/warren/10-31-2016/create-a-welcoming-first-timers-only-issue-to-invite-new-software-contributors

During March and April, as GSoC, Outreachy, and other outreach programs were seeking proposals for the upcoming summer, we put a lot of extra time and work into welcoming newcomers into our community and making sure they are well-supported. We've seen a huge increase in newcomers and wanted to report in about how this process has scaled!

Through the end of March, nearly 409 FTO issues had been created across our projects, which shows how many people have been welcomed into open source 🌐 and into our community through these collaborative efforts.

From March 9, 2019, we started maintaining a list of people who want to work on various Public Lab projects - https://github.com/publiclab/plots2/issues/4963 - through first-timers-only issues. And we are proud to announce that over the 20 days at the end of March, the Public Lab community created 55 FTO issues; that is, 13% of all Public Lab FTO issues ever created were opened during this 20-day period.

The idea of maintaining a list of FTO issue-seekers has been a big success and has helped coordinate and streamline the process. We were able to assign issues to nearly 50 contributors in just 20 days. And each day the list is growing; we are opening more and more FTO issues to help new contributors take their first step in open source with Public Lab.

The credit for this tremendous growth goes to the whole Public Lab reviewers team, who ensured that every newcomer got an FTO issue and supported each newcomer in making their first contribution.

What makes this especially cool is that many of the FTO creators were people who had just recently completed their own FTO — then turned around and welcomed someone else into the community. This really highlights how newcomers have special insight into how important it is to welcome people properly, to support them in their first steps, and how making this process a core responsibility of our community has worked well.

Thanks to everyone, for the great work and cheers to this awesome community growth 🎉 🥂 💯

By Gaurav Sachdeva with input from Jeffrey Warren, Public Lab


Easier and More Powerful Observability with the OpenCensus Agent and Collector

The OpenCensus project has grown considerably over the past year, and we’ve had several significant announcements in the first half of 2019, including the project’s merger into OpenTelemetry. The features discussed in this post will move into OpenTelemetry over the coming months.

For those who aren’t already familiar with the project, OpenCensus provides a set of libraries that collect distributed traces and application metrics from your applications and send them to your backend of choice. Google announced OpenCensus one year ago, and the community has since grown to include large companies, major cloud providers, and APM vendors. OpenCensus includes integrations with popular web, RPC, and storage clients out of the box, along with exporters that allow it to send captured telemetry to your backend of choice.

We’ve recently enhanced OpenCensus with two new components: an optional agent that can manage exporters and collect system telemetry from VMs and containers, and an optional collector service that offers improved trace sampling and metrics aggregation. While we’ve already demonstrated these components on stage at KubeCon and Next, we’re now ready to show them more broadly and encourage their use.


The OpenCensus Agent

The OpenCensus agent is an optional component that can be deployed to each of your virtual machines or Kubernetes pods. The agent receives traces and metrics from OpenCensus libraries, collects system telemetry, and can even capture data from a variety of different sources including Zipkin, Jaeger, and Prometheus. The agent has several benefits over exporting telemetry directly from the OpenCensus libraries:
  • The OpenCensus libraries export data to the OpenCensus agent by default, meaning that backend exporters can be reconfigured on the agent without having to rebuild or redeploy your application. This provides value for applications with high deployment costs and for PaaS-like platforms that have OpenCensus already integrated.
  • While the OpenCensus libraries collect application-level telemetry, the agent also captures system metrics like CPU and memory consumption and exports these to your backend of choice.
  • You can configure stats aggregations without redeploying your application.
  • The OpenCensus agent will host z-pages. While we originally made these a part of the libraries, we’ll be moving this functionality to the agent. This should result in a higher quality diagnostic page experience, as we’ll no longer have to reimplement the pages in each language.
  • The OpenCensus agent uses the same exporters already written for the Go OpenCensus library.
While directly exporting to a backend will remain a common use case for developers, we expect most larger deployments to start using the OpenCensus agent. The agent is currently in beta, and we expect it to reach production-ready quality and feature completeness later this year.

The OpenCensus Collector

The OpenCensus collector and agent share the same source code and are both optional – the key difference is how they’re deployed. While the agent runs on each host, the collector is deployed as a service and receives data from OpenCensus libraries and agents running across multiple hosts. The collector has several benefits over exporting telemetry directly from the OpenCensus libraries or agent:
  • Intelligent (tail based) trace sampling is one of the biggest benefits of the collector. By configuring your OpenCensus libraries to sample all traces and send all spans to the collector, you can have the collector perform sampling decisions after the fact! For example, you can configure the collector to sample the slowest one percent of traces at 30%, traces with errors at 100%, and all other traces at 1%!
  • The collector performs well and can be sharded across multiple instances. Current performance scales linearly across cores, allowing 10,000 spans to be collected per 1.2 cores.
  • The collector can be used as a proxy for other telemetry sources. In addition to receiving data from OpenCensus libraries and agents, Zipkin clients, Jaeger clients, and Prometheus clients, the service can be used to receive telemetry from client applications running on the web or on mobile devices.
  • The collector will soon host z-pages for your entire cluster. This is simply an expansion of the z-page functionality that we’ve added to the OpenCensus agent.
  • The collector can be used to apply telemetry policies across your entire application including adding span attributes such as region to all spans received, stripping personally identifiable information by removing or overwriting span attributes, or mapping span attributes to different names.
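To make the tail-based sampling idea above concrete, here is a small Python sketch of the decision logic. The function names and the fixed slow-trace threshold are our own illustration, not the collector's actual configuration schema; the real collector buffers whole traces and identifies the slowest ones by percentile rather than a static cutoff.

```python
import random

def sampling_probability(duration_ms, has_error, slow_threshold_ms=2000):
    """Return a keep-probability for a *completed* trace.

    This is the essence of tail-based sampling: the decision happens
    after the full trace has been observed, so it can depend on the
    trace's total duration and error status.
    """
    if has_error:
        return 1.0    # keep every trace containing an error
    if duration_ms >= slow_threshold_ms:
        return 0.30   # keep 30% of the slowest traces
    return 0.01       # keep 1% of everything else

def should_sample(duration_ms, has_error, rng=random.random):
    """Roll the dice against the policy; rng is injectable for testing."""
    return rng() < sampling_probability(duration_ms, has_error)
```

A head-based sampler, by contrast, must decide at the first span, before it can know whether the trace will turn out slow or broken, which is why moving the decision to the collector is so valuable.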

When to Use Each

As mentioned above, both the agent and collector are optional, and we expect that some developers will continue to export traces and metrics directly from the OpenCensus libraries. However, we expect both to become quite popular in the following scenarios:
  • Many organizations don’t want to have to rebuild and redeploy their apps when they change exporters. The agent provides the flexibility to change exporters without having to modify and redeploy your code.
  • With the OpenCensus agent you can capture system metrics via the same pipeline used to extract application metrics and distributed traces from your application.
  • If you want to make trace sampling decisions more intelligently, you’ll need to start using the collector.
  • With the OpenCensus collector you can minimize egress points and support features including batching, queuing and retry. These features are important when sending tracing and metric data to SaaS-based backends.
  • Platform providers can include the OpenCensus agent and collector into their services, making them available out of the box to customers.

Status

Both the agent and collector are currently in beta, though we know that several companies are already using them in their production services. We’re working towards the 1.0 release of each of these, and we expect this to occur by the end of Q2.

In the meantime, please join us on GitHub, Gitter, and in our monthly community meetings!

gVisor: One Year Later

Last year at KubeCon EU 2018, we open-sourced gVisor to help advance the container security field. Container isolation was, and continues to be, an important topic in the container ecosystem, and we wanted to share with the broader community how we've addressed it at Google. Over the past year, we've listened to your feedback, made enhancements to the project, improved integration and support for Kubernetes, and grown our contributor community.

Extending Kubernetes Support

One of the most common requests we heard was for better Kubernetes support. When we launched, gVisor supported Docker and had only minimal, experimental support for Kubernetes. One year later, gVisor integrates fully with Kubernetes via containerd and Minikube. This includes the ability to run multiple containers in a single pod, full terminal support with kubectl exec, and enforcement of pod cgroup policies. We've also tightened security by isolating our I/O proxies (known as "gofers") in seccomp sandboxes.
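For readers who want to try this, the containerd integration works by registering runsc as an additional runtime in containerd's CRI plugin. The fragment below is a hedged sketch; the exact plugin keys vary between containerd releases, so check the gVisor and containerd documentation for your version.

```toml
# /etc/containerd/config.toml (fragment; keys vary by containerd version)
[plugins.cri.containerd.runtimes.runsc]
  runtime_type = "io.containerd.runsc.v1"
```

Pods can then be directed to the runsc runtime, for example via a RuntimeClass in recent Kubernetes versions, so that selected workloads run inside the gVisor sandbox while the rest of the cluster is unaffected.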
Our Docker support has also improved: gVisor now obeys CPU and memory limits passed to Docker. We also support interactive terminals via docker run -it and docker exec -it, and we have exposed gVisor's native save/restore functionality via the experimental docker checkpoint command.
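Registering gVisor with Docker is a small configuration change: add a runsc runtime entry to /etc/docker/daemon.json. The sketch below assumes runsc is installed at /usr/local/bin/runsc; adjust the path for your installation.

```json
{
  "runtimes": {
    "runsc": {
      "path": "/usr/local/bin/runsc"
    }
  }
}
```

After restarting the Docker daemon, a command such as `docker run --runtime=runsc --cpus=2 --memory=1g -it ubuntu bash` runs the container under gVisor with the CPU and memory limits enforced, and `docker exec -it` works as usual.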

Increasing Compatibility and Performance

Since launch, we’ve increased our compatibility with many popular workloads and released a suite of over 1,500 syscall tests. This test suite helps to prevent regressions and also makes it easier for contributors to get started developing and testing their changes to gVisor.
Along with compatibility, we've also significantly improved performance, particularly around networking. Network-heavy workloads like web servers are an important use case for potential gVisor users, and we've made many optimizations to our network stack, such as enabling Generic Segmentation Offload (GSO). This has resulted in more than a 3x improvement in tcp_benchmark throughput. We've also implemented TCP/IP RFCs such as SACK (selective acknowledgment), which helps maintain throughput over lossy network connections. We've published many performance benchmarks on our website and will update them as we continue to make progress.

Growing the gVisor Community

We’ve also made our development process more open by moving bugs to a GitHub issue tracker, holding monthly community meetings, and starting a new developer-focused mailing list. We've also published a new governance model and code of conduct for our community.

In the last year we also focused on improving our documentation, including new user guides, architecture details, and contribution guides, making it easier for new users to learn about gVisor and start using and contributing to the project. These docs live on a new website we created, gvisor.dev. Both the website and the documentation are open to contributions, and these changes have already brought in contributions from dozens of users all over the world.
We're very excited about the future of gVisor and the great community we are building. We'd love to hear more feedback and look forward to continuing to work toward more open infrastructure and collaboration. To learn more and get involved, check out our new home at gvisor.dev, particularly the community page, where you can join our mailing lists and find the next community meeting.

By Nicolas Lacasse and Ian Lewis, gVisor Team