Tag Archives: OpenCensus

OpenTelemetry’s First Release Candidates

OpenTelemetry has hit another milestone with the tracing specification reaching release candidate status.

With the specification now ready to go, expect to see tracing release candidates of the official APIs and SDKs over the next few weeks, along with updated exporters for Cloud Trace. In the coming months the metrics specification will follow the same path, with metrics release candidates of the APIs and SDKs and Cloud Monitoring exporters, and then the project’s general availability. At that point we’ll switch our default application metrics and distributed tracing instrumentation from OpenCensus to OpenTelemetry.

This is exciting news for Google Cloud customers, as OpenTelemetry will enable even better observability experiences, whether with Cloud Monitoring and Cloud Trace or with the third-party monitoring and operations tools of your choice.

Originally posted on the OpenTelemetry blog.


As we’ve discussed in past announcements, we’re hard at work building OpenTelemetry’s first GA quality release. Today marks another milestone in this journey, with the freezing and first release candidate of the tracing specification.
Tracing Spec Release Candidate

The tracing specification is now considered to be a release candidate (RC) and is frozen, and the OpenTelemetry APIs and SDKs have a stable specification to build their own release candidates against. This means:
  • API, SDK, and Collector release candidates will appear within the next few weeks.
  • No breaking spec changes are allowed between now and the final GA specification, beyond any showstopper (P1) issues that are revealed in the RC period. We don’t expect any of these to appear, but the purpose of the RC period is for us to validate that we have a GA-worthy spec.
  • Some non-breaking changes will be allowed during the RC period. Most of these are clarifications of existing behaviour or are pure editorial updates.
The release candidate portions of the specification include all tracing-related dependencies, specifically the following sections: Trace, Baggage, Resource, Context Propagation, Environment Variables, and Exporters (for traces). You can view the progress of each OpenTelemetry component’s implementation in the project status matrix.

What’s Coming Next?

Achieving a release candidate of the tracing specification has been the top priority of OpenTelemetry since releasing our beta in March. With this completed, our focus now shifts to tracing release candidates of the APIs, SDKs, Collector, and auto instrumentation components, and producing a release candidate of the metrics specification.

RC Tracing Implementations

Most OpenTelemetry APIs and SDKs are close to completing their tracing RC implementations, and we expect the first wave of these to arrive within the next two weeks. Contributors who are looking to provide instrumentation (for various web frameworks, storage clients, etc.) can start building against release candidate APIs once they arrive. While the APIs may change in response to issues discovered during RC usage and testing (which would result in multiple pre-GA release candidates for these components), any such changes will be extremely constrained.

Several SDKs will have two waves of release candidate milestones: the first will contain functionality from the tracing and context propagation sections of the specification, and the second will include release candidate implementations for baggage, exporters, resources, and environment variables.

Metrics

In parallel with the tracing RC component releases, we will apply the focus that we’ve had on tracing to the metrics specification. Starting this week, we will categorize which work items are required for GA, which are optionally allowed in GA (non-breaking), and which will be shifted to post-GA. After completing this, we will track our burndown progress, then lock the metrics specification and publish a metrics specification release candidate once all P1 items are complete. Shortly after that, the APIs, SDKs, Collector, and other components will publish release candidates with RC-quality tracing and metrics functionality.

Productionization and GA Readiness Work

Once the metrics specification, SDKs, Collector, and other components reach release candidate status, we will focus on productionization tasks like writing documentation, producing a post-GA versioning strategy, building additional automated tests, etc. Once we are satisfied with each component’s adoptability and reliability, we will announce their general availability.

Overall Timeline

  1. Components (APIs, SDKs, Collector, auto instrumentation, etc.) issue release candidates with RC-quality tracing functionality.
  2. The metrics section of the specification achieves RC quality and is frozen.
  3. Components issue release candidates with RC-quality tracing and metrics functionality.
  4. Once we are satisfied with our metrics + tracing release candidates, OpenTelemetry goes GA.
  5. Logging enters beta, then issues an RC specification, followed by RC-quality logging functionality in each component, followed by a GA for logging.
We will have a better understanding of our GA release timeline in the coming weeks once outstanding work on the metrics specification is fully accounted for.

Tracking a Language’s Progress

As mentioned above, you can view the progress of a particular component (API, SDK, etc.) in the project status matrix. Each component’s implementation has its own timeline, though a core set (the JavaScript, Java, Go, Python, and .Net APIs + SDKs, the Collector, and Java auto instrumentation) are all tracking well. Each component has its own GA burndown board.

FAQ

I want to use OpenTelemetry on my production services; what’s the impact of today’s announcement?

SDKs with release candidate quality tracing support will be available in a few weeks. Release candidates are not recommended for critical production services; however, they are functional and are intended to offer APIs that are compatible with their upcoming GA counterparts.

I want to write instrumentation for OpenTelemetry; what’s the impact of today’s announcement?

APIs with release candidate quality tracing support will be available shortly (prior to the SDKs). You can bind against these to produce traces that will be picked up by the OpenTelemetry SDKs or any other implementation of the OpenTelemetry APIs.
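For example, here is a minimal sketch of producing a span with the OpenTelemetry JavaScript API; the package layout and method names during the release candidate period may differ slightly from what is shown, so treat it as illustrative rather than canonical:

import { trace } from '@opentelemetry/api';

// Instrumentation binds only against the API package; an SDK (or the default
// no-op implementation) is wired up separately by the application owner.
const tracer = trace.getTracer('my-instrumentation-library');

export function handleRequest() {
  const span = tracer.startSpan('handleRequest');
  span.setAttribute('component', 'example');
  // ... do the actual work here ...
  span.end();
}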

When will OpenTelemetry offer drop-in replacements for OpenCensus and OpenTracing?

Work is currently underway on bridge APIs that allow OpenTelemetry SDKs to seamlessly replace OpenCensus libraries or OpenTracing implementations. While the delivery date of this functionality is not tied to OpenTelemetry’s GA goals, we expect this to arrive between each API + SDK’s release candidate and GA milestones.

Wrapping Up

Producing a specification release candidate is an important milestone for the OpenTelemetry community, and it took significant effort on the part of our contributors to make this happen. We’d like to thank every person and every organization that was a part of this release, and to recognize that their contributions are laying the groundwork for the project's long-term success.

If you haven’t been a part of the OpenTelemetry community but would like to join, now is the perfect time! OpenTelemetry is now in the top three CNCF projects by weekly and cumulative commits, and no matter your level of commitment (ha!) to the project, contributions are always welcome. If you have a particular area that you’re interested in (for example, the Python API + SDK), the best way to get involved is to join the relevant weekly SIG meetings or interact with other contributors on Gitter.

By Morgan McLean, Google Cloud

OpenTelemetry is now beta!

OpenTelemetry and OpenCensus have been a critical part of our goal of making platforms like Kubernetes more observable and more manageable. This has been a multi-year journey for us, from creating OpenCensus and growing it into a core part of major web services’ observability stack, to our announcement of OpenTelemetry last year and the rapid growth of the OpenTelemetry community.

Beta is a big milestone for OpenTelemetry, as developers can now use the SDKs, integrations, and Collector to capture distributed traces and metrics from their applications and send them to backends like Prometheus, Jaeger, Cloud Monitoring, Cloud Trace, and others for analysis. This is a great time to try out OpenTelemetry and get involved in the observability community, whether you’re looking to improve visibility into your production services, give your users performance data from client libraries that you maintain, or join a rapidly-growing open source project!

To learn more, please read our official community announcement, which is copied below:

Co-authored by maintainers, community contributors, and members of the OpenTelemetry governance committee.

OpenTelemetry has just begun its first wave of beta releases, starting with the Collector and the Erlang, Go, Java, JavaScript, and Python SDKs, followed by the .Net SDK and Java auto-instrumentation agent. This means that you can begin integrating OpenTelemetry into your applications and client libraries to capture app-level metrics and distributed traces.

If you’re not already familiar with OpenTelemetry, the project provides a single set of language-specific APIs, SDKs, agents, and other components that you can use to collect distributed traces, metrics, and related metadata from your applications. In addition to its core capabilities, much of OpenTelemetry’s utility comes from integrations for HTTP and RPC libraries, storage clients, etc. that allow developers to capture critical observability data from their applications with almost zero effort. After capturing these signals, each OpenTelemetry component can export them to your backends of choice, including Prometheus, Jaeger, Zipkin, Azure Monitor, Dynatrace, Google Cloud Monitoring + Trace, Lightstep, New Relic, and Splunk.
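To give a sense of how these pieces fit together, here is a rough sketch of configuring the beta-era OpenTelemetry JavaScript SDK with a simple console exporter; the package names and APIs were still evolving at this stage, so consult the current documentation rather than treating this as canonical:

import { NodeTracerProvider } from '@opentelemetry/node';
import { SimpleSpanProcessor, ConsoleSpanExporter } from '@opentelemetry/tracing';

// Create a tracer provider, attach an exporter via a span processor, and
// register the provider globally so instrumented libraries pick it up.
const provider = new NodeTracerProvider();
provider.addSpanProcessor(new SimpleSpanProcessor(new ConsoleSpanExporter()));
provider.register();

const tracer = provider.getTracer('beta-example');
const span = tracer.startSpan('do-work');
span.end();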

This first beta release includes:
  • APIs and SDKs for Erlang, Go, Java, JavaScript, and Python, which include the interfaces and implementations that you need to define and create distributed traces and metrics, manage sampling and context propagation, etc. The .Net API + SDK will follow shortly.
  • Language-specific API integrations for at least one popular HTTP framework, gRPC, and at least one popular storage client, which can be enabled with one line of code, and will automatically capture relevant traces and metrics and handle context propagation.
  • Language-specific exporters that allow SDKs to send captured traces and metrics to any supported backends.
  • The OpenTelemetry Collector, which can receive data from OpenTelemetry SDKs and other sources, and then export this telemetry to any supported backend.
  • Auto-Instrumentation for Java that captures telemetry from 47 Java libraries and frameworks without requiring any modification to your application.
  • Documentation for each component including getting started guides.
As these and subsequent OpenTelemetry components enter beta (requirements and release plan), we are declaring that they are ready to start integrating with. This means that service developers can begin to include OpenTelemetry in their applications and that maintainers of storage, RPC, etc. clients should start testing the OpenTelemetry APIs to provide better observability to their users.

However, this does come with some caveats:
  • Each OpenTelemetry component will likely undergo several beta releases in the coming weeks — this is simply the first.
  • While functional, beta components have not gone through thorough testing or benchmarking and they are not intended for production workloads.
  • While we aim to avoid any major changes to the OpenTelemetry APIs between beta and GA release candidates, we cannot guarantee that there will not be any changes during this period.
  • Some functionality is still missing from the first beta and will be added in subsequent releases; this is documented in each component’s GitHub repository.
In the coming weeks, you can expect additional beta releases from the first wave of OpenTelemetry components and others. In particular, we expect the API + SDK for .Net and the Java auto-instrumentation agent to be ready soon. Eventually, components will reach a level of maturity and testing where we’ll feel confident in naming them a release candidate (RC), after which we will not make any breaking changes to the APIs for that component.

This beta milestone is a huge accomplishment for the OpenTelemetry community, and every contributor should be proud of the fact that OpenTelemetry is now working and ready to integrate with. This is a great opportunity for the maintainers of client libraries to begin integrating with the OpenTelemetry APIs, for end-users to start integrating it into their services, and for anyone interested in contributing to join our rapidly growing community by joining our mailing lists, Gitter chats, and the monthly community meeting!

By Morgan McLean, Product Manager

OpenCensus Web: Unlocking Full End-to-End Observability for Your Entire Stack

OpenCensus Web is a tool to trace and monitor the user-perceived performance of your web pages. It can help determine whether or not your web pages are experiencing performance issues that you might otherwise not know how to diagnose.

Web application owners want to monitor the operational health of their applications so that they can better understand actual user performance; however, capturing relevant telemetry from your web applications is often very difficult. Today, we are introducing OpenCensus Web (OC Web) to make instrumenting and exporting metrics and distributed traces from web applications simple and automatic.

Background

The OpenCensus project provides a set of language-specific instrumentation libraries that collect traces and metrics from applications and export them to tracing and monitoring backends like Prometheus, Zipkin, Jaeger, Stackdriver, and others.

The OpenCensus Web library is an implementation of OpenCensus that focuses on frontend web application code that executes in the browser. OC Web instruments web pages and collects user-side performance data, including latency and distributed traces, which gives developers the information to diagnose frontend issues and monitor overall application health.

Looking beyond the work on OC Web, the wider OpenCensus family of projects is merging with OpenTracing to form OpenTelemetry. OC Web’s functionality will be migrated into OpenTelemetry JS once that project is ready; in the meantime, OC Web will continue to be available as an alpha release.

Architecture

OC Web interacts with three application components:
  • Frontend web server: renders the initial HTML to the browser, including the OC Web library code and configuration. This would typically be instrumented with an OpenCensus server-side library (Go, Java, etc.). We also suggest that you create an endpoint in the server that receives HTTP/JSON traces and proxies them to the OpenCensus Agent.
  • Browser JS: the OC Web library code that runs in the browser. It measures user interactions, collects browser data, and writes them to the OpenCensus Agent as spans via HTTP/JSON.
  • OpenCensus Agent: receives traces from the frontend web server proxy endpoint or directly from the browser JS, and exports them to a trace backend (e.g. Stackdriver, Zipkin).
OC Web requires the OpenCensus Agent, which will proxy and re-export telemetry to your backend of choice. For more details see the documentation.
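As a rough sketch of how the browser side gets wired up (the package name, entry point, and global configuration variables below are assumptions based on the OC Web examples and may have changed, so consult the OC Web documentation for the exact API):

import { exportRootSpanAfterLoadEvent } from '@opencensus/web-initial-load';

// Assumed configuration globals: point OC Web at the OpenCensus Agent (or at
// the frontend web server's proxy endpoint) and set the sampling rate.
window.ocAgent = 'https://example.com/trace-proxy';
window.ocSampleRate = 1.0; // Sample every page load; lower this in production.

// Export the initial page load root span once the document load event fires.
exportRootSpanAfterLoadEvent();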


Features

Initial page load tracing

You can use OC Web to capture traces of initial page loads, which will even capture events that take place before the OC Web library is loaded by the browser! Initial page load traces show you which resources may be causing poor website performance, and contain data that you can’t typically capture from a distributed tracing system.

To measure the time of the overall initial page load interaction, OC Web waits until after the document load event and generates spans from the initial load performance timings via the browser's Navigation Timing and Resource Timing APIs. Below is a sample trace from OC Web that has been exported to Zipkin and captured from the initial load example app. Notice that there is an overall ‘nav./’ span for the user navigation experience until the browser load event fires.

This example also includes ‘/’ spans for the client and server side measurements of the initial HTML load. These spans are connected by the server sending back a ‘window.traceparent’ variable in the W3C Trace Context format, which is necessary because the browser does not send a trace context header for the initial page load. The server side spans also indicate how much time was spent parsing and rendering the template:

Notice the long js task span in the previous image, which indicates a CPU-bound JavaScript event loop that took 80.095ms, as measured by the Long Tasks browser API.
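To make the context handoff described above concrete, here is a hedged sketch of the snippet a frontend web server might render; the values are illustrative and follow the W3C Trace Context layout (version-traceId-spanId-flags):

<script>
  // Injected by the frontend web server so OC Web can parent the initial-load
  // spans to the server-side trace.
  window.traceparent = '00-0af7651916cd43dd8448eb211c80319c-b7ad6b7169203331-01';
</script>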

Span annotations for DOM and network events

Spans captured by OC Web also include detailed annotations for DOM events like `domInteractive` and `first-paint`, as well as network events like domainLookupStart and secureConnectionStart. Here is a similar trace exported to Stackdriver Trace with the annotations expanded:


User Interactions

For single page applications there are often subsequent interactions after the initial load (e.g. user clicks a button or navigates to a different section of the page). Measuring end-user interactions within a browser application adds useful data for your application:
  • Ability to relate an initial page render with subsequent on-page interactions
  • Visibility into slowness as perceived by the end user, for example, an unresponsive page after clicking
Currently, OC Web tracks clicks and route transitions by monkey-patching the Angular Zone.js library. OC Web tracks the subsequent synchronous and asynchronous tasks (e.g. setTimeouts, XHRs, etc.) caused by the interaction even if there are several concurrent interactions.

Automatic tracing for click events

All browser click events are traced as long as the click occurs on a DOM element (e.g. a button) and the clicked element is not disabled. When the user clicks the element, a new Zone is created to measure this interaction and determine the total time.

To name this root span, we give developers the option of adding the data-ocweb-id attribute to an element to provide a custom name for the interaction. In the following example, the resulting name will be ‘Save edit user info’:
<button type="submit" data-ocweb-id="Save edit user info">       Save changes </button>
This helps you identify the traces related to a specific element and avoids ambiguity when there are similar interactions. If you don’t add this attribute, OC Web will use the DOM element ID and tag name plus the event involved in the interaction. For example, clicking this button:
<button id="save_changes"> Save changes </button>
will generate a span named “button#save_changes click”.

Automatic tracing for route transitions

OC Web traces route transitions between the different sections of your page by monkey-patching the History API. OC Web will name these interactions with the pattern ‘Navigation /path/to/page’. The following screenshot of a trace exported to Stackdriver from the user interaction example shows a Navigation trace which includes several network calls before the route transition is complete:

Creating your own custom spans

OC Web allows you to instrument your web application with custom spans for tasks or code involved in a user interaction. Here is a code snippet that shows how to do this:

import { tracing } from '@opencensus/web-instrumentation-zone';

function handleClick() {
  // Start a child span of the current interaction's root span.
  // This must run within code that executes as part of the interaction
  // (for example, the button's click handler).
  const childSpan = tracing.tracer.startChildSpan({
    name: 'name of your child span'
  });
  // Do some operation...
  // Finish the child span at the end of its operation
  childSpan.end();
}

See the OC Web documentation for more details.

Automatic spans for HTTP requests and Browser performance data

OC Web automatically intercepts and generates spans for HTTP requests generated by user interactions. Additionally, OC Web attaches Trace Context Headers to each intercepted HTTP request, using the W3C Trace Context format. This is only done for same-origin requests or requests that match a provided regex.

If your servers are also instrumented with OpenCensus, these requests will continue to be traced throughout your backend services! This lets you determine whether an issue originates in the front end or on the server side.

OC Web also uses Performance API data to add annotations like domainLookupStart and responseEnd, and generates spans for any CORS preflight requests.

The next screenshot shows a trace exported to Stackdriver as a result of the user interaction example. There, you can see several network calls with automatically generated spans (e.g. ‘Sent./sleep’) with annotations, the server-side spans (e.g. ‘/sleep’ and ‘ocweb.handlerequest’), and CORS preflight-related spans:

Relate user interactions back to the initial page load tracing

OC Web attaches the initial page load trace id to the user interactions as an attribute and a span link. This enables you to find the initial load trace and its interaction traces with a single attribute query, and to understand a user’s whole navigation through the application for a given page load.

The next screenshot shows a search by initial_load_trace_id attribute containing all user interaction traces after the initial page loaded:


Making it Real

With OC Web and a few lines of instrumentation, you can now export distributed traces from your web application. Start by exploring the initial load and user interaction examples, and feel free to poke around the source code and send us feedback, either via Gitter or by contributing pull requests!

By Cristian González – OpenCensus Team – Software Engineering intern at Google Summer 2019 and student of Computer and Systems Engineering at Universidad Nacional de Colombia.

Special thanks to Dave Raffensperger for being the initial creator of OC Web and for guiding me through the process of developing it.

OpenTelemetry: The Merger of OpenCensus and OpenTracing

We’ve talked about OpenCensus a lot over the past few years, from the project’s initial announcement and its roots at Google, to partners (Microsoft, Dynatrace) joining the project, to new functionality that we’re continually adding. The project has grown beyond our expectations and now sports a mature ecosystem with Google, Microsoft, Omnition, Postmates, and Dynatrace making major investments, and a broad base of community contributors.

We recently announced that OpenCensus and OpenTracing are merging into a single project, now called OpenTelemetry, which brings together the best of both projects and offers a frictionless migration experience. We’ve made a lot of progress so far: we’ve established a governance committee, built a Java prototype API + implementation, formed workgroups for each language, and set an aggressive implementation schedule.

Today we’re highlighting the combined project at the keynote of Kubecon and announcing that OpenTelemetry is now officially part of the Cloud Native Computing Foundation! Full details are available in the CNCF’s official blog post, which we’ve copied below:

A Brief History of OpenTelemetry (So Far)

After many months of planning, discussion, prototyping, more discussion, and more planning, OpenTracing and OpenCensus are merging to form OpenTelemetry, which is now a CNCF sandbox project. The seed governance committee is composed of representatives from Google, Lightstep, Microsoft, and Uber, and more organizations are getting involved every day.

And we couldn't be happier about it – here’s why.

Observability, Outputs, and High-Quality Telemetry

Observability is a fashionable word with some admirably nerdy and academic origins. In control theory, “observability” measures how well we can understand the internals of a given system using only its external outputs. If you’ve ever deployed or operated a modern, microservice-based software application, you have no doubt struggled to understand its performance and behavior, and that’s because those “outputs” are usually meager at best. We can’t understand a complex system if it’s a black box. And the only way to light up those black boxes is with high-quality telemetry: distributed traces, metrics, logs, and more.

So how can we get our hands – and our tools – on precise, low-overhead telemetry from the entirety of a modern software stack? One way would be to carefully instrument every microservice, piece by piece, and layer by layer. While this would literally work, it’s also a complete non-starter – we’d spend as much time on the measurement as we would on the software itself! We need telemetry as a built-in feature of our services.

The OpenTelemetry project is designed to make this vision a reality for our industry, but before we describe it in more detail, we should first cover the history and context around OpenTracing and OpenCensus.

OpenTracing and OpenCensus

In practice, there are several flavors (or “verticals” in the diagram) of telemetry data, and then several integration points (or “layers” in the diagram) available for each. Broadly, the cloud-native telemetry landscape is dominated by distributed traces, timeseries metrics, and logs; and end-users typically integrate with a thin instrumentation API or via straightforward structured data formats that describe those traces, metrics, or logs.



For several years now, there has been a well-recognized need for industry-wide collaboration in order to amortize the shared cost of software instrumentation. OpenTracing and OpenCensus have led the way in that effort, and while each project made different architectural choices, the biggest problem with either project has been the fact that there were two of them. And, further, that the two projects weren’t working together and striving for mutual compatibility.

Having two similar-yet-not-identical projects out in the world created confusion and uncertainty for developers, and that made it harder for both efforts to realize their shared mission: built-in, high-quality telemetry for all.

Getting to One Project

If there’s a single thing to understand about OpenTelemetry, it’s that the leadership from OpenTracing and OpenCensus are co-committed to migrating their respective communities to this single and unified initiative. Although all of us have numerous ideas about how we could boil the ocean and start from scratch, we are resisting those impulses and focusing instead on preparing our communities for a successful transition; our priorities for the merger are clear:
  • Straightforward backwards compatibility with both OpenTracing and OpenCensus (via software bridges)
  • Minimizing the time where OpenTelemetry, OpenTracing, and OpenCensus are being co-developed: we plan to put OpenTracing and OpenCensus into “readonly mode” before the end of 2019.
  • And, again, to simplify and standardize the telemetry solutions available to developers.
In many ways, it’s most accurate to think of OpenTelemetry as the next major version of both OpenTracing and OpenCensus. Like any version upgrade, we will try to make it easy for both new and existing end-users, but we recognize that the main benefit to the ecosystem is the consolidation itself – not some specific and shiny new feature – and we are prioritizing our own efforts accordingly.

How you can help

OpenTelemetry’s timeline is an aggressive one. While we have many open-source and vendor-licensed observability solutions providing guidance, we will always want as many end-users involved as possible. The single most valuable thing any end-user can do is also one of the easiest: check out the actual work we’re doing and provide feedback. Via GitHub, Gitter, email, or whatever feels easiest.

Of course we also welcome code contributions to OpenTelemetry itself, code contributions that add OpenTelemetry support to existing software projects, documentation, blog posts, and the rest of it. If you’re interested, you can sign up to join the integration effort by filling in this form.

By Ben Sigelman, co-creator of OpenTracing and member of the OpenTelemetry governing committee, and Morgan McLean, Product Manager for OpenCensus at Google since the project’s inception

OpenMetrics project accepted into CNCF Sandbox

For the past several months, engineers from Google Cloud, Prometheus, and other vendors have been aligning on OpenMetrics, a specification for metrics exposition. Today, the project was formally announced and accepted into the CNCF Sandbox, and we’re currently working on ways to support OpenMetrics in OpenCensus, a set of uniform tracing and stats libraries that work with multiple vendors’ services. This multi-vendor approach works to put architectural choices in the hands of developers.
OpenMetrics stems from the stats formats used inside of Prometheus and Google’s Monarch time-series infrastructure, which underpins both Stackdriver and internal monitoring applications. As such, it is designed to be immediately familiar to developers and capable of operating at extreme scale. With additional contributions and review from AppOptics, Cortex, Datadog, InfluxData, Sysdig, and Uber, OpenMetrics has begun the cross-industry collaboration necessary to drive adoption of a new specification.
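To give a sense of what a metrics exposition format looks like, here is a small sample in the Prometheus text format that OpenMetrics builds on (the metric name and values are illustrative):

# HELP http_requests_total Total number of HTTP requests served.
# TYPE http_requests_total counter
http_requests_total{method="post",code="200"} 1027
http_requests_total{method="post",code="400"} 3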

OpenCensus provides automatic instrumentation, APIs, and exporters for stats and distributed traces across C++, Java, Go, Node.js, Python, PHP, Ruby, and .Net. Each OpenCensus library allows developers to automatically capture distributed traces and key RPC-related statistics from their applications, add custom data, and export telemetry to their back-end of choice. Google has been a key collaborator in defining the OpenMetrics specification, and we’re now focusing on how to best implement this inside of OpenCensus.

“Google has a history of innovation in the metric monitoring space, from its early success with Borgmon, which has been continued in Monarch and Stackdriver. OpenMetrics embodies our understanding of what users need for simple, reliable and scalable monitoring, and shows our commitment to offering standards-based solutions,” said Sumeer Bhola, Lead Engineer on Monarch and Stackdriver at Google.

For more information about OpenMetrics, please visit openmetrics.io. For more information about OpenCensus and how you can quickly enable trace and metrics collection from your application, please visit opencensus.io.

By Morgan McLean, Product Manager for OpenCensus and Stackdriver APM

OpenCensus’s journey ahead: enhanced feature set

This is the second half of a series of blog posts about what’s coming next for OpenCensus. The OpenCensus Roadmap is composed of two pillars: increased language, framework, and platform coverage, and the addition of more powerful features.

In this blog post we’re going to discuss the second pillar: new functionality that makes OpenCensus more powerful. This includes dramatically improved sampling capabilities and new types of telemetry that we’re looking to capture.

More Power

Intelligent Sampling

In addition to expanding the list of languages and frameworks that OpenCensus supports out of the box, we’ll also be increasing the usefulness of existing functionality.

Services instrumented with OpenCensus currently sample new requests randomly (at a configurable rate) when those requests arrive without existing trace context, usually directly from clients. While this does provide an effective view into application latency, developers are mostly interested in traces of particularly slow requests, or requests that also capture a useful event such as an exception.

We’re adding support for OpenCensus to make deferred sampling decisions - that is, to sample requests after they’ve propagated through several systems, while still preserving the full critical path of the trace. Though the feature is just starting development, we’re focusing on making sampling more intelligent - for example, by triggering traces based on accumulated latency, errors, and debugging events. Expect to hear more about this soon.

New Telemetry, Including Logs and Errors

As we mentioned in our last blog post, our ambition is for OpenCensus to become a ubiquitous observability framework, meaning that collecting traces and stats alone won’t be enough. Correlating traces and tags with logs and errors represents an obvious next step, and we’re currently working through what this might look like. Longer term, this list could grow to include profiles and other signals.

The topic of what signals will come next is worthy of its own blog post, and you can expect us to start talking about this more in the coming months.

Server-provided Traces and Metrics

Distributed applications can obtain observability into their own performance by instrumenting themselves with OpenCensus; however, visibility into the performance of external services or APIs that they call into is still limited. For example, imagine a web service that calls into Google Cloud Platform’s Cloud Bigtable service: the application developer would have visibility into their client-side traces, but would not be able to tell how much time Cloud Bigtable took to respond versus how much time was spent in the network. We’re working on adding server-side traces and metrics: essentially a way for service providers to summarize server-side traces and metrics and share them with calling applications.

Cluster-wide z-pages

Today, OpenCensus provides a stand-alone application called z-pages that includes an embedded web server and displays configuration parameters and trace information in real-time, as captured from any OpenCensus libraries running on the same host. By accessing a z-page, developers can configure sampling rate for the local instance, or view traces, tags, and stats as they’re being processed in real-time.

Longer-term, we wish to extend this functionality to enable cluster-wide z-pages, which could provide the same functionality as the current z-pages experience, aggregated over all of the instances of a particular service. We’re still discussing different implementation options, and whether we can tie this into other aggregation-related workstreams that we’re already pursuing.

Wrapping up

Do the strategy and roadmap above resonate with what you’d want to get from the OpenCensus libraries? We’d love to hear your ideas and what you’d like to see prioritized.

As we mentioned in our last post, none of this is possible without the support and participation from the community. Check out our repo and start contributing. No contribution or idea is too small. Join other developers and users on the OpenCensus Gitter channel. We’d love to hear from you.

By Pritam Shah and Morgan McLean, Census team

OpenCensus’s journey ahead: platforms and languages

We recently blogged about the value of OpenCensus and how Google uses Census internally. Today, we want to share more about our long-term vision for OpenCensus.

The goal of OpenCensus is to be a ubiquitous observability framework that allows developers to automatically collect, aggregate, and export traces, metrics, and other telemetry from their applications. We plan on getting there by building easy-to-use libraries that automatically integrate with as many technologies and frameworks as possible.

Our roadmap has two themes: increased language, framework, and platform coverage, and the addition of more powerful features. Today, we’ll discuss the first theme: increased coverage.

Increasing Coverage

More Language Coverage

In January, we released OpenCensus for Java, Go, and C++ as well as tracing support for Python, PHP, and Ruby. We’re about to start development of OpenCensus for Node.js and .NET, and you’ll see activity on these repositories ramp up in the coming quarter.

Integration with more Frameworks, Platforms, and Clients

We want to provide a great out-of-the-box experience, so we need to automatically capture traces and metrics with as little developer effort as possible. To achieve this, we’ll be creating integrations for popular web frameworks, RPC frameworks, and storage clients. This will enable automatic context propagation, span creation, and trace annotations, without requiring extra work on behalf of developers.

As a basic example, OpenCensus already integrates with Go’s default gRPC and HTTP handlers to generate spans (with relevant annotations) and to pass context.

More complex integrations will provide more information to developers. Here’s an example of a trace captured with our upcoming MongoDB instrumentation, shown on Stackdriver Trace and AWS X-Ray:
A MongoDB trace shown in Stackdriver Trace

The same trace captured in X-Ray

Istio

OpenCensus will soon have out-of-the-box tracing and metrics collection in Istio. We’re currently working through our initial designs and implementation for integrations with the Envoy Sidecar and Istio Mixer service. Our goal is to provide Istio users with a great out-of-the-box tracing and metrics collection experience.

Kubernetes

We have two primary use cases in mind for Kubernetes deployments: providing cluster-wide visibility via z-pages, and better labeling of traces, stats, and metrics. Cluster-wide z-pages will allow developers to view telemetry in real time across an entire Kubernetes deployment, independently of their back-end. This is incredibly useful when debugging immediate high-impact issues like service outages.

Client Application Support

OpenCensus currently provides observability into back-end services, however this doesn’t tell the whole story about end-to-end application performance. Throughout 2018, we plan to add instrumentation for client and front-end web applications, so developers can get traces that begin from customers’ devices and reflect actual perceived latency, and metrics captured from client code.

We aim to add support for instrumenting Android, iOS, and front-end JavaScript, though this list may grow or change. Expect to hear more about this later in 2018.

Next Up

Next week we’ll discuss some of the new features that we’re looking to bring to OpenCensus, including notable enhancements to the trace sampling logic.

None of this is possible without the support and participation from the community. Please check out our repository and start contributing; we welcome contributions of any size -- however you want to take part. You can join other developers and users on the OpenCensus Gitter channel. We’d love to hear from you!

By Pritam Shah and Morgan McLean, Census team

The value of OpenCensus

This post is the second in a series about OpenCensus. You can find the first post here.

Early this year we open sourced OpenCensus, a distributed tracing and stats instrumentation framework. Today, we continue our journey by discussing the history and motivation behind the project here at Google, and what benefits OpenCensus has to offer. As OpenCensus continues to gain partners we’ll be shifting the focus away from Google, but we wanted to use this post as an opportunity to answer some of the questions that we’re most commonly asked at meetings and events.

Why did Google open source this? Why now?

Google open sources a lot of projects and we’ve begun documenting some of the reasons why on the Google Open Source website. What about OpenCensus specifically? There are many reasons it made sense for us to release this project and get others involved.

We had already released other related projects. The Census team had been eager to share their work with the public for a while. With projects like gRPC and Istio going open source, it made sense to release OpenCensus as well.

It helped us serve our customers better. Teams with performance-sensitive APIs like BigTable and Spanner needed more insight into their customers’ calling patterns while debugging issues, and wanted a way to connect customers’ traced requests to equivalent traces inside of Google.

Managing integrations ourselves is costly. The Stackdriver Trace engineering team had been investing considerable resources building their own instrumentation libraries across seven languages, and it became apparent that the cost of building and maintaining integrations into web and RPC frameworks would continue in perpetuity. Releasing these libraries might encourage framework providers to manage these integrations instead.

We have a vested interest in everyone else’s reliability and performance. Because Google is a web search and cloud services provider, our users benefit as web services and applications become increasingly reliable and performant. Popularizing distributed tracing and app-level metrics is one way to achieve this. This is especially important with the rising popularity of microservices-based architectures, which are difficult to debug without distributed tracing.

This expands the market for other services. By making tracing and app-level metrics more accessible, we grow the overall market for monitoring and application performance management (APM) tools, which benefits Stackdriver Monitoring and Stackdriver Trace.

As these factors came into focus, the decision to open source the project became clear.

Benefits to Partners and the Community

Google’s reasons for developing and promoting OpenCensus apply to partners at all levels.

Service developers reap the benefits of having automatic traces and stats collection, along with vendor-neutral APIs for manually interacting with these. Developers who use open source backends like Prometheus or Zipkin benefit from having a single set of well-supported instrumentation libraries that can export to both services at once.

For APM vendors, being able to take advantage of already-provided language support and framework integrations is huge, and the exporter API allows traces and metrics to be sent to an ingestion API without much additional work. Developers who might have been working on instrumentation code can now focus on other more important tasks, and vendors get traces and metrics back from places they previously didn’t have coverage for.

Cloud and API providers have the added benefit of being able to include OpenCensus in client libraries, allowing customers to gain insight into performance characteristics and debug issues without having to contact support. In situations where customers were still not able to diagnose their own issues, customer traces can be matched with internal traces for faster root cause analysis, regardless of which tracing or APM product they use.

What’s Next

If you missed the first post in our series, you can read it now. In upcoming blog posts and videos we’ll discuss:
  • The current schedule for OpenCensus on a language-by-language basis
  • Guides on how to add custom instrumentation to your application
  • Techniques for adding more automatic integrations to OpenCensus
  • Our long-term vision for OpenCensus
Thanks for reading – we’ll see you on GitHub!

By Pritam Shah and Morgan McLean, Census team

How Google uses Census internally

This post is the first in a series about OpenCensus, a set of open source instrumentation libraries based on what we use inside Google. This series will cover the benefits of OpenCensus for developers and vendors, Google’s interest in open sourcing instrumentation tools, how to get started with OpenCensus, and our long-term vision.

If you’re new to distributed tracing and metrics, we recommend Adrian Cole’s excellent talk on the subject: Observability Three Ways.

Gaining Observability into Planet-Scale Computing

Google adopted or invented new technologies, including distributed tracing (Dapper) and metrics processing, in order to operate some of the world’s largest web services. However, building analysis systems didn’t solve the difficult problem of instrumenting and extracting data from production services. This is what Census was created to do.

The Census project provides uniform instrumentation across most Google services, capturing trace spans, app-level metrics, and other metadata like log correlations from production applications. One of the biggest benefits of uniform instrumentation to developers inside of Google is that it’s almost entirely automatic: any service that uses gRPC automatically collects and exports basic traces and metrics.

OpenCensus offers these capabilities to developers everywhere. Today we’re sharing how we use distributed tracing and metrics inside of Google.

Incident Management

When latency problems or new errors crop up in a highly distributed environment, visibility into what’s happening is critical. For example, when the latency of a service crosses expected boundaries, we can view distributed traces in Dapper to find where things are slowing down. Or when a request is returning an error, we can look at the chain of calls that led to the error and examine the metadata captured during a trace (typically logs or trace annotations). This is effectively a bigger stack trace. In rare cases, we enable custom trigger-based sampling which allows us to focus on specific kinds of requests.

Once we know there’s a production issue, we can use Census data to determine the regions, services, and scope (one customer vs many) of a given problem. You can use service-specific diagnostics pages, called “z-pages,” to monitor problems and the results of solutions you deploy. These pages are hosted locally on each service and provide a firehose view of recent requests, stats, and other performance-related information.

Performance Optimization

At Google’s scale, we need to be able to instrument and attribute costs for services. We use Census to help us answer questions like:
  • How much CPU time does my query consume?
  • Does my feature consume more storage resources than before?
  • What is the cost of a particular user operation at a particular layer of the stack?
  • What is the total cost of a particular user operation across all layers of the stack?
We’re obsessed with reducing the tail latency of all services, so we’ve built sophisticated analysis systems that process traces and metrics captured by Census to identify regressions and other anomalies.

Quality of Service

Google also improves performance dynamically depending on the source and type of traffic. Using Census tags, traffic can be directed to more appropriate shards, or we can do things like load shedding and rate limiting.

Next week we’ll discuss Google’s motivations for open sourcing Census, then we’ll shift the focus back onto the open source project itself.

By Pritam Shah and Morgan McLean, Census team