Tag Archives: Management Tools

Now, you can monitor, debug and log your Ruby apps with Stackdriver



The Google Cloud Ruby team continues to expand our support for Ruby apps running on Google Cloud Platform (GCP). Case in point, we’ve released beta gems for Stackdriver, our monitoring, logging and diagnostics suite. Now you can use Stackdriver in your Ruby projects not only on GCP but also on AWS and in your own data center. You can read more about the libraries on GitHub.

As with all our Ruby libraries, we’re focused on ensuring the Stackdriver libraries make sense to Rubyists and help them do their jobs more easily. Installation is easy. With Rails, simply add the "stackdriver" gem to your Gemfile and the entire suite is automatically loaded for you. With other Rack-based web frameworks like Sinatra, you require the gem and use the provided middleware.
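For example, a minimal setup might look like this (the middleware class name follows the google-cloud-ruby instrumentation docs; check your gem’s README for the current API):

# Gemfile
gem "stackdriver"

# app.rb (Sinatra or another Rack app): mount the middleware you want, e.g. logging
require "google/cloud/logging"
use Google::Cloud::Logging::Middleware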

Stackdriver Debugger is my favorite Stackdriver product. It provides live, production debugging without needing to redeploy. Once you’ve included the gem in your application, go to Cloud Console, upload your code (or point the debugger at a repository) and you’ll get snapshots of your running application including variable values and stacktraces. You can even add arbitrary log lines to your running application without having to redeploy it. Better yet, Debugger captures all this information in just one request to minimize the impact on your running application.
Stackdriver Error Reporting is Google Cloud's exception detection and reporting tool. It catches crashes in your application, groups them logically, alerts you to them (with appropriate alerting back-off), and displays them for you neatly in the UI. The UI shows you stacktraces of the errors and links to logs and distributed traces for each crash, and lets you acknowledge errors and link a group of errors to a bug in your bug database so you can keep better track of what is going on. In addition to automatically detecting errors, Stackdriver Error Reporting lets you send errors from your code in just a single line. 
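Reporting an error from your own code really is a one-liner. Here’s a minimal sketch using the google-cloud-error_reporting gem (method name per its README; treat the details as illustrative):

require "google/cloud/error_reporting"

begin
  fail "Boom!"
rescue => exception
  # The single line: send the exception to Stackdriver Error Reporting
  Google::Cloud::ErrorReporting.report exception
end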
Stackdriver Trace is Google's application performance monitoring and distributed tracing tool. In Rails it automatically shows you the amount of time a request spends hitting the database, rendering the views, and in application logic. It can also show you how a request moves through a microservices architecture and give you detailed reports on latency trends over time. This way, you can answer once and for all "Did the application get slower after the most recent release?"
Stackdriver Logging’s Ruby library was already generally available, and is currently being used by many Container Engine customers in conjunction with the fluentd logging agent. You can use the logging library even if you don’t use Container Engine, since it’s a drop-in replacement for the standard Ruby and Rails Logger. And when the Stackdriver gem is included in a Rails application, information like request latency is automatically pushed to Stackdriver Logging as well.
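Outside of Rails, you can construct the drop-in logger yourself. A minimal sketch, with illustrative resource labels:

require "google/cloud/logging"

logging = Google::Cloud::Logging.new
# Describe where the app is running; these label values are placeholders
resource = logging.resource "gae_app",
                            module_id: "default",
                            version_id: "v1"
logger = logging.logger "my_app_log", resource
logger.info "Job started."  # same interface as Ruby's standard Logger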
You can find instructions for getting started with the Stackdriver gems on GitHub. The Stackdriver gems are currently in beta, and we’re eager for folks to try them out and give us feedback either in the Ruby channel on the GCP Slack or on GitHub, so we can make the libraries as useful and helpful as possible to the Ruby community.

Customizing Stackdriver Logs for Container Engine with Fluentd


Many Google Cloud Platform (GCP) users are now migrating production workloads to Container Engine, our managed Kubernetes environment. Container Engine supports Stackdriver Logging on GCP by default, using Fluentd under the hood to send your logs to Stackdriver.


You may also want to fully customize your Container Engine cluster’s Stackdriver logs with additional logging filters. If that describes you, check out this tutorial where you’ll learn how you can configure Fluentd in Container Engine to apply additional logging filters prior to sending your logs to Stackdriver.
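To give you a flavor of what the tutorial covers, a Fluentd filter stanza like the following (modern Fluentd syntax; the tag pattern, record key and regex are illustrative) drops health-check noise before it reaches Stackdriver:

<filter kubernetes.**>
  @type grep
  # Exclude records whose "log" field looks like a health-check request
  <exclude>
    key log
    pattern /GET \/healthz/
  </exclude>
</filter>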

Using Stackdriver Logging for dedicated game server instances: new tutorial


Capturing logs from dedicated game server instances in a central location can be useful for troubleshooting, keeping track of instance runtimes and machine load, and capturing historical data that occurs during the lifetime of a game.

But collecting and making sense of these logs can be tricky, especially if you are launching the same game in multiple regions, or have limited resources on which to collect the logs themselves.

One possible solution to these problems is to collect your logs in the cloud. Doing this enables you to mine your data with tools that deliver speed and power not possible with an on-premises logging server. Storage and data management are simple in the cloud and not bound by physical hardware. Additionally, you can access cloud logging resources globally. Studios and BI departments across the globe can access the same logging database regardless of physical location, making collaboration for distributed teams significantly easier.


We recently put together a tutorial that shows you how to integrate Stackdriver Logging, our hosted log management and analysis service for data running on Google Cloud Platform (GCP) and AWS, into your own dedicated game server environment. It also offers some key storage strategies, including how to migrate this data to BigQuery and other Google Cloud tools. Check it out, and let us know what other Google Cloud tools you’d like to learn how to use in your game operations. You can reach me on Twitter at @gcpjoe.

Partnering on open source: Managing Google Cloud Platform with Chef


Managing cloud resources is a critical part of the application lifecycle. That’s why today, we released and open sourced a set of comprehensive cookbooks for Chef users to manage Google Cloud Platform (GCP) resources.

Chef is a continuous automation platform powered by an awesome community. Together, Chef and GCP enable you to drive continuous automation across infrastructure, compliance and applications.

The new cookbooks allow you to define an entire GCP infrastructure using Chef recipes. The Chef server then creates the infrastructure, enforces it, and ensures it stays in compliance. The cookbooks are idempotent, meaning you can reapply them when changes are required and still achieve the same result.

The new cookbooks support the following products:

  • Google Compute Engine
  • Google Container Engine
  • Google Cloud DNS
  • Google Cloud SQL
  • Google Cloud Storage

We also released a unified authentication cookbook that provides a single authentication mechanism for all the cookbooks.

These new cookbooks are Chef certified, having passed the Chef engineering team’s rigorous quality and review bar, and are open source under the Apache 2.0 license in GCP’s GitHub repository.

We tested the cookbooks on CentOS, Debian, Ubuntu, Windows and other operating systems. Refer to the operating system support matrix for compatibility details. The cookbooks work with Chef Client, Chef Server, Chef Solo, Chef Zero, and Chef Automate.

To learn more about these Chef cookbooks, register for the webinar with me and Chef’s JJ Asghar on 15 October 2017.

Getting started with Chef on GCP

Using these new cookbooks is as easy as following these four steps:
  1. Install the cookbooks.
  2. Get a service account with privileges for the GCP resources that you want to manage, and enable the APIs for each of the GCP services you will use.
  3. Describe your GCP infrastructure in Chef:
    1. Define a gauth_credential resource
    2. Define your GCP infrastructure
  4. Run Chef to apply the recipe.
Now, let’s discuss these steps in more detail.

1. Install the cookbooks

You can find all the GCP cookbooks for Chef on Chef Supermarket. We also provide a “bundle” cookbook that installs every GCP cookbook at once. That way you can choose the granularity of the code you pull into your infrastructure.

Note: These Google cookbooks require neither administrator privileges nor special privileges/scopes on the machines that Chef runs on. You can install the cookbooks either as a regular user on the machine that will execute the recipe, or on your Chef server; the latter option distributes the cookbooks to all clients.

The authentication cookbook requires a few of our gems. You can install them using various methods, including using Chef itself:


chef_gem 'googleauth'
chef_gem 'google-api-client'


For more details on how to install the gems, please visit the authentication cookbook documentation.

Now, you can go ahead and install the Chef cookbooks. Here’s how to install them all with a single command:


knife cookbook site install google-cloud


Or, you can install only the cookbooks for select products:


knife cookbook site install google-gcompute    # Google Compute Engine
knife cookbook site install google-gcontainer  # Google Container Engine
knife cookbook site install google-gdns        # Google Cloud DNS
knife cookbook site install google-gsql        # Google Cloud SQL
knife cookbook site install google-gstorage    # Google Cloud Storage


2. Get your service account credentials and enable APIs

To ensure maximum flexibility and portability, you must authenticate and authorize GCP resources using service account credentials. Using service accounts allows you to restrict the privileges to the minimum necessary to perform the job.

Note: Because service accounts are portable, you don’t need to run Chef inside GCP. Our cookbooks run on any computer with internet access, including those at other cloud providers. You might, for example, execute deployments from within a CI/CD pipeline such as Travis or Jenkins, or from your own development machine.

Click here to learn more about service accounts, and how to create and enable them.

Also make sure to enable the APIs for each of the GCP services you intend to use.
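Here’s a sketch using the Cloud SDK (the account name, key path, project and API are placeholders; enable one API per product you manage):

# Create a service account and download a JSON key for Chef to use
gcloud iam service-accounts create chef-on-gcp
gcloud iam service-accounts keys create ~/my_account.json \
    --iam-account=chef-on-gcp@my-project.iam.gserviceaccount.com

# Enable the API for each product, e.g. Cloud SQL
gcloud services enable sqladmin.googleapis.com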

3a. Define your authentication mechanism

Once you have your service account, add the following resource block to your recipe to begin authenticating with it. The resource name, here 'mycred', is what other resources reference in their credential parameter.


gauth_credential 'mycred' do
  action :serviceaccount
  path '/home/nelsonjr/my_account.json'
  scopes ['https://www.googleapis.com/auth/compute']
end


For further details on how to set up or customize authentication, visit the Google Authentication cookbook documentation.

3b. Define your resources

You can manage any resource for which we provide a type. The example below creates an SQL instance and database in Cloud SQL. For the full list of resources that you can manage, please refer to the respective cookbook documentation link or to this aggregate summary view.


gsql_instance 'my-app-sql-server' do
  action :create
  project 'google.com:graphite-playground'
  credential 'mycred'
end

gsql_database 'webstore' do
  action :create
  charset 'utf8'
  instance 'my-app-sql-server'
  project 'google.com:graphite-playground'
  credential 'mycred'
end


Note that the above code has to be described in a recipe within a cookbook. We recommend you have a “profile” wrapper cookbook that describes your infrastructure, and reference the Google cookbooks as a dependency.
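For example, the wrapper cookbook’s metadata.rb might declare its dependencies like this ('mycloud' is an illustrative name; the cookbook names are as published on Chef Supermarket):

# metadata.rb of the "profile" wrapper cookbook
name 'mycloud'
depends 'google-gauth'  # shared authentication cookbook
depends 'google-gsql'   # pull in only the products you manage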

4. Apply your recipe

Next, we direct Chef to enforce the recipe in the “profile” cookbook. For example:

$ chef-client -z --runlist 'recipe[mycloud::myapp]'

In this example, mycloud is the “profile” cookbook, and myapp is the recipe that contains the GCP resource declarations.

Please note that you can apply the recipe from anywhere that Chef can execute recipes (client, server, automation), once or multiple times, or periodically in the background using an agent.

Next steps

Now you’re ready to start managing GCP resources with Chef and to start reaping the benefits of cross-cloud configuration management. Our plan is to continue to improve the cookbooks and add support for more Google products. We’re also preparing to release the technology used to create these cookbooks as open source. If you have questions about this effort, please visit the Chef on GCP discussion forum, or reach out to us at chef-on-gcp@google.com.

Announcing Stackdriver Debugger for Node.js



We’ve all been there. The code looked fine on your machine, but now you’re in production and it’s suddenly not working.

Tools like Stackdriver Error Reporting can make it easier to know when something goes wrong — but how do you diagnose the root cause of the issue? That’s where Stackdriver Debugger comes in.
Stackdriver Debugger lets you inspect the state of an application at any code location without using logging statements and without stopping or slowing down your applications. This means users are not impacted during debugging. Using the production debugger, you can capture the local variables and call stack and link it back to a specific line location in your source code. You can use this to analyze your applications’ production state and understand your code’s behavior in production.

What’s more, we’re excited to announce that Stackdriver Debugger for Node.js is now officially in beta. The agent is open source, and available on npm.


Setting up Stackdriver Debugger for Node.js


To get started, first install the @google-cloud/debug-agent npm module in your application:

$ npm install --save @google-cloud/debug-agent

Then, require the debug agent in the entry point of your application:

require('@google-cloud/debug-agent')
.start({ allowExpressions: true });

Now deploy your application! You’ll need to associate your sources with the application running in production, and you can do this via Cloud Source Repositories, GitHub or by copying sources directly from your desktop.



Using Logpoints 

The passive debugger is just one of the ways you can diagnose issues with your app. You can also add log statements in real time, in your production application, without needing to rebuild or redeploy it. These are called Stackdriver Debugger Logpoints.

These are just a few of the ways you can use Stackdriver Debugger for Node.js in your application. To get started, check out the full setup guide.

We can’t wait to hear what you think. Feel free to reach out to us on Twitter @googlecloud, or request an invite to the Google Cloud Slack community and join the #nodejs channel.

Announcing new Stackdriver Logging features and expanded free logs limits



When we announced the general availability of Google Stackdriver, our integrated monitoring, logging and diagnostics suite for applications running on cloud, we heard lots of enthusiasm from our user community as well as some insightful feedback:
  • Analysis - Logs-based metrics are great, but you’d like to be able to extract labels and values from logs, too.
  • Exports - Love being able to easily export logs, but it’s hard to manage exports across dozens or hundreds of projects.
  • Controls - Aggregating all logs in a single location and exporting them to various places is fantastic, but you want control over which logs go into Stackdriver Logging.
  • Pricing - You want room to grow with Stackdriver without worrying too much about the cost of logging all that data.
We heard you, which is why today we’re announcing a variety of new updates to Stackdriver, as well as updated pricing to give you the flexibility to scale and grow.

Here’s a little more on what’s new.

Easier analysis with logs-based metrics 

Stackdriver was created with the belief that bringing together multiple signals from logs, metrics, traces and errors can provide greater insight than any single signal. Logs-based metrics are a great example. That’s why the new and improved logs-based metrics are:
  • Faster - We’ve decreased the time from when a log entry arrives until it’s reflected in a logs-based metric from five minutes to under a minute. 
  • Easier to manage - Now you can extract user-defined labels from text in the logs. Instead of creating a new logs-based metric for each possible value, you can use a field in the log entry as a label.
  • More powerful - Extract values from logs and turn them into distribution metrics. This allows you to efficiently represent many data points at each point in time. Stackdriver Monitoring can then visualize these metrics as a heat map or by percentile. 
The example above shows a heat map produced from a distribution metric extracted from a text field in log entries.

Tony Li, Site Reliability Engineer at the New York Times, explains how the new user-defined labels, applied to proxies, help them improve reliability and performance:
“With LBMs [logs-based metrics], we can monitor errors that occur across multiple proxies and visualize the frequency based on when they occur to determine regressions or misconfigurations.”
The faster pipeline applies to all logs-based metrics, including the already generally available count-based metrics. Distribution metrics and user labels are now available in beta.
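Count-based metrics can also be created from the command line; here’s a minimal sketch (the metric name and filter are illustrative):

gcloud beta logging metrics create error_count \
    --description="Count of ERROR-level log entries" \
    --log-filter='severity>=ERROR'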


Manage logs across your organization with aggregated exports 


Stackdriver Logging gives you the ability to export logs to GCS, PubSub or BigQuery using log sinks. We heard your feedback that managing exports across hundreds or thousands of projects in an organization can be tedious and error-prone. For example, if a security administrator in an organization wanted to export all audit logs to a central project in BigQuery, she would have to set up a log sink at every project and validate that the sink was in place for each new project.

With aggregated exports, administrators of an organization or folder can set up sinks once to be inherited by all the child projects and subfolders. This makes it possible for the security administrator to export all audit logs in her organization to BigQuery with a single command:

gcloud beta logging sinks create my-bq-sink \
    bigquery.googleapis.com/projects/my-project/datasets/my_dataset \
    --log-filter='logName= "logs/cloudaudit.googleapis.com%2Factivity"' \
    --organization=1234 --include-children

Aggregated exports help ensure that logs in future projects will be exported correctly. Since the sink is set at the organization or folder level, it also prevents an individual project owner from turning off a sink.

Control your Stackdriver Logging pipeline with exclusion filters 

All logs sent to the Logging API, whether sent by you or by Google Cloud services, have always gone into Stackdriver Logging, where they’re searchable in the Logs Viewer. But we heard feedback that users wanted more control over which logs get ingested into Stackdriver Logging, and we listened. To address this, exclusion filters are now in beta. Exclusion filters allow you to reduce costs, improve the signal-to-noise ratio by reducing chatty logs, and manage compliance by blocking logs from a source, or logs matching a pattern, from being available in Stackdriver Logging. The new Resource Usage page provides visibility into which resources are sending logs and which are excluded from Stackdriver Logging.


This makes it easy to exclude some or all future logs from a specific resource. In the example above, we’re excluding 99% of successful load balancer logs. We know the choice and freedom to use any solution is important, which is why all GCP logs are available to you irrespective of the logging exclusion filters, to export to BigQuery, Google Cloud Storage or any third-party tool via PubSub. Furthermore, Stackdriver will not charge for this export, although BigQuery, GCS and PubSub charges will apply.
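For reference, an exclusion filter expressing the example above might look like this in the Logs Viewer’s filter language (the resource type and field paths are illustrative; sample() matches a deterministic fraction of entries by insertId):

resource.type="http_load_balancer"
httpRequest.status=200
sample(insertId, 0.99)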

Starting Dec 1, Stackdriver Logging offers 50GB of logs per project per month for free 


You told us you wanted room to grow with Stackdriver without worrying about the cost of logging all that data, which is why on December 1 we’re increasing the free logs allocation to an industry-leading 50GB per project per month. This increase aims to bring the power of Stackdriver Logging search, storage, analysis and alerting capabilities to all our customers.

Want to keep logs beyond the free 50GB/month allocation? You can sign up for the Stackdriver Premium Tier, or pay for the logs overage in the Basic Tier. After Dec 1, any additional logs will be charged at a flat rate of $0.50/GB.


Audit logs, still free and now available for 13 months 

We’re also exempting admin activity audit logs from the limits and overage. They’ll be available in Stackdriver in full without any charges. You’ll now be able to keep them for 13 months instead of 30 days.

Continuing the conversation 


We hope this brings the power of Stackdriver Logging search, storage, analysis and alerting capabilities to all our customers. We have many more exciting new features planned, including a time range selector coming in September to make it easier to get visibility into the timespan of search results. We’re always looking for more feedback and suggestions on how to improve Stackdriver Logging. Please keep sending us your requests and feedback.


Preventing log waste with Stackdriver Logging



If you work with web applications, you probably know they can generate a lot of log messages. There are often multiple log messages for each request, log messages for database queries, and log messages from a monitoring system. Analyzing and understanding all that data can take up precious time and energy, especially if your logs are full of "normal" noise that’s not relevant to the issue you’re currently facing.

A few years ago, I gave a talk about how we, as a community, need to do a better job managing our data collection and retention. Even with sophisticated tools, searching several terabytes of data takes longer than searching a few gigabytes. Luckily, the solution is simple: stop logging everything. Instead, selectively log what is likely to be important and don't log the noise.

Stackdriver Logging has recently released a new feature, Log Exclusion Filtering, that helps you be more selective about what is included in your log aggregation. Exclusion filters let you completely exclude log messages from a specific product or messages that match a certain query. You can also choose to sample certain messages so that only a percentage of the messages appear in Stackdriver Logs Viewer. You can learn more about getting started with Log Exclusions here.

Deciding what should always be logged and what you can safely sample or exclude depends on the details of your application. However, we thought we’d share some types of messages you can consider filtering out.



Logs from monitoring systems 

Most web applications have some kind of uptime monitoring in place, and I use Stackdriver Monitoring to monitor mine. It verifies that my application is up every minute from more than five locations. My application logs every request, and so my logs grow by five messages a minute. These messages do not have much value for me; if the uptime check fails, I can already see that in Stackdriver Monitoring. So I created a filter to exclude all messages from Stackdriver Uptime checks.
If your application is running on App Engine, or you’re using host health checking with Container Engine or Compute Engine, you might consider excluding those messages as well. If you run into an issue with your health check, you can choose to re-enable those log messages while you debug the issue.
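As an illustration, an exclusion filter for uptime-check entries in App Engine request logs might look roughly like this (the user-agent substring is how Stackdriver uptime checks identify themselves; adjust the resource type to wherever your app runs):

resource.type="gae_app"
protoPayload.userAgent:"GoogleStackdriverMonitoring-UptimeChecks"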


Logs that indicate success

Logs that indicate everything is fine are another category of messages that are often safe to exclude. HTTP requests with status codes in the 200 range are one example. Log messages for redirects can also be safely excluded in most situations. You may also be able to exclude, or at least only sample, log messages from successful database queries.
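A sketch of such a "success" exclusion for request logs (field paths vary by resource type, so treat this as illustrative):

httpRequest.status>=200
httpRequest.status<300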

These are just a few examples. Looking over your application logs will likely reveal several other messages that are basically "success spam." Since success messages are some of the most common messages in our logs, reducing them can result in significantly fewer logs overall. This can reduce both actual and cognitive costs associated with log waste.


Logs from non-production systems 


Most folks know that staging and production logs should be clearly separated. But sometimes you’re only occasionally using a tool in production, or perhaps trying out a new product and the logs aren't yet critical. In cases like these, you can turn off logs for an entire resource type. For example, if you only use BigQuery for ad-hoc analysis, turning off Stackdriver ingestion of BigQuery logs can help reduce the amount of logs that you need to sort through.



Logs from high throughput endpoints 


Logs from high throughput endpoints are another category to consider reducing. One of the applications I worked on early in my career drove 80% of its traffic through a single endpoint. We were generating several gigabytes of data a day for just that URL. Because there was so much data, we could have safely reduced our logging of that traffic from 100% to 50%, or possibly lower. There were enough requests that we would likely get an example of any errors even if we only logged one out of every two messages. Static traffic is often high throughput, too. If your application is logging each time someone downloads a stylesheet or favicon, you may be able to reduce waste by only logging these messages occasionally.


The what ifs 

These are just a few examples of what can be reduced to help get your logging under control. Looking at your application logs and thinking about the types of errors you often see can yield even more ideas for reducing log volume.

So why don’t more of us reduce our logging? The most common reason I hear is: "What if we need it?" With Stackdriver Log Exclusions, you can always turn off an exclusion and see all the future traffic in the Logs Viewer. Once you’re aware of an issue, you can adjust your logging to help debug it. Additionally, you can export all the logs, even the excluded ones, to BigQuery or Google Cloud Storage if you need the full historical logs for debugging or other purposes.

Stackdriver Logging and Stackdriver Log Exclusions are powerful, and I encourage you to try them out to see if they can help you reduce costs and use resources more efficiently. To learn more, visit cloud.google.com/logging.

Using Stackdriver Logging for visual effects and animation pipelines: new tutorial



Capturing logs in a visual effects (VFX), animation or games pipeline is useful for troubleshooting automated tools, keeping track of process runtimes and machine load, and capturing historical data that occurs during the life of a production.

But collecting and making sense of these logs can be tricky, especially if you're working on the same project from multiple locations, or have limited resources on which to collect the logs themselves. 

Collecting logs in the cloud enables you to understand this data by mining it with tools that deliver speed and power not possible with an on-premises logging server. Storage and data management are simple in the cloud and not bound by physical hardware. Additionally, you can access cloud logging resources globally; visual effects or animation facilities can access the same logging database regardless of physical location, making international productions far simpler to manage and understand.

We recently put together a tutorial that shows you how to integrate Stackdriver Logging, our hosted log management and analysis service for data running on Google Cloud Platform (GCP) and AWS, into your own visual effects or animation pipeline. It also shows some key storage strategies and how to migrate this data to BigQuery and other Google Cloud tools. Check it out, and let us know what other Google Cloud tools you’d like to learn how to use in your visual effects or animation pipeline. You can reach us on Twitter at @gcpjoe or @agrahamvfx.

ASP.NET Core developers, meet Stackdriver diagnostics




Being able to analyze application logs, errors and latency is key to understanding failures, but it can be tricky and time-consuming to implement correctly. That’s why we’re happy to announce the general availability of Stackdriver Diagnostics integration for ASP.NET Core applications, providing libraries to easily integrate Stackdriver Logging, Error Reporting and Trace into your ASP.NET Core applications with a minimum of effort and code. On the road to GA, we fixed bugs, listened to and applied customer feedback, and did extensive testing to make sure it’s ready for your production workloads.

The Google.Cloud.Diagnostics.AspNetCore package is available on NuGet. ASP.NET Classic is also supported with the Google.Cloud.Diagnostics.AspNet package.

Now, let’s look at the various Google Cloud Platform (GCP) components that we integrated into this release, and how to begin using them to troubleshoot your ASP.NET Core application.

Stackdriver Logging 

Stackdriver Logging allows you to store, search, analyze, monitor and alert on log data and events from GCP and AWS. Logging to Stackdriver is simple with Google.Cloud.Diagnostics.AspNetCore. The package uses ASP.NET Core’s built-in logging API; simply add the Stackdriver provider, then create and use a logger as you normally would. Your logs will then show up in the Stackdriver Logging section of the Google Cloud Console. Initializing and sending logs to Stackdriver Logging only requires a few lines of code:

public void Configure(IApplicationBuilder app, ILoggerFactory loggerFactory)
{
    // Initialize Stackdriver Logging
    loggerFactory.AddGoogle("YOUR-GOOGLE-PROJECT-ID");
    ...
}

public void LogMessage(ILoggerFactory loggerFactory)
{
    // Send a log to Stackdriver Logging
    var logger = loggerFactory.CreateLogger("NetworkLog");
    logger.LogInformation("This is a log message.");
}
Here’s a view of Stackdriver logs shown in Cloud Console:

This shows two different logs that were reported to Stackdriver. An expanded log shows its severity, timestamp, payload and many other useful pieces of information.

Stackdriver Error Reporting 

Adding the Stackdriver Error Reporting middleware to the beginning of your middleware flow reports all uncaught exceptions to Stackdriver Error Reporting. Exceptions are grouped and shown in the Stackdriver Error Reporting section of Cloud Console. Here’s how to initialize Stackdriver Error Reporting in your ASP.NET Core application:

public void ConfigureServices(IServiceCollection services)
{
    services.AddGoogleExceptionLogging(options =>
    {
        options.ProjectId = "YOUR-GOOGLE-PROJECT-ID";
        options.ServiceName = "ImageGenerator";
        options.Version = "1.0.2";
    });
    ...
}

public void Configure(IApplicationBuilder app)
{
    // Use before handling any requests to ensure all unhandled exceptions are reported.
    app.UseGoogleExceptionLogging();
    ...
}

You can also report caught and handled exceptions with the IExceptionLogger interface:
public void ReadFile(IExceptionLogger exceptionLogger)
{
    try
    {
        string scores = File.ReadAllText(@"C:\Scores.txt");
        Console.WriteLine(scores);
    }
    catch (IOException e)
    {
        exceptionLogger.Log(e);
    }
}
Here’s a view of Stackdriver Error Reports in Cloud Console:

This shows the occurrence of an error over time for a specific application and version. The exact error is shown on the bottom.

Stackdriver Trace 

Stackdriver Trace captures latency information for all of your applications. For example, you can diagnose whether HTTP requests are taking too long by using a Stackdriver Trace integration point. Like Error Reporting, Trace hooks into your middleware and should be added at the beginning of the middleware flow. Initializing Stackdriver Trace is similar to setting up Stackdriver Error Reporting:

public void ConfigureServices(IServiceCollection services)
{
    string projectId = "YOUR-GOOGLE-PROJECT-ID";
    services.AddGoogleTrace(options =>
    {
        options.ProjectId = projectId;
    });
    ...
}

public void Configure(IApplicationBuilder app)
{
    // Use at the start of the request pipeline to ensure the entire request is traced.
    app.UseGoogleTrace();
    ...
}
You can also manually trace a section of code that will be associated with the current request:
public void TraceHelloWorld(IManagedTracer tracer)
{
    using (tracer.StartSpan(nameof(TraceHelloWorld)))
    {
        Console.Out.WriteLine("Hello, World!");
    }
}
Here’s a view of a trace across multiple servers in Cloud Console:
This shows the time spent for portions of an HTTP request. The timeline shows both time spent on the front-end and on the back-end.

Not using ASP.NET Core? 


If you haven’t made the switch to ASP.NET Core but still want to use Stackdriver diagnostics tools, we also provide a package for ASP.NET, appropriately named Google.Cloud.Diagnostics.AspNet. It provides simple Stackdriver diagnostics integration into ASP.NET applications. You can add Error Reporting and Tracing for MVC and Web API to your ASP.NET application with a single line of code. And while ASP.NET does not have a logging API, we have also integrated Stackdriver Logging with log4net in our Google.Cloud.Logging.Log4Net package.

Our goal is to make GCP a great place to build and run ASP.NET and ASP.NET Core applications, and troubleshooting performance and errors is a big part of that. Let us know what you think of this new functionality, and leave us your feedback on GitHub.

Add log statements to your application on the fly with Stackdriver Debugger Logpoints



In 2014 we launched Snapshots for Stackdriver Debugger, which gave developers the ability to examine their application’s call stack and variables in production with no impact to users. In the past year, developers have taken over three hundred thousand production snapshots across their services running on Google App Engine and on VMs and containers hosted anywhere.

Today we’re showing off Stackdriver Debugger Logpoints. With Logpoints, you can instantly add log statements to your production application without rebuilding or redeploying it. Like Snapshots, this is immensely useful when diagnosing tricky production issues that lack an obvious root cause. Even better, Logpoints fits into existing logs-based workflows.
Adding a logpoint is as simple as clicking a line in the Debugger source viewer and typing in your new log message (just make sure that you open the Logpoints tab in the right-hand pane first). If you haven’t synced your source code, you can add logpoints by specifying the target file and line number in the right-hand pane or via the gcloud command-line tool. Variables can be referenced by {variableName}. You can review the full documentation here.
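From the command line, adding a logpoint is a single gcloud invocation; here’s a sketch with an illustrative file, line and message:

gcloud debug logpoints create app/main.py:45 "queue size is {queue.size}"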

Because Logpoints writes its output through your app’s existing logging mechanism, it's compatible with any logging aggregation and analysis system, including Splunk or Kibana, or you can read its output from locally stored logs. However, Stackdriver Logging customers benefit from being able to read their log output from within the Stackdriver Debugger UI.


Logpoints is already available for applications written in Java, Go, Node.js, Python and Ruby via the Stackdriver Debugger agents. As with Snapshots, this same set of languages is supported across VMs (including Google Compute Engine), containers (including Google Container Engine), and Google App Engine. Logpoints has been accessible through the gcloud command line interface for some time, and the process for using Logpoints in the CLI hasn’t changed.

Each logpoint lasts up to twenty-four hours or until it’s deleted or the application is redeployed. Adding a logpoint incurs a performance cost on par with adding an additional log statement to your code directly. However, the Stackdriver Debugger agents automatically throttle any logpoints that negatively impact your application’s performance, as well as any logpoints or snapshots with conditions that take too long to evaluate.

At Google, we use technology like Snapshots and Logpoints to solve production problems every day to make our services more performant and reliable. We’ve heard from our customers how snapshots are the bread and butter of their problem-solving processes, and we’re excited to see how you use Logpoints to make your cloud applications better.