
dd-trace-go's Introduction


Datadog Client Libraries for Go

This repository contains Go packages for the client-side components of the Datadog product suite for Application Performance Monitoring, Continuous Profiling and Application Security Monitoring of Go applications.

  • Datadog Application Performance Monitoring (APM): Trace requests as they flow across web servers, databases and microservices so that developers have great visibility into bottlenecks and troublesome requests. The package gopkg.in/DataDog/dd-trace-go.v1/ddtrace/tracer allows you to trace any piece of your Go code, and commonly used Go libraries can be automatically traced thanks to our out-of-the-box integrations, which can be found in the package gopkg.in/DataDog/dd-trace-go.v1/contrib.

  • Datadog Go Continuous Profiler: Continuously profile your Go apps to find CPU, memory, and synchronization bottlenecks, broken down by function name and line number, to significantly reduce end-user latency and infrastructure costs. The package gopkg.in/DataDog/dd-trace-go.v1/profiler allows you to periodically collect and send Go profiles to the Datadog API.

  • Datadog Application Security Management (ASM) provides in-app monitoring and protection against application-level attacks that aim to exploit code-level vulnerabilities, such as Server-Side Request Forgery (SSRF), SQL injection (SQLi), or reflected Cross-Site Scripting (XSS). ASM identifies services exposed to application attacks and leverages in-app security rules to detect and protect against threats in your application environment. ASM is not a standalone Go package; it is transparently integrated into the APM tracer and can be enabled with DD_APPSEC_ENABLED=true.

Installing

This module contains many packages, but most users should probably install the two packages below:

go get gopkg.in/DataDog/dd-trace-go.v1/ddtrace/tracer
go get gopkg.in/DataDog/dd-trace-go.v1/profiler

Additionally, there are many contrib packages that can be installed to automatically instrument and trace commonly used Go libraries such as net/http, gorilla/mux or database/sql:

go get gopkg.in/DataDog/dd-trace-go.v1/contrib/gorilla/mux

If you installed more packages than you intended, you can use go mod tidy to remove any unused packages.
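
For a quick start, the tracer and profiler are typically started once at program startup. A minimal sketch, assuming a local Datadog Agent with default settings (the service name "my-service" is illustrative):

package main

import (
	"log"

	"gopkg.in/DataDog/dd-trace-go.v1/ddtrace/tracer"
	"gopkg.in/DataDog/dd-trace-go.v1/profiler"
)

func main() {
	// Start the tracer; finished spans are flushed to the local agent.
	tracer.Start(tracer.WithServiceName("my-service"))
	defer tracer.Stop()

	// Start the continuous profiler alongside it.
	if err := profiler.Start(profiler.WithService("my-service")); err != nil {
		log.Fatal(err)
	}
	defer profiler.Stop()

	// ... run your application ...
}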

Documentation

Support Policy

Datadog APM for Go is built upon dependencies defined in specific versions of the host operating system, Go releases, and the Datadog Agent/API. For Go, the two latest releases are GA-supported and the version before that is in Maintenance. We do make efforts to support older releases, but generally these releases are considered Legacy. This library only officially supports first-class ports of Go.

  • General Availability (GA): Full implementation of all features. Full support for new features, bug & security fixes.
  • Maintenance: Full implementation of existing features. May receive new features. Support for bug & security fixes only.
  • Legacy: Legacy implementation. May have limited function, but no maintenance provided. Not guaranteed to compile with the latest version of dd-trace-go. Contact our customer support team for special requests.

Supported Versions

  • Go 1.22: GA
  • Go 1.21: GA
  • Go 1.20: Maintenance
  • Go 1.19: Legacy

Additionally, Datadog's Trace Agent >= 5.21.1 is required.

Package Versioning

A minor version of dd-trace-go is released whenever a new version of Go is released. At that time, the newest Go version is added to GA, the second-oldest supported version moves to Maintenance, and the oldest previously supported version drops to Legacy. For example, for dd-trace-go version 1.37.*:

  • Go 1.18: GA
  • Go 1.17: GA
  • Go 1.16: Maintenance

Then after Go 1.19 is released there will be a new dd-trace-go version 1.38.0 with support:

  • Go 1.19: GA
  • Go 1.18: GA
  • Go 1.17: Maintenance
  • Go 1.16: Legacy

Contributing

Before considering contributions to the project, please take a moment to read our brief contribution guidelines.

Testing

Tests can be run locally using the Go toolset. The grpc.v12 integration will fail (and this is normal), because it covers deprecated methods. In the CI environment we vendor this version of the library inside the integration. Under normal circumstances this is not something we want to do, because users of this integration might be running versions different from the vendored one, creating hard-to-debug conflicts.

To run integration tests locally, you should set the INTEGRATION environment variable. The dependencies of the integration tests are best run via Docker. To get an idea about the versions and the set-up take a look at our docker-compose config.

The best way to run the entire test suite is using the test.sh script. You'll need Docker and docker-compose installed. If this is your first time running the tests, you should run ./test.sh -t to install any missing test tools/dependencies. Run ./test.sh --all to run all of the integration tests through the docker-compose environment. Run ./test.sh --help for more options.

If you're only interested in the tests for a specific integration it can be useful to spin up just the required containers via docker-compose. For example if you're running tests that need the mysql database container to be up:

docker compose -f docker-compose.yaml -p dd-trace-go up -d mysql


dd-trace-go's Issues

contrib/net/http: wrap an http.Handler

#115 provides new integrations for net/http.Mux and gorilla/mux.Router.
However, it doesn't allow wrapping a plain http.Handler with a tracer.

Some of my projects don't use a mux/router.
They just use a simple http.Handler.

It could be interesting to provide a wrapper function for this use case.

The implementation is very simple:

func WrapHandler(h http.Handler, service, resource string, tr *tracer.Tracer) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, req *http.Request) {
		internal.Trace(h, w, req, service, resource, tr)
	})
}

What do you think?
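
For reference, the current contrib/net/http package exposes a wrapper along these lines; a minimal sketch (option names may differ between versions):

package main

import (
	"net/http"

	httptrace "gopkg.in/DataDog/dd-trace-go.v1/contrib/net/http"
)

func main() {
	h := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("ok"))
	})
	// Wrap a plain http.Handler so each request produces a span.
	http.ListenAndServe(":8080", httptrace.WrapHandler(h, "my-service", "/hello"))
}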

opentracing: don't use net/http.DefaultClient|DefaultTransport

Hello,

I'm currently using the master branch, with opentracing (I know it's not yet supported).
I wrapped net/http.DefaultTransport, see #172

Problem: the outgoing HTTP requests to the DataDog agent are now visible in my traces.

Solution: dd-trace-go should either:

  • not use the global net/http.DefaultClient|DefaultTransport and create its own client, or
  • provide a way to customize the HTTP client/transport.

Actually, my real problem is not that the DataDog outgoing requests are visible in my traces (it's not a big issue).
My real problem is that, for one of my projects, the http.request_out trace has replaced the most important trace, amqp.consume.
(My hypothesis is that the DataDog trace is written more often than the other one.)
If you want to check (or I can contact your support):
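
For reference, the v1 tracer lets you hand it a dedicated HTTP client so agent traffic never flows through an instrumented DefaultTransport; a minimal sketch, assuming the tracer.WithHTTPClient start option:

package main

import (
	"net/http"
	"time"

	"gopkg.in/DataDog/dd-trace-go.v1/ddtrace/tracer"
)

func main() {
	// A client with its own (untraced) transport, used only to reach the agent.
	client := &http.Client{
		Transport: &http.Transport{},
		Timeout:   2 * time.Second,
	}
	tracer.Start(tracer.WithHTTPClient(client))
	defer tracer.Stop()
}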

Provide An Easy Way To Disable Tracing For Tests

We run continuous integration and local testing, neither of which we want tracing enabled for. Please provide a simple way to toggle tracing on/off. Our tests have started running slowly while trying to connect to a trace agent that does not exist.
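
One way to do this today is the mocktracer package, which swaps the global tracer for an in-memory implementation; a minimal sketch:

package example

import (
	"testing"

	"gopkg.in/DataDog/dd-trace-go.v1/ddtrace/mocktracer"
)

func TestHandler(t *testing.T) {
	// No agent connection is attempted while the mock tracer is active.
	mt := mocktracer.Start()
	defer mt.Stop()

	// ... exercise instrumented code ...

	_ = mt.FinishedSpans() // optionally assert on the recorded spans
}

Recent releases also honor the DD_TRACE_ENABLED environment variable.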

contrib/go-redis: key not being found always generates error

I set up tracing for go-redis and data flows through from a Redis call and sends a trace off to Datadog fine. I noticed that when I call Redis (e.g. GET "key") and the value doesn't exist, Datadog picks it up as an error and logs a stack trace. A key not being found in Redis is a valid use case in our application and should not generate an error. http://prntscr.com/ff6it6 <- screenshot of what it looks like in Datadog. The error is "internal.RedisError: redis: nil"; note that "redis: nil" just means there was no value to find for the key.
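
As a call-site workaround, a redis.Nil result can be treated as a cache miss rather than an error before it ever reaches the tracer; a minimal sketch using the go-redis v6 API:

package example

import (
	"github.com/go-redis/redis"
)

// get returns the value and whether the key existed; redis.Nil is not an error.
func get(c *redis.Client, key string) (string, bool, error) {
	val, err := c.Get(key).Result()
	if err == redis.Nil {
		return "", false, nil
	}
	if err != nil {
		return "", false, err
	}
	return val, true, nil
}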

Agent is receiving empty, zero value spans despite the application creating full bodied spans

The Datadog agent seems to be receiving empty, zero-value spans, even though it appears that dd-trace-go is sending them fully formed.

Logs from Trace Agent

I'm receiving this log from /var/log/datadog/tracer-agent.log:

2017-07-21 21:32:39 ERROR (receiver.go:232) - dropping trace reason: invalid span Span[t_id:0,s_id:0,p_id:0,ser:,name:,res:]: span.normalize: empty `Service` (debug for more info), [Span[t_id:0,s_id:0,p_id:0,ser:,name:,res:] Span[t_id:0,s_id:0,p_id:0,ser:,name:,res:]]

Below is my tracer set up & logs from the webserver process with the trace (forked dd-tracer-go and added log statement in tracer/span.go & tracer/buffer.go)

Tracer and Span Setup

func sendToken() {
        t := tracer.NewTracer()
        t.SetEnabled(true)
        t.DebugLoggingEnabled = true
        t.SetServiceInfo("gladys", "go", "webserver")
        span := t.NewRootSpan("sendToken.http.request", "gladys", "/token")
        defer span.Finish()

        childSpan := t.NewChildSpan("Sending auth token", span)
        if err := sendAuthToken(); err != nil {
            childSpan.FinishWithErr(err)
            return
        }
        childSpan.Finish()
        return
}

Logs from Webserver Process

I created a fork and added logs, the following is a reference to log statement and the corresponding log:

https://github.com/getbread/dd-trace-go/blob/42c9590004670ce5c322f02b8e38b076a2ac4d24/tracer/buffer.go#L101

------------------ DoFlush called with spans:
Span #0: Name: sendToken.http.request
Service: gladys
Resource: /token
TraceID: 5807396313473053053
SpanID: 5807396313473053053
ParentID: 0
Start: 2017-07-21 21:32:37.328912911 +0000 UTC
Duration: 21.678358ms
Error: 0
Type:
Tags:
	system.pid:26686
	http.status:200
Span #1: Name: Sending auth token
Service: gladys
Resource: Sending auth token
TraceID: 5807396313473053053
SpanID: 393932223006074207
ParentID: 5807396313473053053
Start: 2017-07-21 21:32:37.330201662 +0000 UTC
Duration: 17.738033ms
Error: 0
Type:
Tags:

https://github.com/getbread/dd-trace-go/blob/42c9590004670ce5c322f02b8e38b076a2ac4d24/tracer/span.go#L222

------------ SPAN FINISHED with ID: 393932223006074207 & Service: gladys
------------ SPAN FINISHED with ID: 5807396313473053053 & Service: gladys

These are the logs I received from setting tracer.DebugLoggingEnabled = true:

Sending 1 traces
TRACE: 5807396313473053053
SPAN:
Name: sendToken.http.request
Service: gladys
Resource: /token
TraceID: 5807396313473053053
SpanID: 5807396313473053053
ParentID: 0
Start: 2017-07-21 21:32:37.328912911 +0000 UTC
Duration: 21.678358ms
Error: 0
Type:
Tags:
	system.pid:26686
	http.status:200
SPAN:
Name: Sending auth token
Service: gladys
Resource: Sending auth token
TraceID: 5807396313473053053
SpanID: 393932223006074207
ParentID: 5807396313473053053
Start: 2017-07-21 21:32:37.330201662 +0000 UTC
Duration: 17.738033ms
Error: 0
Type:
Tags:

Questions

Why might the trace agent not be receiving the fully formed spans I'm attempting to send to it?

Add new version tag

Glide fetches the latest available tag to properly set a version (0.3.0). This version doesn't provide all the features available in your documentation (such as .SetMeta on Tracer).

Adding a new version tag on your latest stable commit should fix this issue.

trace: Span.Context() uses a basic type string as key

return context.WithValue(ctx, spanKey, s)

My linter reports this:

should not use basic type string as key in context.WithValue (golint)

IIRC, the best practice is to:

  • create your own context key type
  • create global variable(s) for each context key

Currently, if a third-party library wraps the context using the same string key, it will overwrite the stored span.
The best practice mentioned above prevents that.
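
A minimal sketch of that best practice (Span here stands in for the tracer's span type):

package example

import "context"

// contextKey is unexported, so no other package can produce the same key.
type contextKey struct{}

var spanKey = contextKey{}

type Span struct{}

func withSpan(ctx context.Context, s *Span) context.Context {
	return context.WithValue(ctx, spanKey, s)
}

func spanFrom(ctx context.Context) (*Span, bool) {
	s, ok := ctx.Value(spanKey).(*Span)
	return s, ok
}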

contrib/gorilla/mux: nil pointer dereference (NotFoundHandler)

I'm using the contrib/gorilla/mux.Router with the NotFoundHandler field.
I think there is a bug in the current code, because it panics (nil pointer dereference).

func (r *Router) ServeHTTP(w http.ResponseWriter, req *http.Request) {
	var (
		match mux.RouteMatch
		route string
		err   error
	)
	// get the resource associated to this request
	if r.Match(req, &match) {
		route, err = match.Route.GetPathTemplate()
		if err != nil {
			route = "unknown"
		}
	} else {
		route = "unknown"
	}
	resource := req.Method + " " + route
	internal.TraceAndServe(r.Router, w, req, r.service, resource, r.tracer)
}

It crashes exactly here:

route, err = match.Route.GetPathTemplate()

because match.Route is nil, even though r.Match() returned true!

This is expected behavior:
https://github.com/gorilla/mux/blob/c0091a029979286890368b4c7b301261e448e242/mux.go#L103-L108

  • Route is nil
  • MatchErr is defined
  • the method returns true

My suggestion: contrib/gorilla/mux.Router should also check that match.Route is not nil.
I think we don't need to check match.MatchErr.
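
A minimal sketch of the suggested fix, keeping the rest of ServeHTTP as quoted above:

func (r *Router) ServeHTTP(w http.ResponseWriter, req *http.Request) {
	var match mux.RouteMatch
	route := "unknown"
	// Only read the path template when a concrete route matched;
	// a NotFoundHandler match returns true with a nil Route.
	if r.Match(req, &match) && match.Route != nil {
		if tmpl, err := match.Route.GetPathTemplate(); err == nil {
			route = tmpl
		}
	}
	resource := req.Method + " " + route
	internal.TraceAndServe(r.Router, w, req, r.service, resource, r.tracer)
}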

Public span fields cannot be safely accessed

We have a bunch of public fields on Span that cannot be safely accessed in multi-threaded code. These fields include Error, Name, Meta, and Metrics. We can't even ask the user to lock the span themselves before accessing these fields, as the mutex that protects them is private.

This is particularly concerning for Meta and Metrics, as concurrent map accesses can cause panics in newer versions of Go.

We should make these fields private and write public, thread-safe getters and setters to access this data.
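
A minimal sketch of the proposed direction; the mutex and private field names below are hypothetical, not the library's actual internals:

// SetError marks the span as errored in a thread-safe way.
func (s *Span) SetError(err error) {
	if err == nil {
		return
	}
	s.mu.Lock() // hypothetical mutex protecting the span's fields
	defer s.mu.Unlock()
	s.error = 1                       // hypothetical private replacement for the public Error field
	s.meta["error.msg"] = err.Error() // hypothetical private replacement for the public Meta map
}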

Add Missing support for span propagation

The OpenTracing API allows propagating spans to upstream servers, and allows upstream servers to extract spans from incoming requests:

func makeSomeRequest(ctx context.Context) ... {
    if span := opentracing.SpanFromContext(ctx); span != nil {
        httpClient := &http.Client{}
        httpReq, _ := http.NewRequest("GET", "http://myservice/", nil)

        // Transmit the span's TraceContext as HTTP headers on our
        // outbound request.
        opentracing.GlobalTracer().Inject(
            span.Context(),
            opentracing.HTTPHeaders,
            opentracing.HTTPHeadersCarrier(httpReq.Header))

        resp, err := httpClient.Do(httpReq)
        ...
    }
    ...
}

Extract the span on the upstream server:

http.HandleFunc("/", func(w http.ResponseWriter, req *http.Request) {
    var serverSpan opentracing.Span
    appSpecificOperationName := ...
    wireContext, err := opentracing.GlobalTracer().Extract(
        opentracing.HTTPHeaders,
        opentracing.HTTPHeadersCarrier(req.Header))
    if err != nil {
        // Optionally record something about err here
    }

    // Create the span referring to the RPC client if available.
    // If wireContext == nil, a root span will be created.
    serverSpan = opentracing.StartSpan(
        appSpecificOperationName,
        ext.RPCServerOption(wireContext))

    defer serverSpan.Finish()

    ctx := opentracing.ContextWithSpan(context.Background(), serverSpan)
    ...
}
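
For reference, the v1 tracer later added native propagation helpers; a minimal sketch using tracer.Inject/Extract with an HTTP header carrier:

package example

import (
	"net/http"

	"gopkg.in/DataDog/dd-trace-go.v1/ddtrace"
	"gopkg.in/DataDog/dd-trace-go.v1/ddtrace/tracer"
)

// inject writes the span context into the outbound request headers.
func inject(req *http.Request, span ddtrace.Span) {
	_ = tracer.Inject(span.Context(), tracer.HTTPHeadersCarrier(req.Header))
}

// handler extracts the parent context (if any) and starts a child span.
func handler(w http.ResponseWriter, req *http.Request) {
	var span ddtrace.Span
	if sctx, err := tracer.Extract(tracer.HTTPHeadersCarrier(req.Header)); err == nil {
		span = tracer.StartSpan("web.request", tracer.ChildOf(sctx))
	} else {
		span = tracer.StartSpan("web.request") // no parent: new root span
	}
	defer span.Finish()
}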

Opentracing propagation should return ErrSpanContextNotFound if X-Datadog-Trace-Id header is missing.

The basictracer example demonstrates this: https://github.com/opentracing/basictracer-go/blob/c7c0202a8a77f658aeb2193a27b6c0cfcc821038/propagation_ot.go#L96

The comment in the opentracing-go code: https://github.com/opentracing/opentracing-go/blob/master/propagation.go#L20

With the current code, a ParentId and TraceId of 0 are used, and that is rejected by the datadog-apm agent with the following log line: 2018-01-02 23:25:06 ERROR (receiver.go:219) - dropping trace reason: invalid span Span[t_id:0,s_id:8489972898621406477,p_id:0,ser:...

opentracing: support for logging

Is there a plan to implement the logging functionality for span in opentracing?

func (s *Span) LogFields(fields ...log.Field) {
	// TODO: implementation missing
}

tracer: issues with span.buffer

Hello
I have the following error "no span buffer" since the recent changes to the lib.
I create child spans across services by sending the parent info (traceid, parentid, spanid...) in HTTP headers. (see https://github.com/gchaincl/dd-go-opentracing)
Today the parent's buffer is required to create a child, but since it's a private field it is inaccessible.
Do you think it is possible to add a nil check before span.buffer = parent.buffer in the NewChildSpan func?

undefined: metadata.FromContext

Hi friends,

I am getting this error:

github.com/DataDog/dd-trace-go/tracer/contrib/tracegrpc/grpc.go:112:15: undefined: metadata.FromContext

Seems like the dependencies have changed?
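
This looks like the grpc-go change that replaced metadata.FromContext with FromIncomingContext (server side) and FromOutgoingContext (client side); a minimal sketch of the replacement call:

package example

import (
	"context"

	"google.golang.org/grpc/metadata"
)

func incomingMD(ctx context.Context) metadata.MD {
	// FromIncomingContext is the modern equivalent of the removed FromContext.
	md, _ := metadata.FromIncomingContext(ctx)
	return md
}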

opentracing: diverging from spec

It looks like this library has swapped the meaning of component and operation versus the opentracing spec.

The operation should be passed to StartSpanFromContext, but instead you're setting it onto a custom tag resource.name

The component should be set via e.g. ext.Component.Set(sp, "http.request"), but instead you're passing it in as the operation name in StartSpanFromContext.

[EDIT]
It looks like the opentracing branch is perpetuating this mistake - given that it is going to be such a big breaking change anyway, perhaps it could be fixed in that branch.

tracer: running on localhost keeps saying "cannot flush traces"

I'm trying to set up the APM in our codebase, and during development I'm getting the following message:

2017/02/24 20:57:56 cannot flush traces: Post http://localhost:7777/v0.3/traces: dial tcp [::1]:7777: getsockopt: connection refused
2017/02/24 20:57:56 lost 3 spans

I have two questions:

  1. why is this happening and what does it mean?
  2. is there a way to pass in a logger object to the library so that these logs get logged into our own logging library (https://github.com/sirupsen/logrus) instead of the default?

Thank you!

ddtrace/tracer: Client.Timeout exceeded

I'm running into a lot of DataDog Tracer Errors on the different services we run.

Datadog Tracer Error: Post http://dd-agent.kube-system:8126/v0.3/traces: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Datadog Tracer Error: unable to flush traces, lost 500 traces

This issue occurs on both of our clusters, both running Kubernetes: one self-managed, the other managed by Tectonic. We don't have connectivity issues between our services.
Please note that this issue is intermittent. We do receive traces on DataDog, but there's always a few that get lost.

# go version
go version go1.9.2 darwin/amd64

Shared Span buffer too small in threaded application

We started implementing tracing in one of our low volume Go applications (30 requests/s). This process has 4 worker threads processing data. All traces and spans are within a single thread and never span multiple threads. I noticed that a large majority of our traces were missing spans (see attached screenshots for two identical traces with one missing spans). After some digging it appears that the issue is that when you reach the max buffer size (hard coded const of 10000) any pushes to the buffer will randomly pick an index and overwrite the span at that index https://github.com/DataDog/dd-trace-go/blob/master/tracer/buffer.go#L34

Even at a 1% sample rate we still are not getting full traces, 1 or 2 out of 10 are complete, which makes the tracing unreliable at best, unusable at worst.

Unfortunately both the const for the buffer size, and the buffer struct are private so there's no way to override this value. Is there an undocumented reason for this? Would you guys be open to making the buffer struct a public struct so that a custom sized buffer can be created if need be? (I can make a PR for this if approved). If not the only option I have is to reimplement the buffer.go file in my own package to allow the size to be configurable, however if there's a valid reason this size was chosen I don't want to circumvent that.

Full trace

screenshot at feb 06 16-36-06

Trace with missing spans

screenshot at feb 06 16-38-03

trace: unbounded trace channel can cause out-of-memory crashes

It was actually seen in production with large traces (around 2 MiB, which is an issue in itself) and heavy load: the channel got filled until the host memory became exhausted.

The channel can take 1000 traces and force flushes when it is half-filled.

https://github.com/DataDog/dd-trace-go/blob/master/tracer/channels.go#L8

We need to ensure two properties:

  • the tracer memory usage should remain low (defining a SLA on this one would be great)
  • the flush payload should not reach 10 MiB which is the maximum input size accepted by the agent

Return context when creating a span

I often find myself writing:

span := tracer.NewChildSpanFromContext("name", ctx)
ctx = span.Context(ctx)

I would love a version of tracer.NewChildSpanFromContext that also returns a context with the created span:

span, ctx := tracer.Span("name", ctx) // name tbd
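
For reference, the v1 API later took exactly this shape; a minimal sketch:

package example

import (
	"context"

	"gopkg.in/DataDog/dd-trace-go.v1/ddtrace/tracer"
)

func doWork(ctx context.Context) {
	// Returns the span together with a context carrying it.
	span, ctx := tracer.StartSpanFromContext(ctx, "do.work")
	defer span.Finish()

	downstream(ctx)
}

func downstream(ctx context.Context) { /* ... */ }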

Can't get tests to pass

I want to contribute to this library, but I can't get the tests to pass locally.

vagrant@ubuntu-1204:~/src/dd-trace-go/tracer [16:44:50][master] $ go test
2017/03/29 16:44:58 tracer.SetSpansBufferSize max size must be greater than 0, current: 10000
--- FAIL: TestTracesAgentIntegration (0.00s)
        Location:       transport_test.go:63
    Error:      Expected nil, but got: &url.Error{Op:"Post", URL:"http://localhost:8126/v0.3/traces", Err:(*net.OpError)(0xc420073220)}
        Location:       transport_test.go:65
    Error:      Not equal: 200 (expected)
                    != 0 (actual)
        Location:       transport_test.go:63
    Error:      Expected nil, but got: &url.Error{Op:"Post", URL:"http://localhost:8126/v0.3/traces", Err:(*net.OpError)(0xc420073310)}
        Location:       transport_test.go:65
    Error:      Not equal: 200 (expected)
                    != 0 (actual)
        Location:       transport_test.go:63
    Error:      Expected nil, but got: &url.Error{Op:"Post", URL:"http://localhost:8126/v0.3/traces", Err:(*net.OpError)(0xc420073400)}
        Location:       transport_test.go:65
    Error:      Not equal: 200 (expected)
                    != 0 (actual)
        Location:       transport_test.go:63
    Error:      Expected nil, but got: &url.Error{Op:"Post", URL:"http://localhost:8126/v0.3/traces", Err:(*net.OpError)(0xc42018c370)}
        Location:       transport_test.go:65
    Error:      Not equal: 200 (expected)
                    != 0 (actual)
--- FAIL: TestAPIDowngrade (0.00s)
        Location:       transport_test.go:77
    Error:      Expected nil, but got: &url.Error{Op:"Post", URL:"http://localhost:8126/v0.0/traces", Err:(*net.OpError)(0xc420088460)}
        Location:       transport_test.go:79
    Error:      Not equal: 200 (expected)
                    != 0 (actual)
--- FAIL: TestEncoderDowngrade (0.00s)
        Location:       transport_test.go:90
    Error:      Expected nil, but got: &url.Error{Op:"Post", URL:"http://localhost:8126/v0.2/traces", Err:(*net.OpError)(0xc420088550)}
        Location:       transport_test.go:92
    Error:      Not equal: 200 (expected)
                    != 0 (actual)
--- FAIL: TestTransportServices (0.00s)
        Location:       transport_test.go:101
    Error:      Expected nil, but got: &url.Error{Op:"Post", URL:"http://localhost:8126/v0.3/services", Err:(*net.OpError)(0xc4200887d0)}
        Location:       transport_test.go:103
    Error:      Not equal: 200 (expected)
                    != 0 (actual)
--- FAIL: TestTransportServicesDowngrade_0_0 (0.00s)
        Location:       transport_test.go:113
    Error:      Expected nil, but got: &url.Error{Op:"Post", URL:"http://localhost:8126/v0.0/services", Err:(*net.OpError)(0xc420089180)}
        Location:       transport_test.go:115
    Error:      Not equal: 200 (expected)
                    != 0 (actual)
--- FAIL: TestTransportServicesDowngrade_0_2 (0.00s)
        Location:       transport_test.go:125
    Error:      Expected nil, but got: &url.Error{Op:"Post", URL:"http://localhost:8126/v0.2/services", Err:(*net.OpError)(0xc420073770)}
        Location:       transport_test.go:127
    Error:      Not equal: 200 (expected)
                    != 0 (actual)
FAIL
exit status 1
FAIL    _/home/vagrant/src/dd-trace-go/tracer   0.065s

Can you write some docs to get my development environment set up?

opentracing: config and DefaultTracer

I'm trying to use the opentracing "bridge" + integrations (like gorilla/mux).
I'm initializing my code with the example from the front page https://github.com/DataDog/dd-trace-go .
In my configuration, I set some GlobalTags: env and version (version of my app).

It works properly if I start my own span with the official opentracing lib.
I see my tags in my spans.

However, if I use the gorilla/mux integration, it doesn't work as expected.
I don't see the tags/meta or the service name in the spans created by gorilla/mux.
(I create the router with func NewRouter() *Router.)

I think that the issue is somewhere here

func NewTracer(config *Configuration) (ot.Tracer, io.Closer, error) {
	if config.ServiceName == "" {
		// abort initialization if a `ServiceName` is not defined
		return nil, nil, errors.New("A Datadog Tracer requires a valid `ServiceName` set")
	}
	if config.Enabled == false {
		// return a no-op implementation so Datadog provides the minimum overhead
		return &ot.NoopTracer{}, &noopCloser{}, nil
	}
	// configure a Datadog Tracer
	transport := ddtrace.NewTransport(config.AgentHostname, config.AgentPort)
	tracer := &Tracer{
		impl:   ddtrace.NewTracerTransport(transport),
		config: config,
	}
	tracer.impl.SetDebugLogging(config.Debug)
	tracer.impl.SetSampleRate(config.SampleRate)
	// set the new Datadog Tracer as a `DefaultTracer` so it can be
	// used in integrations. NOTE: this is a temporary implementation
	// that can be removed once all integrations have been migrated
	// to the OpenTracing API.
	ddtrace.DefaultTracer = tracer.impl
	return tracer, tracer, nil
}

ddtrace.DefaultTracer is initialized, but the meta/tags and service name are not copied.

For now I will fix this on my side and initialize DefaultTracer with my own config.

contrib/gorilla/context: incompatible with context.Context

Gorilla's context is a map keyed off request references, meaning that it breaks rather catastrophically when mixing with the more recent request.Context() stdlib approach.

Taken from the gorilla docs:

Note: gorilla/context, having been born well before context.Context existed, does not play well with the shallow copying of the request that http.Request.WithContext (added to net/http Go 1.7 onwards) performs. You should either use just gorilla/context, or moving forward, the new http.Request.Context().

In light of this, it's probably best to consider Gorilla deprecated in favour of the standard library context and routers that make use of it.

If you want to keep a gorilla implementation around, muxtrace.SetRequestSpan and muxtrace.GetRequestSpan should be updated to use Gorilla's context.

configuration environment variables

It might be nice to allow environment variables to configure the tracing client. I think we need the following, and it should be consistent across all clients (with the implied defaults):

DATADOG_TRACE_TARGET=localhost:7777
DATADOG_TRACE_ENABLED=true
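
A minimal sketch of how a client might honor the proposed variables, with the implied defaults (the variable names mirror this issue, not a shipped API):

package example

import (
	"os"
	"strconv"
)

func configFromEnv() (target string, enabled bool) {
	target, enabled = "localhost:7777", true
	if v := os.Getenv("DATADOG_TRACE_TARGET"); v != "" {
		target = v
	}
	if v := os.Getenv("DATADOG_TRACE_ENABLED"); v != "" {
		if b, err := strconv.ParseBool(v); err == nil {
			enabled = b
		}
	}
	return target, enabled
}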

Number of dependencies

I started updating the vendored copy of dd-trace-go today and noticed that the number of dependencies became huge. It was not like this just a few months ago.

2017/08/16 16:05:34 Fetching: github.com/DataDog/dd-trace-go/tracer
2017/08/16 16:05:35 · Fetching recursive dependency: github.com/gocql/gocql
2017/08/16 16:05:36 ·· Fetching recursive dependency: github.com/golang/snappy
2017/08/16 16:05:37 ·· Fetching recursive dependency: gopkg.in/inf.v0
2017/08/16 16:05:39 ·· Fetching recursive dependency: github.com/hailocab/go-hostpool
2017/08/16 16:05:40 · Fetching recursive dependency: github.com/DataDog/dd-trace-go/vendor/github.com/ugorji/go/codec
2017/08/16 16:05:40 · Fetching recursive dependency: github.com/gorilla/mux
2017/08/16 16:05:41 ·· Fetching recursive dependency: github.com/gorilla/context
2017/08/16 16:05:42 · Skipping (existing): github.com/garyburd/redigo/redis
2017/08/16 16:05:42 · Fetching recursive dependency: github.com/go-redis/redis
2017/08/16 16:05:43 · Fetching recursive dependency: github.com/stretchr/testify/assert
2017/08/16 16:05:44 ·· Fetching recursive dependency: github.com/stretchr/testify/vendor/github.com/pmezard/go-difflib/difflib
2017/08/16 16:05:44 ·· Fetching recursive dependency: github.com/stretchr/testify/vendor/github.com/davecgh/go-spew/spew
2017/08/16 16:05:44 · Fetching recursive dependency: github.com/cihub/seelog
2017/08/16 16:05:45 · Fetching recursive dependency: golang.org/x/net/context
2017/08/16 16:05:47 · Fetching recursive dependency: github.com/gin-gonic/gin
2017/08/16 16:05:48 ·· Fetching recursive dependency: github.com/ugorji/go/codec
2017/08/16 16:05:49 ·· Fetching recursive dependency: github.com/thinkerou/favicon
2017/08/16 16:05:49 ·· Fetching recursive dependency: github.com/json-iterator/go
2017/08/16 16:05:51 ·· Fetching recursive dependency: github.com/dustin/go-broadcast
2017/08/16 16:05:52 ·· Fetching recursive dependency: github.com/mattn/go-isatty
2017/08/16 16:05:53 ··· Fetching recursive dependency: golang.org/x/sys/unix
2017/08/16 16:05:55 ·· Fetching recursive dependency: github.com/gin-gonic/autotls
2017/08/16 16:05:55 ··· Fetching recursive dependency: golang.org/x/crypto/acme/autocert
2017/08/16 16:05:57 ···· Fetching recursive dependency: golang.org/x/crypto/acme
2017/08/16 16:05:57 ·· Fetching recursive dependency: github.com/manucorporat/stats
2017/08/16 16:05:58 ·· Fetching recursive dependency: gopkg.in/yaml.v2
2017/08/16 16:06:02 ·· Fetching recursive dependency: gopkg.in/go-playground/validator.v8
2017/08/16 16:06:05 ·· Fetching recursive dependency: github.com/golang/protobuf/proto
2017/08/16 16:06:08 ··· Fetching recursive dependency: github.com/golang/protobuf/ptypes/any
2017/08/16 16:06:08 ·· Fetching recursive dependency: github.com/gin-contrib/sse
2017/08/16 16:06:09 · Fetching recursive dependency: github.com/jmoiron/sqlx
2017/08/16 16:06:10 · Fetching recursive dependency: google.golang.org/grpc/metadata
2017/08/16 16:06:12 · Fetching recursive dependency: golang.org/x/sys/windows
2017/08/16 16:06:12 · Fetching recursive dependency: google.golang.org/grpc
2017/08/16 16:06:12 ·· Fetching recursive dependency: golang.org/x/net/trace
2017/08/16 16:06:12 ··· Fetching recursive dependency: golang.org/x/net/internal/timeseries
2017/08/16 16:06:12 ·· Fetching recursive dependency: golang.org/x/oauth2
2017/08/16 16:06:13 ··· Fetching recursive dependency: cloud.google.com/go/compute/metadata
2017/08/16 16:06:17 ··· Fetching recursive dependency: google.golang.org/appengine/urlfetch
2017/08/16 16:06:19 ···· Fetching recursive dependency: google.golang.org/appengine/internal/urlfetch
2017/08/16 16:06:19 ···· Fetching recursive dependency: google.golang.org/appengine/internal
2017/08/16 16:06:19 ··· Fetching recursive dependency: google.golang.org/appengine
2017/08/16 16:06:19 ·· Fetching recursive dependency: github.com/golang/protobuf/ptypes
2017/08/16 16:06:19 ·· Fetching recursive dependency: github.com/golang/glog
2017/08/16 16:06:20 ·· Fetching recursive dependency: golang.org/x/net/http2/hpack
2017/08/16 16:06:20 ·· Fetching recursive dependency: github.com/golang/protobuf/protoc-gen-go/descriptor
2017/08/16 16:06:20 ·· Fetching recursive dependency: github.com/golang/mock/gomock
2017/08/16 16:06:21 ·· Fetching recursive dependency: golang.org/x/net/http2
2017/08/16 16:06:21 ··· Fetching recursive dependency: golang.org/x/crypto/ssh/terminal
2017/08/16 16:06:21 ··· Fetching recursive dependency: go4.org/syncutil/singleflight
2017/08/16 16:06:23 ··· Fetching recursive dependency: google.golang.org/api/compute/v1
2017/08/16 16:06:26 ···· Fetching recursive dependency: google.golang.org/api/gensupport
2017/08/16 16:06:26 ····· Fetching recursive dependency: google.golang.org/api/googleapi
2017/08/16 16:06:26 ··· Fetching recursive dependency: golang.org/x/net/idna
2017/08/16 16:06:26 ···· Fetching recursive dependency: golang.org/x/text/secure/bidirule
2017/08/16 16:06:29 ····· Fetching recursive dependency: golang.org/x/text/unicode/bidi
2017/08/16 16:06:29 ······ Fetching recursive dependency: golang.org/x/text/unicode/rangetable
2017/08/16 16:06:29 ······· Fetching recursive dependency: golang.org/x/text/internal/gen
2017/08/16 16:06:29 ········ Fetching recursive dependency: golang.org/x/text/unicode/cldr
2017/08/16 16:06:29 ······· Fetching recursive dependency: golang.org/x/text/internal/ucd
2017/08/16 16:06:29 ······ Fetching recursive dependency: golang.org/x/text/internal/triegen
2017/08/16 16:06:29 ····· Fetching recursive dependency: golang.org/x/text/transform
2017/08/16 16:06:29 ···· Fetching recursive dependency: golang.org/x/text/unicode/norm
2017/08/16 16:06:29 ··· Fetching recursive dependency: golang.org/x/net/lex/httplex
2017/08/16 16:06:29 ·· Fetching recursive dependency: google.golang.org/genproto/googleapis/rpc/status

We closely monitor all dependencies in our project and review changes in every vendored library. This update more than doubles the number of libraries we depend on.

I understand that it might not be a big issue for other projects but I am not sure we can continue using DataDog tracing in 1Password :(

Name, service, and resource?

I am having a hard time trying to understand the logic behind SetServiceInfo and the difference between the name, service, and resource.

I re-read the source code several times and I still end up just trying different variations of names in NewRootSpan to get the result I want.

Is there any documentation that explains the difference between them? It seems that it would be much simpler to just have a name and type for each span and use meta for everything else.
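
For illustration, the three arguments map roughly like this (a sketch reusing the NewRootSpan signature quoted in other issues here; t is assumed to be a *tracer.Tracer):

// name:     the operation being measured, e.g. "http.request" or "sql.query"
// service:  the application or component emitting the span, e.g. "user-api"
// resource: the specific thing being worked on, e.g. an endpoint or query
span := t.NewRootSpan("http.request", "user-api", "GET /users/:id")
defer span.Finish()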

math/rand needs to be seeded

math/rand needs to be seeded. If it's not, it will seed with 1 every time.

Because of this, trace and span id generation is deterministic across restarts (and across different applications); this really confuses the DataDog dashboard ;)

There should be some documentation about seeding random before use. Or even better, the tracer should hold its own random source that it ensures is seeded, or use crypto/rand.
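
A minimal sketch of the suggested direction: a dedicated, seeded source owned by the tracer rather than the package-level default (a real implementation would also need to be concurrency-safe, or use crypto/rand):

package example

import (
	"math/rand"
	"time"
)

// idRand is seeded once at startup so generated IDs differ across restarts.
var idRand = rand.New(rand.NewSource(time.Now().UnixNano()))

func newSpanID() uint64 {
	return uint64(idRand.Int63())
}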

opentracing: no way to set Error

We can set ErrorMsg, ErrorType, ErrorStack, but we can't set the Error boolean to mark the trace as an error.

A possible approach would be for it to be set automatically if any of those 3 tags is set.

contrib/sqltraced: compatibility with gorm

Hello. I am trying to incorporate dd-trace-go for tracing our postgres queries. We use GORM which sets up a new sql.DB like this:

import (
    "github.com/jinzhu/gorm"
    _ "github.com/jinzhu/gorm/dialects/postgres"
)

func main() {
  db, err := gorm.Open("postgres", "host=myhost user=gorm dbname=gorm sslmode=disable password=mypassword")
  defer db.Close()
}

// use db for all queries, db.Query etc

This seems to be incompatible with using the sqltraced calls, as in the following code:

// The first argument is a reference to the driver to trace.
// The second argument is the dataSourceName.
// The third argument is used to specify the name of the service under which traces will appear in the Datadog app.
// The last argument allows you to specify a custom tracer to use for tracing.
db, err := sqltraced.OpenTraced(&pq.Driver{}, "postgres://pqgotest:password@localhost/pqgotest?sslmode=disable", "web-backend")
if err != nil {
    log.Fatal(err)
}

// Use the database/sql API as usual and see traces appear in the Datadog app.
rows, err := db.Query("SELECT name FROM users WHERE age=?", 27)
if err != nil {
    log.Fatal(err)
}
defer rows.Close()

I guess I need to pick one, as I can't use both db pointers to query my actual db? Hoping my issue is clear and that there is a solution.
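
For reference, later v1 contribs added a GORM integration built on the traced database/sql driver; a rough sketch (exact package paths and signatures may differ by version):

package example

import (
	"github.com/lib/pq"

	sqltrace "gopkg.in/DataDog/dd-trace-go.v1/contrib/database/sql"
	gormtrace "gopkg.in/DataDog/dd-trace-go.v1/contrib/jinzhu/gorm"
)

func open() {
	// Register the traced postgres driver, then open GORM through it.
	sqltrace.Register("postgres", &pq.Driver{}, sqltrace.WithServiceName("web-backend"))
	db, err := gormtrace.Open("postgres", "host=myhost user=gorm dbname=gorm sslmode=disable")
	if err != nil {
		panic(err)
	}
	defer db.Close()
	// Use db for all queries; spans are emitted for each statement.
}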

opentracing/dd-tracer: no way to set resource as different than name for a rootspan

I'm using opentracing/dd for tracing throughout my gRPC application, and on each new call I start a new span from the global tracer (initialized to be Datadog's). However, since I initialize the new span for this call with the method name, say grpc_Status, the root span and resource are thus set to that. So now when I go to my dashboard, I can't see the different resources listed; I only see the one resource with the root span name that happened to be chosen.

https://github.com/DataDog/dd-trace-go/blob/master/opentracing/tracer.go#L62
Here the root span name == root span resource name, which doesn't allow me to see all my different 'resources' in the DataDog dashboard. For now, the way I've gotten around this is: https://github.com/processout/dd-trace-go/pull/1/files

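For what it's worth, the Datadog OpenTracing bridge lets the resource be set independently of the operation name through a span tag; a minimal sketch (the tag key "resource.name" is the Datadog-specific convention):

package example

import (
	"context"

	opentracing "github.com/opentracing/opentracing-go"
)

func handle(ctx context.Context) {
	span, ctx := opentracing.StartSpanFromContext(ctx, "grpc.server")
	// The Datadog bridge maps this tag onto the span's resource.
	span.SetTag("resource.name", "/helloworld.Greeter/SayHello")
	defer span.Finish()
	_ = ctx
}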

SpanFromContextDefault causes panics

After the fix for #46 was applied, if SpanFromContextDefault is called when there is no existing span, a new one is created with a zero valued Transport, including a nil randGen.

Using this span previously would allow you to record data, but it wouldn't go anywhere. Now, trying to use this span's tracer to make a new child span will cause a panic when it tries to generate a new span id for the child.

Opentracing propagator not compatible with legacy propagation

In the opentracing code, https://github.com/DataDog/dd-trace-go/blob/master/opentracing/propagators.go#L34 we are writing a base16 version of the trace-id and parent-id, whereas in the non-opentracing libraries we are expecting a base10 version of the trace-id: https://github.com/DataDog/dd-trace-rb/blob/6534aaf725bd17df0363340595ebf3fc741e673e/lib/ddtrace/propagation/distributed_headers.rb#L19

This means we can't get distributed traces between opentracing (golang) services and legacy (ruby) services.
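
A small illustration of the mismatch, serializing the same ID both ways:

package example

import "strconv"

func encodings(traceID uint64) (hex, dec string) {
	hex = strconv.FormatUint(traceID, 16) // what the opentracing propagator writes
	dec = strconv.FormatUint(traceID, 10) // what the legacy (e.g. Ruby) propagation expects
	return hex, dec
}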

opentracing implementation of tracer doesn't set the tracer on span creation

func (t *Tracer) startSpanWithOptions(operationName string, options ot.StartSpanOptions) ot.Span {
	// ...
	otSpan := &Span{
		Span: span,
		context: SpanContext{
			traceID:  span.TraceID,
			spanID:   span.SpanID,
			parentID: span.ParentID,
			sampled:  span.Sampled,
		},
	}
	// ...
}

We need to set the tracer in the struct initialization above. A simple fix:

otSpan := &Span{
	Span: span,
	context: SpanContext{
		traceID:  span.TraceID,
		spanID:   span.SpanID,
		parentID: span.ParentID,
		sampled:  span.Sampled,
	},
	tracer: t,
}
