opentracing-contrib / go-stdlib
OpenTracing instrumentation for packages in the Go stdlib
License: Apache License 2.0
Currently, instrumentation is only provided for net/http. Instrumentation for net/rpc and database/sql also seems worth doing.
I am a master's student who wants to apply for GSoC 2017. I'd like to implement the instrumentation for net/rpc or database/sql as my GSoC project.
cc @pritianka
The last fixes (2b2d270#diff-3fde75a8887529bd00893a052634c273) require the handler h(sct.wrappedResponseWriter(), r) to call WriteHeader. If WriteHeader is not called, the defer marks the span as an error.
I use the echo HTTP server and don't call WriteHeader in my handlers; the echo server makes this call on its own. As a result, all my spans in this middleware are marked as errors.
github.com/gorilla/mux is a powerful, heavily used URL router and dispatcher for Go, so I propose integrating mux with go-stdlib.
However, gorilla/mux manages a private context, github.com/gorilla/context, to store additional data such as the variables extracted from paths, so we should also forward that context.
I tried to introduce HTTP request tracing in one of our applications and stumbled over a nil pointer dereference that was triggered by a test case.
The nil pointer dereference can be reproduced with the following program, which follows the go-stdlib Godoc documentation:
package main

import (
	"context"
	"net/http"

	"github.com/opentracing-contrib/go-stdlib/nethttp"
	"github.com/opentracing/opentracing-go"
)

func main() {
	ctx := context.Background()
	req, err := http.NewRequest("POST", "http://localhost:8080/", nil)
	if err != nil {
		panic(err.Error())
	}
	req = req.WithContext(ctx)
	req, ht := nethttp.TraceRequest(opentracing.GlobalTracer(), req)
	defer ht.Finish()
	_, err = http.DefaultClient.Do(req)
	if err != nil {
		panic(err.Error())
	}
}
I expect http.DefaultClient.Do() to either succeed or return an error (because, e.g., nothing is listening on localhost:8080).
Instead it triggers the following panic in go-stdlib:
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x70 pc=0x69dd06]
goroutine 1 [running]:
sisu.sh/go/vendor/github.com/opentracing/opentracing-go/ext.stringTagName.Set(...)
/home/fho/git/workspace/src/sisu.sh/go/vendor/github.com/opentracing/opentracing-go/ext/tags.go:170
sisu.sh/go/vendor/github.com/opentracing-contrib/go-stdlib/nethttp.(*Tracer).getConn(0xc0000b4380, 0xc0000ae292, 0xe)
/home/fho/git/workspace/src/sisu.sh/go/vendor/github.com/opentracing-contrib/go-stdlib/nethttp/client.go:257 +0x106
net/http.(*Transport).getConn(0x9c2e20, 0xc00009cdb0, 0x0, 0x744464, 0x4, 0xc0000ae292, 0xe, 0x0, 0x0, 0x0, ...)
/usr/lib/go/src/net/http/transport.go:955 +0xf4
net/http.(*Transport).roundTrip(0x9c2e20, 0xc00010e200, 0x59bfca, 0x74446b, 0xc0000dbb78)
/usr/lib/go/src/net/http/transport.go:467 +0x6ef
net/http.(*Transport).RoundTrip(0x9c2e20, 0xc00010e200, 0x9c2e20, 0x0, 0x0)
/usr/lib/go/src/net/http/roundtrip.go:17 +0x35
net/http.send(0xc00010e200, 0x7a6c60, 0x9c2e20, 0x0, 0x0, 0x0, 0xc0000b0038, 0x8, 0x1, 0x0)
/usr/lib/go/src/net/http/client.go:250 +0x461
net/http.(*Client).send(0x9c7fc0, 0xc00010e200, 0x0, 0x0, 0x0, 0xc0000b0038, 0x0, 0x1, 0x30)
/usr/lib/go/src/net/http/client.go:174 +0xfb
net/http.(*Client).do(0x9c7fc0, 0xc00010e200, 0x0, 0x0, 0x0)
/usr/lib/go/src/net/http/client.go:641 +0x279
net/http.(*Client).Do(...)
/usr/lib/go/src/net/http/client.go:509
main.main()
/home/fho/git/workspace/src/sisu.sh/go/code/testtracing.go:23 +0x181
exit status 2
The panic happens because h.sp is nil when it's passed here: https://github.com/opentracing-contrib/go-stdlib/blob/master/nethttp/client.go#L255.
I didn't initialize opentracing.GlobalTracer() in my program, so it uses the NoopTracer. I expect the operations to succeed without a panic nonetheless.
I'm using:
OpenTracing conventions say that sampling.priority > 0 should capture the trace. However, when using Jaeger, any tags set before this one are ignored (jaegertracing/jaeger-client-go#391).
I think this could be handled via the spanObserver in nethttp.Middleware, if it were called before the method, URL, etc. are set.
Some URLs may contain credentials, e.g.
http.url: https://Aladdin:[email protected]/index.html
Currently the client code seems to log these verbatim.
Would it make sense to have a way to redact the credentials, potentially by providing a way to call a user-supplied redact function?
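As a hedged illustration of what such a user-supplied function could default to, the standard library's url.Redacted (Go 1.13+) already does this kind of masking. redactURL here is a hypothetical helper, not part of this library:

```go
package main

import (
	"fmt"
	"net/url"
)

// redactURL strips any user password from a URL string before it is
// attached to a span tag such as http.url. This is a sketch of a
// possible default; the library does not currently expose such a hook.
func redactURL(raw string) string {
	u, err := url.Parse(raw)
	if err != nil {
		return raw // leave unparseable URLs untouched
	}
	// Redacted replaces the password, if any, with "xxxxx".
	return u.Redacted()
}

func main() {
	fmt.Println(redactURL("https://user:[email protected]/index.html"))
	// prints https://user:[email protected]/index.html
}
```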
When building a request and calling TraceRequest(), two spans are recorded: a root span and a child span. Why do we use two spans to trace one request?
Most OpenTracing projects are moving to the Apache 2.0 license.
@ianwoolf @acehko @tomwilkie @pquerna @dominikh - Any objections?
It looks like nethttp.Transport.RoundTrip uses the response of the downstream RoundTripper even if an error is returned. I took a look at the docs and don't see anything that says resp should always be valid, so I'm thinking this is a bug in this library?
2018/08/30 17:47:19 http: panic serving 10.52.7.38:43394: runtime error: invalid memory address or nil pointer dereference
goroutine 78445 [running]:
net/http.(*conn).serve.func1(0xc420be4140)
/usr/local/go/src/net/http/server.go:1726 +0xd0
panic(0xa35d80, 0x1070fd0)
/usr/local/go/src/runtime/panic.go:502 +0x229
github.com/weaveworks/cortex/vendor/github.com/opentracing-contrib/go-stdlib/nethttp.(*Transport).RoundTrip(0xc4201dcfc0, 0xc420895000, 0xc42047e740, 0x0, 0x0)
/go/src/github.com/weaveworks/cortex/vendor/github.com/opentracing-contrib/go-stdlib/nethttp/client.go:128 +0x26b
github.com/weaveworks/cortex/pkg/querier/frontend.(*Frontend).ServeHTTP(0xc4201dcfa0, 0xb6c760, 0xc420525620, 0xc420895000)
/go/src/github.com/weaveworks/cortex/pkg/querier/frontend/frontend.go:106 +0xc4
github.com/weaveworks/cortex/vendor/github.com/weaveworks/common/middleware.glob..func1.1(0xb6c760, 0xc420525620, 0xc420894e00)
/go/src/github.com/weaveworks/cortex/vendor/github.com/weaveworks/common/middleware/http_auth.go:17 +0x107
net/http.HandlerFunc.ServeHTTP(0xc4202b0860, 0xb6c760, 0xc420525620, 0xc420894e00)
/usr/local/go/src/net/http/server.go:1947 +0x44
github.com/weaveworks/cortex/vendor/github.com/gorilla/mux.(*Router).ServeHTTP(0xc4201dcd20, 0xb6c760, 0xc420525620, 0xc420894e00)
/go/src/github.com/weaveworks/cortex/vendor/github.com/gorilla/mux/mux.go:114 +0xdc
github.com/weaveworks/cortex/vendor/github.com/weaveworks/common/middleware.Instrument.Wrap.func1(0xb6c720, 0xc42047e700, 0xc420894c00)
/go/src/github.com/weaveworks/cortex/vendor/github.com/weaveworks/common/middleware/instrument.go:49 +0x175
net/http.HandlerFunc.ServeHTTP(0xc4202de420, 0xb6c720, 0xc42047e700, 0xc420894c00)
/usr/local/go/src/net/http/server.go:1947 +0x44
github.com/weaveworks/cortex/vendor/github.com/weaveworks/common/middleware.Log.Wrap.func1(0xb6bfe0, 0xc420525600, 0xc420894c00)
/go/src/github.com/weaveworks/cortex/vendor/github.com/weaveworks/common/middleware/logging.go:41 +0x1af
net/http.HandlerFunc.ServeHTTP(0xc4202de450, 0xb6bfe0, 0xc420525600, 0xc420894c00)
/usr/local/go/src/net/http/server.go:1947 +0x44
github.com/weaveworks/cortex/vendor/github.com/opentracing-contrib/go-stdlib/nethttp.Middleware.func2(0xb6cee0, 0xc42022a380, 0xc420894b00)
/go/src/github.com/weaveworks/cortex/vendor/github.com/opentracing-contrib/go-stdlib/nethttp/server.go:74 +0x3f3
net/http.HandlerFunc.ServeHTTP(0xc4202de480, 0xb6cee0, 0xc42022a380, 0xc420894b00)
/usr/local/go/src/net/http/server.go:1947 +0x44
github.com/weaveworks/cortex/vendor/github.com/weaveworks/common/middleware.Tracer.Wrap.func1(0xb6cee0, 0xc42022a380, 0xc420894b00)
/go/src/github.com/weaveworks/cortex/vendor/github.com/weaveworks/common/middleware/http_tracing.go:28 +0x8e
net/http.HandlerFunc.ServeHTTP(0xc4202de4b0, 0xb6cee0, 0xc42022a380, 0xc420894b00)
/usr/local/go/src/net/http/server.go:1947 +0x44
net/http.serverHandler.ServeHTTP(0xc4201c1110, 0xb6cee0, 0xc42022a380, 0xc420894b00)
/usr/local/go/src/net/http/server.go:2694 +0xbc
net/http.(*conn).serve(0xc420be4140, 0xb6d660, 0xc4206dc780)
/usr/local/go/src/net/http/server.go:1830 +0x651
created by net/http.(*Server).Serve
/usr/local/go/src/net/http/server.go:2795 +0x27b
I was experimenting with the client nethttp package, and I've been seeing what seems to be doubly nested spans for each client call. This adds noise to the traces when they are rendered, making them a bit harder to read, and diverges from how some other instrumentation libraries I've experimented with behave (I am currently using Jaeger for capturing and rendering the traces).
Looking at the code, it seems that this might be because there is a root and an sp span, where root nests sp. Is this wrapping intentional, or is there a way to turn it off? Or perhaps, is there some benefit it provides that I'm missing?
When I used nethttp, it created two spans; I don't think that makes sense.
req, ht := nethttp.TraceRequest(c.Tracer, req, nethttp.OperationName("HTTP GET: "+endpoint))
defer ht.Finish()
Then I looked at the source code:
func (h *Tracer) start(req *http.Request) opentracing.Span {
	if h.root == nil {
		parent := opentracing.SpanFromContext(req.Context())
		var spanctx opentracing.SpanContext
		if parent != nil {
			spanctx = parent.Context()
		}
		operationName := h.opts.operationName
		if operationName == "" {
			operationName = "HTTP Client"
		}
		root := h.tr.StartSpan(operationName, opentracing.ChildOf(spanctx))
		h.root = root
	}

	ctx := h.root.Context()
	h.sp = h.tr.StartSpan("HTTP "+req.Method, opentracing.ChildOf(ctx))
	ext.SpanKindRPCClient.Set(h.sp)

	componentName := h.opts.componentName
	if componentName == "" {
		componentName = defaultComponentName
	}
	ext.Component.Set(h.sp, componentName)

	return h.sp
}
I don't understand why the span h.sp is created.
Existing HTTP handler code that needs to get at the underlying http.Flusher (or http.Hijacker, etc.) interfaces from the http.ResponseWriter breaks when the Middleware is installed.
One complex approach to address this would be to do something similar to https://github.com/DataDog/dd-trace-go/blob/v1/contrib/internal/httputil/trace_gen.go#L18.
A simpler approach would be to add something like:
+func (w *statusCodeTracker) InnerResponseWriter() http.ResponseWriter {
+ return w.ResponseWriter
+}
+
+type ResponseWriterWrapper interface {
+ InnerResponseWriter() http.ResponseWriter
+}
The existing code can then be modified to first try to cast to a nethttp.ResponseWriterWrapper, unwrap it if that works, and then get the underlying http.Flusher etc. from that. I'll submit a patch using this approach and we can discuss from there.
Currently both the server and the client wrappers use "HTTP "+r.Method as the operation name, which in practice is very bad. Considering that these instrumentation functions must be called explicitly, I think they should allow passing the operation name, e.g. via the functional options pattern.
This is more a query than an issue.
It would be useful to have a gin (https://github.com/gin-gonic/gin) version of this middleware.
It's not clear that the approach used for gorilla (https://github.com/opentracing-contrib/go-gorilla/blob/master/gorilla/server.go) would work for gin though. This is because the middleware signatures are a bit different.
Would it be ok to reuse/tweak the code here to produce a separate gin version?
The gin middleware specific part would look as follows:
// Middleware is a gin-native version of the equivalent middleware in:
// https://github.com/opentracing-contrib/go-stdlib/
func Middleware(tr opentracing.Tracer, options ...MWOption) gin.HandlerFunc {
	opts := mwOptions{
		opNameFunc: func(r *http.Request) string {
			return "HTTP " + r.Method
		},
		spanObserver: func(span opentracing.Span, r *http.Request) {},
	}
	for _, opt := range options {
		opt(&opts)
	}
	return func(c *gin.Context) {
		carrier := opentracing.HTTPHeadersCarrier(c.Request.Header)
		ctx, _ := tr.Extract(opentracing.HTTPHeaders, carrier)
		op := opts.opNameFunc(c.Request)
		sp := tr.StartSpan(op, ext.RPCServerOption(ctx))
		ext.HTTPMethod.Set(sp, c.Request.Method)
		ext.HTTPUrl.Set(sp, c.Request.URL.String())
		opts.spanObserver(sp, c.Request)

		// Set the component name; use "net/http" if the caller does not specify one.
		componentName := opts.componentName
		if componentName == "" {
			componentName = defaultComponentName
		}
		ext.Component.Set(sp, componentName)

		c.Request = c.Request.WithContext(
			opentracing.ContextWithSpan(c.Request.Context(), sp))
		c.Next()

		ext.HTTPStatusCode.Set(sp, uint16(c.Writer.Status()))
		sp.Finish()
	}
}
It would be nice to have support for runtime/trace.
I wrapped a transport as below:
type tracingTransport struct {
	http.RoundTripper
	tracer opentracing.Tracer
}

func NewTracingTransport(rt http.RoundTripper, tracer opentracing.Tracer) http.RoundTripper {
	t := &tracingTransport{
		RoundTripper: rt,
		tracer:       tracer,
	}
	return t
}

func (t *tracingTransport) RoundTrip(req *http.Request) (*http.Response, error) {
	rt := &nethttp.Transport{
		RoundTripper: t.RoundTripper,
	}
	req, ht := nethttp.TraceRequest(t.tracer, req)
	defer ht.Finish()
	return rt.RoundTrip(req)
}
and wrote a test:
ts := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
}))
defer ts.Close()
tracer := mocktracer.New()
transport := client.NewTracingTransport(ts.Client().Transport, tracer)
transport.RoundTrip(httptest.NewRequest("GET", ts.URL, nil))
and it panics:
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x70 pc=0x755224]
goroutine 19 [running]:
testing.tRunner.func1.1(0x7bfb20, 0xb43430)
/home/ali/.asdf/installs/golang/1.14.7/go/src/testing/testing.go:988 +0x30d
testing.tRunner.func1(0xc0000d4480)
/home/ali/.asdf/installs/golang/1.14.7/go/src/testing/testing.go:991 +0x3f9
panic(0x7bfb20, 0xb43430)
/home/ali/.asdf/installs/golang/1.14.7/go/src/runtime/panic.go:969 +0x166
github.com/opentracing/opentracing-go/ext.stringTagName.Set(...)
/home/ali/.asdf/installs/golang/1.14.7/packages/pkg/mod/github.com/opentracing/[email protected]/ext/tags.go:170
github.com/opentracing-contrib/go-stdlib/nethttp.(*Tracer).getConn(0xc0000de8c0, 0xc0000a6440, 0xf)
/home/ali/.asdf/installs/golang/1.14.7/packages/pkg/mod/github.com/opentracing-contrib/[email protected]/nethttp/client.go:269 +0x84
reflect.Value.call(0x79bc80, 0xc0000f8180, 0x193, 0x8292e9, 0x4, 0xc0000b8560, 0x1, 0x1, 0x0, 0x0, ...)
/home/ali/.asdf/installs/golang/1.14.7/go/src/reflect/value.go:460 +0x8ab
reflect.Value.Call(0x79bc80, 0xc0000f8180, 0x193, 0xc0000b8560, 0x1, 0x1, 0x0, 0x0, 0x0)
/home/ali/.asdf/installs/golang/1.14.7/go/src/reflect/value.go:321 +0xb4
net/http/httptrace.(*ClientTrace).compose.func1(0xc0000b8560, 0x1, 0x1, 0xc0000b8560, 0x110, 0x7f11c0)
/home/ali/.asdf/installs/golang/1.14.7/go/src/net/http/httptrace/trace.go:204 +0xa5
reflect.callReflect(0xc00009f1a0, 0xc000107830, 0xc000107818)
/home/ali/.asdf/installs/golang/1.14.7/go/src/reflect/value.go:549 +0x322
reflect.makeFuncStub(0xc0000a6440, 0xf, 0xd0, 0x7f0d40, 0xb51780, 0x7f3b8a8e47d0, 0x0, 0xd0, 0xc0000a1d40, 0x30, ...)
/home/ali/.asdf/installs/golang/1.14.7/go/src/reflect/asm_amd64.s:20 +0x42
net/http.(*Transport).getConn(0xc0000e43c0, 0xc00009fa40, 0x0, 0xc0000a8124, 0x4, 0xc0000a6440, 0xf, 0x0, 0x0, 0x0, ...)
/home/ali/.asdf/installs/golang/1.14.7/go/src/net/http/transport.go:1252 +0xe6
net/http.(*Transport).roundTrip(0xc0000e43c0, 0xc000120200, 0x0, 0x79bfc0, 0xb11282)
/home/ali/.asdf/installs/golang/1.14.7/go/src/net/http/transport.go:552 +0x726
net/http.(*Transport).RoundTrip(0xc0000e43c0, 0xc000120200, 0xc0000b84e0, 0x79bfc0, 0xb11282)
/home/ali/.asdf/installs/golang/1.14.7/go/src/net/http/roundtrip.go:17 +0x35
github.com/opentracing-contrib/go-stdlib/nethttp.(*Transport).RoundTrip(0xc000056eb8, 0xc000120200, 0xc000120100, 0x0, 0x0)
/home/ali/.asdf/installs/golang/1.14.7/packages/pkg/mod/github.com/opentracing-contrib/[email protected]/nethttp/client.go:184 +0x256
Did I do something wrong?
func TraceRequest performs the core OpenTracing instrumentation necessary for proper tracing of microservices and context propagation, but it also always enables httptrace.WithClientTrace, which creates a lot more data and is not always desirable.
I suggest making httptrace.WithClientTrace optional, via the functional options pattern.
https://github.com/opentracing-contrib/go-stdlib/blob/master/nethttp/client.go#L162: this seems to be replacing the previously set full URL with just host:port.
I want to avoid tracing the OPTIONS and HEAD HTTP methods. OPTIONS is mostly handled by CORS (I have an Angular SPA that uses preflight requests), and HEAD is used by monitoring. How can I specify in my code that these methods must not be traced, since they contain no information I need?
The Zipkin implementation requires this tag to be set at span creation:
https://github.com/openzipkin-contrib/zipkin-go-opentracing/blob/master/span.go#L60-L68
Now:
h.sp = h.tr.StartSpan("HTTP "+req.Method, opentracing.ChildOf(ctx))
ext.SpanKindRPCClient.Set(h.sp)
Fix:
h.sp = h.tr.StartSpan("HTTP "+req.Method, opentracing.ChildOf(ctx), ext.SpanKindRPCClient)
I want to set some tags in Middleware for analysis and statistics, but the Middleware function does not support this.
I do it in my own code; I could submit a PR if this could be supported. :)
In the following example:
client := &http.Client{Transport: &nethttp.Transport{}}
req, err := http.NewRequest("GET", "http://google.com", nil)
if err != nil {
	return err
}
req = req.WithContext(ctx) // extend existing trace, if any

req, ht := nethttp.TraceRequest(tracer, req)
defer ht.Finish()

res, err := client.Do(req)
if err != nil {
	return err
}
res.Body.Close()
return nil
nethttp.TraceRequest is an extra step. It is fine for people wanting greater control, but it should default to using GlobalTracer().
In the OpenCensus variant, ochttp (https://github.com/census-instrumentation/opencensus-go/tree/master/plugin/ochttp), things work out of the box: you just use their server middleware and their round tripper, plus add the parent request's headers to the context (via req = req.WithContext(parent.Context())).
Currently, however, without req, ht := nethttp.TraceRequest(tracer, req), the parent span isn't correlated to the child span.
When connecting to external services, sending the tracing context in HTTP headers provides no value, and it should be possible to suppress it. The client-side tracing is still useful, however, so simply not tracing the request at all is not desirable.