
kamon-akka-http's Introduction

Kamon.io

Sources for the Kamon website.

Development

To install the requirements, read the Jekyll Docs.

To start the local server, run:

$ bundle install
$ bundle exec jekyll serve --livereload
  • By default, the local server will be listening on port 4000.
  • bundle exec restricts the Ruby environment to only use gems set in the project's Gemfile.

For updating gem versions, run:

$ bundle update

kamon-akka-http's People

Contributors

cspinetta, cwegrzyn, diebauer, dpsoft, falmarri, fggarcia, ivantopo, jypma, kubukoz, lustefaniak, mladens, petitviolet, pjfanning, ptrlaszlo, rabzu


kamon-akka-http's Issues

Host header replaced in 1.0.0-RC7

Hi guys, I'm testing 1.0.0-RC7, which is overall very good, but I've spotted a small issue with the akka-http module.

If a custom Host header is provided with an HTTP request, it won't be respected. The custom header is used to route the request in the service mesh we use.

The issue is simple to reproduce:

  val request = HttpRequest(uri = "http://localhost:8090/", headers = List[HttpHeader](Host("some.header")))
  Http().singleRequest(request).foreach(println)

With kamon-akka-http-2.5 on the classpath, this will produce

List(Host: localhost:8090, X-B3-TraceId: 1c79f68f94675332, X-B3-Sampled: 0, X-B3-ParentSpanId: , X-B3-SpanId: ab7098e5cf342606, User-Agent: akka-http/10.0.11, Timeout-Access: <function1>)

on the server side.

If kamon-akka-http-2.5 is removed from the classpath, we get the expected result

List(Host: some.header, User-Agent: akka-http/10.0.11, Timeout-Access: <function1>)

with the Host header preserved, as expected.

I've tried to pinpoint the piece of code responsible for the header replacement but could not find it.
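The expected behavior can be sketched without any akka-http dependency. The following is a hypothetical `injectMissing` helper (headers modeled as plain name/value pairs rather than akka-http's `HttpHeader`), showing trace-header injection that leaves an already-present Host header untouched:

```scala
// Hypothetical sketch, not the module's actual code: inject trace headers only
// when the request does not already carry a header with the same name
// (compared case-insensitively), so a user-supplied Host header survives.
object HeaderInjection {
  def injectMissing(
      requestHeaders: List[(String, String)],
      traceHeaders: List[(String, String)]): List[(String, String)] = {
    val existing = requestHeaders.map(_._1.toLowerCase).toSet
    requestHeaders ++ traceHeaders.filterNot { case (name, _) => existing(name.toLowerCase) }
  }
}
```

Under this policy the request above would keep Host: some.header while still gaining the X-B3-* headers.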

2.12 compatible release

We need 0.6.5-compatible releases of these first:

kamon-akka
kamon-log-reporter

I believe the final compatibility matrix would look like this (Scala version - supported Akka/Akka HTTP versions):

2.10 - no support
2.11 - 2.4.x/10.0.x
2.12 - 10.0.x

HTTP/1.0 responses must not have a chunked entity

After enabling Kamon 2.0 on a Play 2.7.3 project, every request fails with a status code 500 and I'm seeing this error in the logs:

java.lang.IllegalArgumentException: requirement failed: HTTP/1.0 responses must not have a chunked entity
at akka.http.scaladsl.model.HttpResponse.<init>(HttpMessage.scala:459)
at akka.http.scaladsl.model.HttpResponse.copy(HttpMessage.scala:491)
at akka.http.scaladsl.model.HttpResponse.withEntity(HttpMessage.scala:477)
at kamon.instrumentation.akka.http.ServerFlowWrapper$$anon$1$$anon$2$$anon$5.onPush(ServerFlowWrapper.scala:131)

To Do before first release

  • Add the "-experimental" suffix to the artifact name.
  • Remove kamon-scala as a dependency.
  • Upgrade the Kamon version.
  • Include a brief explanation in the docs.

Link not working: Server online at http://localhost:8080/

It seems the kamon-akka-http module is being loaded fine, but the portal link (http://localhost:8080) is not working. Is there a way to change the port number? Also, I don't see any modules related to kamon-akka-http-playground being loaded. Do I need a separate jar included? I do see my traces being reported by kamon-log-reporter.

Connected to the target VM, address: '127.0.0.1:50331', transport: 'socket'
21:35:48.169 [main] INFO kamon.Kamon$Instance - Initializing Kamon...
21:35:48.462 [main] INFO kamon.Kamon$Instance - Kamon-autoweave has been successfully loaded.
21:35:48.463 [main] INFO kamon.Kamon$Instance - The AspectJ load time weaving agent is now attached to the JVM (you don't need to use -javaagent).
21:35:48.463 [main] INFO kamon.Kamon$Instance - This offers extra flexibility but obviously any classes loaded before attachment will not be woven.
21:35:48.560 [main] DEBUG kamon.ModuleLoaderExtension - Auto starting the [kamon-log-reporter] module.
[INFO] [07/25/2017 21:35:48.564] [main] [LogReporterExtension(akka://kamon)] Starting the Kamon(LogReporter) extension
21:35:48.594 [main] DEBUG kamon.ModuleLoaderExtension - Auto starting the [kamon-statsd] module.
[INFO] [07/25/2017 21:35:48.598] [main] [StatsDExtension(akka://kamon)] Starting the Kamon(StatsD) extension
21:35:48.618 [main] DEBUG kamon.ModuleLoaderExtension - Auto starting the [kamon-system-metrics] module.
[INFO] [07/25/2017 21:35:48.621] [main] [SystemMetricsExtension(akka://kamon)] Starting the Kamon(SystemMetrics) extension
Jul 25, 2017 9:35:48 PM kamon.sigar.SigarProvisioner provision
INFO: Sigar library provisioned: /Users/shashigireddy/code/engine-poc/native/libsigar-universal64-macosx.dylib
21:35:50.381 [main] INFO kamon.akka.http.AkkaHttpExtension - Starting the Kamon(Akka-Http) extension
Server online at http://localhost:8080/
Press RETURN to stop...
[INFO] [07/25/2017 21:35:58.557] [kamon-akka.actor.default-dispatcher-12] [akka://kamon/user/kamon-log-reporter]

properly handle 4xx status codes

We have this little piece of code in the server-side instrumentation:

          if(status >= 400 && status <= 499) {
            span.setOperationName("not-found")
          }

That is definitely wrong, but I'm not sure what to do about it 😕. The motivation behind it was to prevent requests to unknown resources from generating lots of metrics in Kamon: the operation name is used as a metric tag, so anything left unchecked could lead to a cardinality explosion. This solution is too conservative, though.

@Falmarri brought this up on Gitter, and it is not the first time this has come up in the last few days. @ptrlaszlo recently proposed a PR that I think is somehow related to this, and @n1ko-w1ll also gave me some feedback about this being sort of a problem, so your input on how we could make this handling more robust while still keeping cardinality at bay would be greatly appreciated!

In an ideal integration, all routes recognized by the application would already have a stable name provided by either a name generator or the operationName directive. Maybe the best course of action would be to have a rejection handler that turns everything else into an "unhandled" operation name, plus the ability to automatically add the status code as a metric tag via a config option, as @ptrlaszlo proposed.

How does that sound to you? /cc @dpsoft @mladens @jypma, would like to hear your comments as well! :)
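A minimal sketch of the more conservative policy discussed above, using a hypothetical `operationNameFor` helper: only a 404 (nothing in the routing tree matched) collapses into a fixed "unhandled" name, while other 4xx responses keep the name the route already assigned.

```scala
// Hypothetical sketch of the proposed handling: collapse only unmatched
// requests (404) into a single fixed operation name to cap cardinality, and
// leave every other status with the operation name assigned by the route.
object OperationNames {
  def operationNameFor(currentName: String, statusCode: Int): String =
    if (statusCode == 404) "unhandled" else currentName
}
```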

Dangling meta character '*' near index

After release 1.1.2, which introduced https://github.com/kamon-io/kamon-akka-http/pull/45/files#diff-69e737f09f7562f86ce0ef09ace8f34eR90, I can't use * in a Segment.

I have a matcher like this:

get {
      pathPrefix("respondent" / Segment) { entityId =>

When I do GET /respondent/foo** I end up with this exception:

Error happened when processing request java.util.regex.PatternSyntaxException: Dangling meta character '*' near index 13
(?i)(^|/)foo**($|/)
             ^
 at java.util.regex.Pattern.error(Pattern.java:1957)
 at java.util.regex.Pattern.sequence(Pattern.java:2125)
 at java.util.regex.Pattern.expr(Pattern.java:1998)
 at java.util.regex.Pattern.compile(Pattern.java:1698)
 at java.util.regex.Pattern.<init>(Pattern.java:1351)
 at java.util.regex.Pattern.compile(Pattern.java:1028)
 at java.lang.String.replaceFirst(String.java:2178)
 at kamon.akka.http.instrumentation.ServerRequestInstrumentation.$anonfun$singleMatch$2(ServerRequestInstrumentation.scala:91)
 at scala.collection.LinearSeqOptimized.foldLeft(LinearSeqOptimized.scala:126)
 at scala.collection.LinearSeqOptimized.foldLeft$(LinearSeqOptimized.scala:122)
 at scala.collection.immutable.List.foldLeft(List.scala:89)
 at scala.collection.TraversableOnce.fold(TraversableOnce.scala:215)
 at scala.collection.TraversableOnce.fold$(TraversableOnce.scala:215)
 at scala.collection.AbstractTraversable.fold(Traversable.scala:108)
 at kamon.akka.http.instrumentation.ServerRequestInstrumentation.singleMatch(ServerRequestInstrumentation.scala:89)
 at kamon.akka.http.instrumentation.ServerRequestInstrumentation.$anonfun$aroundCtxComplete$1(ServerRequestInstrumentation.scala:66)
 at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:237)
 at scala.collection.immutable.List.foreach(List.scala:392)
 at scala.collection.TraversableLike.map(TraversableLike.scala:237)
 at scala.collection.TraversableLike.map$(TraversableLike.scala:230)
 at scala.collection.immutable.List.map(List.scala:298)
 at kamon.akka.http.instrumentation.ServerRequestInstrumentation.aroundCtxComplete(ServerRequestInstrumentation.scala:66)
 at akka.http.scaladsl.server.RequestContextImpl.complete(RequestContextImpl.scala:1)
 at akka.http.scaladsl.server.directives.RouteDirectives.$anonfun$complete$1(RouteDirectives.scala:47)
 at akka.http.scaladsl.server.StandardRoute$$anon$1.apply(StandardRoute.scala:19)
 at akka.http.scaladsl.server.StandardRoute$$anon$1.apply(StandardRoute.scala:19)
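The failure happens because the raw path segment is handed to `String.replaceFirst`, which interprets it as a regular expression. A minimal sketch of a fix (a hypothetical `replaceSegment` helper, not the module's actual code) quotes the segment with `Pattern.quote` so characters like `*` are matched literally:

```scala
import java.util.regex.Pattern

// Hypothetical sketch: quote the matched segment before building the regex so
// metacharacters such as '*' in the request path are treated as literal text.
// (The placeholder is assumed not to contain '$', which is special in the
// replacement string.)
object SegmentReplace {
  def replaceSegment(path: String, segment: String, placeholder: String): String =
    path.replaceFirst("(?i)(^|/)" + Pattern.quote(segment) + "($|/)", "$1" + placeholder + "$2")
}
```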

The module does not work

build.sbt

libraryDependencies ++= Seq(
  "kamon-core",
  "kamon-akka",
  "kamon-scala",
  "kamon-jdbc",
  "kamon-influxdb",
  "kamon-system-metrics",
  "kamon-akka-http-experimental"
).map("io.kamon" %% _ % "0.6.2")

application.conf

kamon {
  subscriptions {
    histogram = ["**"]
    min-max-counter = ["**"]
    gauge = ["**"]
    counter = ["**"]
    trace = ["**"]
    trace-segment = ["**"]
    akka-actor = ["**"]
    akka-dispatcher = ["**"]
    akka-router = ["**"]
    system-metric = ["**"]
    http-server = ["**"]
    akka-http-server = ["**"]
  }

  metric {
    tick-interval = 5 seconds
    default-collection-context-buffer-size = 1000

    filters {
      trace.includes = ["**"]

      akka-dispatcher {
        includes = [ "**" ]
        excludes = [ ]
      }

      akka-actor {
        includes = [ "**" ]
        excludes = [ ]
      }

      akka-http {
        includes = [ "**" ]
        excludes = [ ]
      }
    }
  }

  akka-http {
    trace-token-header-name = "X-Trace-Token"
    automatic-trace-token-propagation = true
    name-generator = kamon.akka.http.DefaultNameGenerator

    client {
      instrumentation-level = request-level
    }
  }

  influxdb {
    hostname = "xxx"
    port = 8086
    max-packet-size = 1024
    database = "akka-metrics"
    protocol = "http"
    application-name = "akka-http"
    hostname-override = none
    percentiles = [50.0, 70.5]
  }
}

Logs

root[ERROR] [main] INFO Kamon - Initializing Kamon...
root [INFO] [08/24/2016 13:50:53.262] [main] [InfluxDBExtension(akka://kamon)] Starting the Kamon(InfluxDB) extension
root [INFO] [08/24/2016 13:50:53.404] [main] [SystemMetricsExtension(akka://kamon)] Starting the Kamon(SystemMetrics) extension
root[ERROR] Aug 24, 2016 1:50:54 PM kamon.sigar.SigarProvisioner provision
root[ERROR] INFO: Sigar library provisioned: E:\github\AkkaHttp1\native\sigar-amd64-winnt.dll
root[ERROR] [main] INFO kamon.akka.http.AkkaHttpExtension - Starting the Kamon(Akka-Http) extension
root [INFO] [08/24/2016 13:51:02.345] [ForkJoinPool-3-worker-15] [InfluxDBHttpClient(akka://kamon)] 204 No Content
root [INFO] [08/24/2016 13:51:02.345] [ForkJoinPool-3-worker-15] [InfluxDBHttpClient(akka://kamon)] 204 No Content
root [INFO] [08/24/2016 13:51:02.345] [ForkJoinPool-3-worker-15] [InfluxDBHttpClient(akka://kamon)] 204 No Content
root [INFO] [08/24/2016 13:51:02.345] [ForkJoinPool-3-worker-15] [InfluxDBHttpClient(akka://kamon)] 204 No Content
...

Akka actor and dispatcher metrics work: they are in InfluxDB for sure. But the akka-http-server category and these https://github.com/kamon-io/kamon-akka-http/blob/master/kamon-akka-http/src/main/scala/kamon/akka/http/metrics/AkkaHttpServerMetrics.scala#L28-L29 metrics are absent:

(screenshot of the InfluxDB measurements omitted)

Kamon Operation Name Mapping cannot be configured

I tried to use the new glob replace feature but I cannot make it work. I tried it in the module's tests and the tests fail.

To reproduce:
Apply the example configuration so it's used in tests:

diff --git a/kamon-akka-http/src/test/resources/application.conf b/kamon-akka-http/src/test/resources/application.conf
index 51f62a4..0d796a6 100644
--- a/kamon-akka-http/src/test/resources/application.conf
+++ b/kamon-akka-http/src/test/resources/application.conf
@@ -1,5 +1,42 @@
 kamon {
   trace.sampler = "always"
+
+
+  instrumentation.akka.http {
+    server {
+      tracing {
+        operations {
+
+          # The default operation name to be used when creating Spans to handle the HTTP server requests. In most
+          # cases it is not possible to define an operation name right at the moment of starting the HTTP server Span
+          # and in those cases, this operation name will be initially assigned to the Span. Instrumentation authors
+          # should do their best effort to provide a suitable operation name or make use of the "mappings" facilities.
+          #default = "http.server.request"
+
+          # Provides custom mappings from HTTP paths into operation names. Meant to be used in cases where the bytecode
+          # instrumentation is not able to provide a sensible operation name that is free of high cardinality values.
+          # For example, with the following configuration:
+          #   mappings {
+          #     "/organization/*/user/*/profile" = "/organization/:orgID/user/:userID/profile"
+          #     "/events/*/rsvps" = "EventRSVPs"
+          #   }
+          #
+          # Requests to "/organization/3651/user/39652/profile" and "/organization/22234/user/54543/profile" will have
+          # the same operation name "/organization/:orgID/user/:userID/profile".
+          #
+          # Similarly, requests to "/events/aaa-bb-ccc/rsvps" and "/events/1234/rsvps" will have the same operation
+          # name "EventRSVPs".
+          #
+          # The patterns are expressed as globs and the operation names are free form.
+          #
+          mappings {
+            "/organization/*/user/*/profile" = "/organization/:orgID/user/:userID/profile"
+            "/events/*/rsvps" = "EventRSVPs"
+          }
+        }
+      }
+    }
+  }
 }

The config tests fail with:

[info] ServerFlowWrapperSpec:
[info] the server flow wrapper
[info] - should keep strict entities strict *** FAILED ***
[info]   com.typesafe.config.ConfigException$BadPath: path parameter: Invalid path '/events/*/rsvps': Token not allowed in path expression: '*' (Reserved character '*' is not allowed outside quotes) (you can double-quote this token if you really want it here)
[info]   at com.typesafe.config.impl.PathParser.parsePathExpression(PathParser.java:155)
[info]   at com.typesafe.config.impl.PathParser.parsePathExpression(PathParser.java:74)
[info]   at com.typesafe.config.impl.PathParser.parsePath(PathParser.java:61)
[info]   at com.typesafe.config.impl.Path.newPath(Path.java:230)
[info]   at com.typesafe.config.impl.SimpleConfig.find(SimpleConfig.java:192)
[info]   at com.typesafe.config.impl.SimpleConfig.getString(SimpleConfig.java:250)
[info]   at kamon.package$UtilsOnConfig$.$anonfun$pairs$1(package.scala:97)
[info]   at scala.collection.StrictOptimizedIterableOps.map(StrictOptimizedIterableOps.scala:100)
[info]   at scala.collection.StrictOptimizedIterableOps.map$(StrictOptimizedIterableOps.scala:87)
[info]   at scala.collection.immutable.Set$Set2.map(Set.scala:159)
[info]   ...
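Note that the keys themselves are already quoted in the HOCON file; the exception comes from the reading side, where each key is later resolved as a Config path expression and the `*` hits the path parser. One possible read-side workaround (a hypothetical `readMappings` helper, not the module's actual code) iterates the `ConfigObject` entries directly, so keys never go through the path parser:

```scala
import com.typesafe.config.Config
import scala.jdk.CollectionConverters._

// Hypothetical sketch: read the mappings from the ConfigObject's entry set
// instead of calling getString(key), so keys containing reserved characters
// like '*' are never parsed as path expressions.
object MappingReader {
  def readMappings(config: Config, section: String): Map[String, String] =
    config.getObject(section).entrySet().asScala.map { entry =>
      entry.getKey -> entry.getValue.unwrapped().toString
    }.toMap
}
```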

Ensure low cardinality on the operation tag for span.processing-time

Given that operation names in HTTP server and client spans are used as an operation tag in the span metrics, we should ensure that the default behavior of this module won't result in a gazillion different metrics. The main problem at the moment is that the full request path is used as the operation name, so any variation of the path leads to a new metric tracking that particular operation. Since identifiers in the request path are common, this becomes a problem for most of our users.

There was a bug in the 1.0.0 release, rooted in a failed effort to reduce cardinality, and we should find a proper solution to this problem.

Possible actions I see are:

  • Being extremely conservative in the default operation name generator. If the default name for an operation were fixed (e.g. unnamed-operation) or drawn from a fixed domain (e.g. just the HTTP method name), then by default it wouldn't be possible to have a cardinality explosion, but the metrics wouldn't be of much use until users (1) add a custom name generator that does a better job and/or (2) use the operationName directive in their routing tree to set decent operation names.
  • Instrumenting the Akka HTTP directives to somehow reconstruct the HTTP template and get operation names that are variable-free (e.g. /users/:number/accounts/:number). We already took a look at this idea before and it's possible, but kind of complicated. This is what I would consider the best possible solution for the default behavior, but it requires a non-trivial effort.

NoClassDefFoundError with Kamon and Akka HTTP when using Kanela agent

I've encountered an issue with Akka HTTP and Kamon when using the Kanela agent where any HTTP route will throw a NoClassDefFoundError.

I've created a repository that reproduces the problem: https://github.com/archena/kanela-akka-http-problem. The README in that repository has more detail.

The original issue surfaced when running an application in Docker. Everything works in SBT, so the difference appears to be that the Kanela Java agent is added. It might well be that I'm doing something wrong here, and if so I'd be very grateful if someone can point me in the right direction.

Wrong operation name for routes wrapped with flatMapped directives

I encountered this issue after migrating to Scala 2.13 and Kamon 2.0. The simplest way to reproduce it is to add an authenticateBasic wrapper to the route of TestWebServer.

      pathPrefix("extraction") {
        authenticateBasic("realm", credentials => Option("Okay")) { srt =>
          (post | get) {
            pathPrefix("nested") {
              pathPrefix(IntNumber / "fixed") { num =>
                pathPrefix("anchor" / IntNumber.? / JavaUUID / "fixed") { (number, uuid) =>
                  pathPrefix(LongNumber / HexIntNumber) { (longNum, hex) =>
                    complete("OK")
                  }
                }
              }
            } ~
            pathPrefix("concat") {
              path("fixed" ~ JavaUUID ~ HexIntNumber) { (uuid, num) =>
                complete("OK")
              }
            } ~
            pathPrefix("on-complete" / IntNumber) { _ =>
              onComplete(Future("hello")) { _ =>
                extract(samplingDecision) { decision =>
                  path("more-path") {
                    complete(decision.toString)
                  }
                }
              }
            } ~
            pathPrefix("on-success" / IntNumber) { _ =>
              onSuccess(Future("hello")) { text =>
                pathPrefix("after") {
                  complete(text)
                }
              }
            } ~
            pathPrefix("complete-or-recover-with" / IntNumber) { _ =>
              completeOrRecoverWith(Future("bad".charAt(10).toString)) { failure =>
                pathPrefix("after") {
                  failWith(failure)
                }
              }
            } ~
            pathPrefix("complete-or-recover-with-success" / IntNumber) { _ =>
              completeOrRecoverWith(Future("good")) { failure =>
                pathPrefix("after") {
                  failWith(failure)
                }
              }
            }
          }
        }
      } 

The test results look like this:

[info] FastFutureInstrumentationSpec:
[info] the FastFuture instrumentation
[info]   should keep the Context captured by the Future from which it was created
[info]   - when calling .map/.flatMap/.onComplete and the original Future has not completed yet
[info]   - when calling .map/.flatMap/.onComplete and the original Future has already completed
[info] AkkaHttpServerTracingSpec:
[info] the Akka HTTP server instrumentation
[info] - should create a server Span when receiving requests
[info]   should not include variables in operation name
[info]   - when including nested directives *** FAILED ***
[info]     The code passed to eventually never returned normally. Attempted 567 times over 10.007980149000002 seconds. Last failure message: The Option on which value was invoked was not defined.. (AkkaHttpServerTracingSpec.scala:68)
[info]   - when take a sampling decision when the routing tree hits an onComplete directive *** FAILED ***
[info]     The code passed to eventually never returned normally. Attempted 573 times over 10.016707381999998 seconds. Last failure message: The Option on which value was invoked was not defined.. (AkkaHttpServerTracingSpec.scala:80)
[info]   - when take a sampling decision when the routing tree hits an onSuccess directive *** FAILED ***
[info]     The code passed to eventually never returned normally. Attempted 579 times over 10.009030289999998 seconds. Last failure message: The Option on which value was invoked was not defined.. (AkkaHttpServerTracingSpec.scala:92)
[ERROR] [07/29/2019 18:02:46.191] [http-server-instrumentation-spec-akka.actor.default-dispatcher-2] [akka.actor.ActorSystemImpl(http-server-instrumentation-spec)] Error during processing of request: 'String index out of range: 10'. Completing with 500 Internal Server Error response. To change default exception handling behavior, provide a custom ExceptionHandler.
java.lang.StringIndexOutOfBoundsException: String index out of range: 10
	at java.lang.String.charAt(String.java:658)
	at kamon.testkit.TestWebServer.$anonfun$startServer$33(TestWebServer.scala:92)
	at scala.concurrent.Future$.$anonfun$apply$1(Future.scala:660)
	at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:430)
	at akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:55)
	at akka.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:92)
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18)
	at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94)
	at akka.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:92)
	at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:41)
	at kamon.instrumentation.executor.ExecutorInstrumentation$InstrumentedForkJoinPool$TimingRunnable.run(ExecutorInstrumentation.scala:653)
	at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:49)
	at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
	at akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
	at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
	at akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)

[info]   - when take a sampling decision when the routing tree hits a completeOrRecoverWith directive with a failed future *** FAILED ***
[info]     The code passed to eventually never returned normally. Attempted 590 times over 10.008884834 seconds. Last failure message: The Option on which value was invoked was not defined.. (AkkaHttpServerTracingSpec.scala:104)
[info]   - when take a sampling decision when the routing tree hits a completeOrRecoverWith directive with a successful future *** FAILED ***
[info]     The code passed to eventually never returned normally. Attempted 584 times over 10.01066485 seconds. Last failure message: The Option on which value was invoked was not defined.. (AkkaHttpServerTracingSpec.scala:116)
[info]   - when including ambiguous nested directives
[info] - should change the Span operation name when using the operationName directive
[info] - should mark spans as failed when request fails
[info] - should change the operation name to 'unhandled' when the response status code is 404
[info] - should correctly time entity transfer timings
[info] - should include the trace-id and keep all user-provided headers in the responses
[info] AkkaHttpServerMetricsSpec:
[info] the Akka HTTP server instrumentation
[info] - should track the number of open connections and active requests on the Server side
[info] AkkaHttpClientTracingSpec:
[info] the Akka HTTP client instrumentation
[info] - should create a client Span when using the request level API - Http().singleRequest(...)
[info] - should serialize the current context into HTTP Headers
[info] - should mark Spans as errors if the client request failed
[info] Run completed in 57 seconds, 326 milliseconds.
[info] Total number of tests run: 18
[info] Suites: completed 4, aborted 0
[info] Tests: succeeded 13, failed 5, canceled 0, ignored 0, pending 0
[info] *** 5 TESTS FAILED ***

Span metrics reported with wrong operation tag

When using a custom kamon.akka.http.AkkaHttp.OperationNameGenerator to produce our own operation names, these get overwritten when the span is reported as a metric.

The span itself is created with the correct name, and all sub-spans run in the app get the correct parent span. But the request itself has its name overwritten when it is completed, reverting it back to the original URL/path name.

This is very simple to re-create:

package com.area51

import akka.actor.ActorSystem
import akka.http.scaladsl.Http
import akka.http.scaladsl.model.{ContentTypes, HttpEntity, HttpRequest}
import akka.http.scaladsl.server.Directives.{complete, get, pathPrefix, _}
import akka.http.scaladsl.server.Route
import akka.stream.ActorMaterializer
import com.typesafe.config.{Config, ConfigFactory}
import kamon.akka.http.AkkaHttp.OperationNameGenerator
import kamon.metric.PeriodSnapshot
import kamon.{Kamon, MetricReporter}

class SimpleNameGenerator extends OperationNameGenerator {
  override def clientOperationName(request: HttpRequest): String = "MY-CLIENT"
  override def serverOperationName(request: HttpRequest): String = "MY-SERVER"
}

class SimpleMetricReporter extends MetricReporter {
  override def start(): Unit = {}
  override def stop(): Unit = {}
  override def reconfigure(config: Config): Unit = {}
  override def reportPeriodSnapshot(snapshot: PeriodSnapshot): Unit = {
    //snapshot.metrics.histograms
    snapshot.metrics.histograms.filter(_.name.startsWith("span.processing")).foreach(println)
  }
}

object SimpleKamon extends App {

  private val cfg = ConfigFactory.parseString(
    """kamon {
      |  environment.service = "simple-kamon"
      |
      |  akka-http {
      |    name-generator = "com.area51.SimpleNameGenerator"
      |    not-found-operation-name = "unhandled"
      |    add-http-status-code-as-metric-tag = true
      |  }
      |  
      |  span-metric.tags.upstream-service = no
      |  modules {
      |    kamon-system-metrics.auto-start = false
      |    kamon-akka-http.requires-aspectj = true
      |  }
      | span-metrics.scope-spans-to-parent=false
      |  system-metrics {
      |    host.enabled = false
      |    jmx.enabled = false
      |  }
      |  metric.tick-interval = 10 seconds
      |}""".stripMargin
  )
    .withFallback(ConfigFactory.defaultReference())
    .resolve()
  
  private implicit val system = ActorSystem("simple-kamon")
  private implicit val materializer = ActorMaterializer()
  private implicit val executionContext = system.dispatcher
  
  private val route: Route =
    pathPrefix("api") {
      pathPrefix("execute") {
        get {
          println(Kamon.currentSpan().operationName())
          Kamon.buildSpan("read.user.data").withMetricTag("component", "database").start().finish()
          Kamon.buildSpan("update.cache").withMetricTag("component", "in-memory").start().finish()
          complete(HttpEntity(ContentTypes.`text/html(UTF-8)`, s"<h1>HELLO WORLD!</h1>"))
        }
      }
    }
  
  Kamon.reconfigure(cfg)
  Kamon.addReporter(new SimpleMetricReporter())

  Http().bindAndHandle(route, "0.0.0.0", 6969).foreach(_ => println("Started!"))
}

build.sbt for reference:

val akkaVersion = "2.5.25"
lazy val kamon = project.in(file("kamon"))
  .settings(
    organization := "com.area51",
    scalaVersion := "2.12.10",
    mainClass in (Compile, run) := Some("com.area51.SimpleKamon"),
    libraryDependencies ++= Seq(
      "com.typesafe.akka" %% "akka-actor"   % akkaVersion,
      "com.typesafe.akka" %% "akka-slf4j"   % akkaVersion,
      "com.typesafe.akka" %% "akka-stream"  % akkaVersion,
      "com.typesafe.akka" %% "akka-http-core"  % "10.1.9",
      "com.typesafe.akka" %% "akka-http"       % "10.1.9",
      "io.kamon" %% "kamon-core"           % "1.1.6",
      "io.kamon" %% "kamon-akka-http-2.5"  % "1.1.2",
      "io.kamon" %% "kamon-akka-2.5"       % "1.1.4"
    )
  )

Sending a single request to the service, one can see these printouts:

MY-SERVER
MetricDistribution(span.processing-time,Map(operation -> /api/execute, error -> false, span.kind -> server, http.status_code -> 200),MeasurementUnit(Dimension(time),Magnitude(nanoseconds,1.0E-9)),DynamicRange(1,3600000000000,2),kamon.metric.SnapshotCreation$ZigZagCountsDistribution@9e07ae2)
MetricDistribution(span.processing-time,Map(operation -> read.user.data, error -> false, component -> database, parentOperation -> MY-SERVER),MeasurementUnit(Dimension(time),Magnitude(nanoseconds,1.0E-9)),DynamicRange(1,3600000000000,2),kamon.metric.SnapshotCreation$ZigZagCountsDistribution@48b2e947)
MetricDistribution(span.processing-time,Map(operation -> update.cache, error -> false, component -> in-memory, parentOperation -> MY-SERVER),MeasurementUnit(Dimension(time),Magnitude(nanoseconds,1.0E-9)),DynamicRange(1,3600000000000,2),kamon.metric.SnapshotCreation$ZigZagCountsDistribution@5ca82cba)

The name of the span is correct, but the metric has the wrong operation. It all boils down to this line in kamon-akka-http: ServerRequestInstrumentation:68. It just overwrites the operation name with the original URL. This causes huge issues, because our naming rule is based on knowing our URL patterns; now I get massive cardinality on operation names since our URLs contain unique identifiers.

[Question] Production readiness of kamon-akka-http

Hi guys,

The repository description currently reads: "A temporary implementation of the Kamon akka-http module."

I am a bit worried about the "temporary implementation" statement. Is the module production-ready? If it is, can we change the description, please?

Many thanks.

Remove kamonLogReporter as compile dependency

If I understand the core concept of Kamon, it's that back-ends for reporting are supposed to be pluggable. In that spirit, I'm pretty sure the log reporter should be removed as a compile-scoped dependency of kamon-akka-http.

Support Akka Http2

The current Kamon instrumentation only supports the `HttpExt.bindAndHandle` method.

When HTTP/2 support is configured, Akka uses the `Http2Ext.bindAndHandleAsync` method instead, which then won't be instrumented.

Not in central maven?

Hi, this version is not present in Maven Central; the one you can actually find there is the experimental one, not the updated version using "0.6.5".

With tracing on, latency metrics are not broken out by individual trace

Using the traceName directive, for two of our routes, I observe the following behavior.

200/400 responses get grouped as percentile counts in metrics such as

nutmeg.http_server.ProductDecision_200
nutmeg.http_server.ProductDecision_400

however, the latency metrics are not grouped by the trace name (i.e. ProductDecision) but are instead all under the trace category:

nutmeg.trace.elapsed_time.99percentile
nutmeg.trace.elapsed_time.95percentile

Shouldn't there be latency metrics for the traces, and shouldn't they be under a similar metric path?

Safari/WebKit - Invalid UTF-8 sequence in header value

Hey!

I've had a very, very annoying issue with kamon-akka-http-2.5 (1.0.0 and 1.0.1) and Safari. We are using akka-http for WebSockets, and Safari/WebKit has a bug where, if an empty header is present in the response headers, it raises an "Invalid UTF-8 sequence in header value" error.

  • I've traced the bug down to kamon-akka-http-2.5.
  • Akka HTTP - 10.0.11
  • Akka - 2.5.11
HTTP/1.1 400 Bad Request
X-B3-TraceId: d93e80a606ef9bb5
X-B3-Sampled: 0
X-B3-ParentSpanId:                                    <-- this
X-B3-SpanId: 968b5f7aeaab956c
Server: akka-http/10.0.11
Date: Sun, 25 Mar 2018 00:06:06 GMT
Content-Type: text/plain; charset=UTF-8
Content-Length: 34
Via: 1.1 google

Expected WebSocket Upgrade request

As you can see from the trace, X-B3-ParentSpanId: didn't have any value. After disabling kamon-akka-http-2.5 in our project, it all started working again.

Hope this saves someone out there some gray hair. :)

P.s.: Let me know if I can help you with this thing in any way.

Cheers!

  • Oto
