

akka-http-metrics


Easily collect and expose metrics in your akka-http server.

After the akka licensing change, no further development is expected on akka-http-metrics. If you're migrating to pekko-http, see pekko-http-metrics.

The following metric backend implementations are supported: Datadog (via StatsD), Dropwizard, Graphite (Carbon), and Prometheus.

Versions

Version Release date Akka Http version Scala versions
1.7.1 2022-06-07 10.2.9 2.13.8, 2.12.15
1.7.0 2022-04-11 10.2.9 2.13.8, 2.12.15
1.6.0 2021-05-07 10.2.4 2.13.5, 2.12.13
1.5.1 2021-02-16 10.2.3 2.13.4, 2.12.12
1.5.0 2021-01-12 10.2.2 2.13.4, 2.12.12

The complete list can be found in the CHANGELOG file.

Getting akka-http-metrics

Libraries are published to Maven Central. Add to your build.sbt:

libraryDependencies += "fr.davit" %% "akka-http-metrics-<backend>" % <version>
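For example, to pull in the Prometheus backend pinned to the latest version from the table above (the backend names used in the artifact ids are datadog, dropwizard, dropwizard-v5, graphite, and prometheus):

```scala
// build.sbt -- substitute the backend and version you need
libraryDependencies += "fr.davit" %% "akka-http-metrics-prometheus" % "1.7.1"
```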

Server metrics

The library enables you to easily record metrics from an akka-http server into a registry. The following labeled metrics are recorded:

  • requests (counter) [method]
  • requests active (gauge) [method]
  • requests failures (counter) [method]
  • requests size (histogram) [method]
  • responses (counter) [method | path | status group]
  • responses errors (counter) [method | path | status group]
  • responses duration (histogram) [method | path | status group]
  • responses size (histogram) [method | path | status group]
  • connections (counter)
  • connections active (gauge)

Record metrics from your akka server by creating an HttpMetricsServerBuilder with the newMeteredServerAt extension method located in HttpMetrics:

import akka.actor.ActorSystem
import akka.http.scaladsl.Http
import akka.http.scaladsl.server.Directives._
import akka.http.scaladsl.server.Route
import fr.davit.akka.http.metrics.core.{HttpMetricsRegistry, HttpMetricsSettings}
import fr.davit.akka.http.metrics.core.HttpMetrics._ // import extension methods

implicit val system = ActorSystem()

val settings: HttpMetricsSettings = ... // concrete settings implementation

val registry: HttpMetricsRegistry = ... // concrete registry implementation

val route: Route = ... // your route

Http()
  .newMeteredServerAt("localhost", 8080, registry)
  .bindFlow(route)

The requests failure counter is incremented when no response could be emitted by the server (network error, ...).

By default, the response error counter will be incremented when the returned status code is a server error (5xx). You can override this behaviour in the settings.

settings.withDefineError(_.status.isFailure)

In this example, all responses with status >= 400 are considered as errors.
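As a plain-Scala illustration of that predicate (a sketch of the semantics only, not the library's code; in akka-http, StatusCode.isFailure is true for both client and server errors):

```scala
// Hypothetical stand-in for `_.status.isFailure` on plain Ints:
// with this error definition, every client (4xx) and server (5xx)
// response increments the responses errors counter.
def isError(statusCode: Int): Boolean =
  statusCode >= 400 && statusCode < 600
```

Under this definition, 404 and 503 responses count as errors, while successes (2xx) and redirects (3xx) do not.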

For HTTP/2 you must use bind or bindSync on the HttpMetricsServerBuilder. In this case the connection metrics won't be available.

Http()
  .newMeteredServerAt("localhost", 8080)
  .bind(route)

Dimensions

By default, metrics dimensions are disabled. You can enable them in the settings.

settings
  .withIncludeMethodDimension(true)
  .withIncludePathDimension(true)
  .withIncludeStatusDimension(true)

Custom dimensions can be added to the message metrics:

  • extend the HttpRequestLabeler to add labels on requests & their associated response
  • extend the HttpResponseLabeler to add labels on responses only

In the example below, the browser dimension will be populated based on the user-agent header on requests and responses. The responses going through the route will have the user dimension set to the provided username; other responses will be unlabelled.

import akka.http.scaladsl.model.HttpRequest
import akka.http.scaladsl.model.headers.`User-Agent`
import fr.davit.akka.http.metrics.core.{AttributeLabeler, HttpRequestLabeler}

// based on https://developer.mozilla.org/en-US/docs/Web/HTTP/Browser_detection_using_the_user_agent#browser_name
object BrowserLabeler extends HttpRequestLabeler {
  override def name: String = "browser"
  override def label(request: HttpRequest): String = {
    val products = for {
      ua <- request.header[`User-Agent`].toSeq
      pv <- ua.products
    } yield pv.product
    if (products.contains("Seamonkey")) "seamonkey"
    else if (products.contains("Firefox")) "firefox"
    else if (products.contains("Chromium")) "chromium"
    else if (products.contains("Chrome")) "chrome"
    else if (products.contains("Safari")) "safari"
    else if (products.contains("OPR") || products.contains("Opera")) "opera"
    else "other"
  }
}

object UserLabeler extends AttributeLabeler {
  def name: String = "user"
}

val route = auth { username =>
  metricsLabeled(UserLabeler, username) {
    ...
  }
}

settings.withCustomDimensions(BrowserLabeler, UserLabeler)

Additional static server-level dimensions can be set to all metrics collected by the library. In the example below, the env dimension with prod label will be added.

import fr.davit.akka.http.metrics.core.Dimension
settings.withServerDimensions(Dimension("env", "prod"))

Method

The method of the request is used as a dimension on the metrics, e.g. GET.

Path

The matched path of the request is used as a dimension on the metrics.

When enabled, the path dimension is set to unlabelled for all metrics by default. You must use the labeled path directives defined in HttpMetricsDirectives to set the dimension value.

You must also be careful about cardinality: if your path contains unbounded dynamic segments, you must give an explicit label to override the dynamic part:

import fr.davit.akka.http.metrics.core.scaladsl.server.HttpMetricsDirectives._

val route = pathPrefixLabel("api") {
  pathLabeled("user" / JavaUUID, "user/:user-id") { userId =>
    ...
  }
}

Moreover, all unhandled requests will have path dimension set to unhandled.
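To illustrate the cardinality concern, here is a standalone sketch (not part of the library) of what the explicit label above achieves: collapsing unbounded UUID segments into the fixed :user-id placeholder so the path dimension stays bounded.

```scala
// Hypothetical helper for illustration only; with akka-http-metrics you
// express this declaratively through pathLabeled instead.
val UuidSegment =
  "[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}".r

def templatize(path: String): String =
  path.split("/").map {
    case UuidSegment() => ":user-id" // bounded label instead of a raw UUID
    case segment       => segment
  }.mkString("/")
```

Without such collapsing, every distinct UUID would become its own time series in the metrics backend.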

Status group

The status group creates the following dimension values on the metrics: 1xx|2xx|3xx|4xx|5xx|other
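A sketch of the bucketing this implies (illustrative only; the library derives the label from the response's StatusCode rather than a raw Int):

```scala
// Map a status code to its metrics dimension value: 1xx..5xx, or "other"
// for anything outside the valid HTTP status range.
def statusGroup(status: Int): String =
  if (status >= 100 && status < 600) s"${status / 100}xx"
  else "other"
```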

Expose metrics

Expose the metrics from the registry on an http endpoint with the metrics directive.

import fr.davit.akka.http.metrics.core.scaladsl.server.HttpMetricsDirectives._

val route = (get & path("metrics"))(metrics(registry))

Of course, you will also need to have the implicit marshaller for your registry in scope.

Implementations

metric name
requests requests_count
requests active requests_active
requests failures requests_failures_count
requests size requests_bytes
responses responses_count
responses errors responses_errors_count
responses duration responses_duration
responses size responses_bytes
connections connections_count
connections active connections_active

The DatadogRegistry is just a facade to publish to your StatsD server. The registry itself is not located in the JVM; for this reason it is not possible to expose the metrics in your API.

Add to your build.sbt:

libraryDependencies += "fr.davit" %% "akka-http-metrics-datadog" % <version>

Create your registry

import com.timgroup.statsd.StatsDClient
import fr.davit.akka.http.metrics.core.HttpMetricsSettings
import fr.davit.akka.http.metrics.datadog.{DatadogRegistry, DatadogSettings}

val client: StatsDClient = ... // your statsd client
val settings: HttpMetricsSettings = DatadogSettings.default
val registry = DatadogRegistry(client, settings) // or DatadogRegistry(client) to use default settings

See Datadog's documentation on how to create a StatsD client.

metric name
requests requests
requests active requests.active
requests failures requests.failures
requests size requests.bytes
responses responses
responses errors responses.errors
responses duration responses.duration
responses size responses.bytes
connections connections
connections active connections.active

Important: The DropwizardRegistry does not support labels. This feature will be available with dropwizard v5, whose development is paused at the moment.

Add to your build.sbt:

libraryDependencies += "fr.davit" %% "akka-http-metrics-dropwizard" % <version>
// or for dropwizard v5
libraryDependencies += "fr.davit" %% "akka-http-metrics-dropwizard-v5" % <version>

Create your registry

import com.codahale.metrics.MetricRegistry
import fr.davit.akka.http.metrics.core.HttpMetricsSettings
import fr.davit.akka.http.metrics.dropwizard.{DropwizardRegistry, DropwizardSettings}

val dropwizard: MetricRegistry = ... // your dropwizard registry
val settings: HttpMetricsSettings = DropwizardSettings.default
val registry = DropwizardRegistry(dropwizard, settings) // or DropwizardRegistry() to use a fresh registry & default settings

Expose the metrics

import fr.davit.akka.http.metrics.core.scaladsl.server.HttpMetricsDirectives._
import fr.davit.akka.http.metrics.dropwizard.marshalling.DropwizardMarshallers._

val route = (get & path("metrics"))(metrics(registry))

All metrics from the dropwizard metrics registry will be exposed. Additional metric sets are available as separate dropwizard modules. For instance, to expose some JVM metrics, you have to add the dedicated dependency and register the metric sets into your registry:

libraryDependencies += "com.codahale.metrics" % "metrics-jvm" % <version>

import java.util.concurrent.TimeUnit
import com.codahale.metrics.jvm._

val dropwizard: MetricRegistry = ... // your dropwizard registry
dropwizard.register("jvm.gc", new GarbageCollectorMetricSet())
dropwizard.register("jvm.threads", new CachedThreadStatesGaugeSet(10, TimeUnit.SECONDS))
dropwizard.register("jvm.memory", new MemoryUsageGaugeSet())

val registry = DropwizardRegistry(dropwizard, settings)

metric name
requests requests
requests active requests.active
requests failures requests.failures
requests size requests.bytes
responses responses
responses errors responses.errors
responses duration responses.duration
responses size responses.bytes
connections connections
connections active connections.active

Add to your build.sbt:

libraryDependencies += "fr.davit" %% "akka-http-metrics-graphite" % <version>

Create your carbon client and your registry

import fr.davit.akka.http.metrics.core.HttpMetricsSettings
import fr.davit.akka.http.metrics.graphite.{CarbonClient, GraphiteRegistry, GraphiteSettings}

val carbonClient: CarbonClient = CarbonClient("hostname", 2003)
val settings: HttpMetricsSettings = GraphiteSettings.default
val registry = GraphiteRegistry(carbonClient, settings) // or GraphiteRegistry(carbonClient) to use default settings

metric name
requests requests_total
requests active requests_active
requests failures requests_failures_total
requests size requests_size_bytes
responses responses_total
responses errors responses_errors_total
responses duration responses_duration_seconds
responses size responses_size_bytes
connections connections_total
connections active connections_active

Add to your build.sbt:

libraryDependencies += "fr.davit" %% "akka-http-metrics-prometheus" % <version>

Create your registry

import io.prometheus.client.CollectorRegistry
import fr.davit.akka.http.metrics.prometheus.{PrometheusRegistry, PrometheusSettings}

val prometheus: CollectorRegistry = ... // your prometheus registry
val settings: PrometheusSettings = PrometheusSettings.default
val registry = PrometheusRegistry(prometheus, settings) // or PrometheusRegistry() to use the default registry & settings

You can fine-tune the histogram/summary configuration of buckets/quantiles for the request size, duration and response size metrics.

settings
  .withDurationConfig(Buckets(1, 2, 3, 5, 8, 13, 21, 34))
  .withReceivedBytesConfig(Quantiles(0.5, 0.75, 0.9, 0.95, 0.99))
  .withSentBytesConfig(PrometheusSettings.DefaultQuantiles)
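To clarify what the bucket configuration means: Prometheus histograms are cumulative, so an observation is counted in every bucket whose upper bound is at least the observed value. A standalone sketch (not library code), assuming the duration buckets from the example above:

```scala
// Cumulative-bucket semantics: an observed duration of 4 (seconds) falls
// into the 5, 8, 13, 21 and 34 buckets, plus the implicit +Inf bucket.
val buckets = Seq(1.0, 2.0, 3.0, 5.0, 8.0, 13.0, 21.0, 34.0)

def incrementedBuckets(value: Double): Seq[Double] =
  buckets.filter(_ >= value)
```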

Expose the metrics

import fr.davit.akka.http.metrics.core.scaladsl.server.HttpMetricsDirectives._
import fr.davit.akka.http.metrics.prometheus.marshalling.PrometheusMarshallers._

val route = (get & path("metrics"))(metrics(registry))

All metrics from the prometheus collector registry will be exposed. For instance, to expose some JVM metrics, you have to add the dedicated client dependency and initialize/register it to your collector registry:

libraryDependencies += "io.prometheus" % "simpleclient_hotspot" % <version>

import io.prometheus.client.hotspot.DefaultExports

val prometheus: CollectorRegistry = ... // your prometheus registry
DefaultExports.register(prometheus)  // or DefaultExports.initialize() to use the default registry

akka-http-metrics's People

Contributors

aaabramov, aleksandr-vin, fraer, jsimek, kpritam, manuzhang, moonkev, pdezwart, rustedbones, scala-steward, xperimental


akka-http-metrics's Issues

Adding http method as dimension

Would it be possible to add the HTTP method as a dimension to the path? Currently I've made separate pathLabeled entries specifying whether it's a GET, POST, etc. But it would be better to be able to have each path endpoint combined, no matter what the HTTP method is.

StatusGroup / Path for requests?

Since a rejected response can't have a labeled path, there is no way to know exactly which request was rejected. Can we add StatusGroup and Path to requests?

Rejected requests can create metrics dimension with unmatched path

When the path dimension is enabled, all rejected requests will create a metric with their own path as the dimension value.

The path dimension is then not bounded and does not follow the metrics guidelines.

Rejected requests should not generate metrics containing unmatched path in their label.

Hiding metrics route from scraping

Is it possible to avoid metrics collection for metrics endpoint?

Although HttpMetricsDirectives provides a special method for the metrics route

def metrics[T <: HttpMetricsRegistry: ToEntityMarshaller](registry: T): StandardRoute = complete(registry)

Metrics are still being collected with an unlabelled path.

My routes:

val routes: Route =
    handleExceptions(appExceptionHandler) {
      concat(
        path("metrics") {
          metrics(registry)(PrometheusMarshallers.marshaller)
        },
        pathPrefixLabeled("prefix1") {
          concat(
            pathPrefixLabeled("prefix1_1") {
              ???
            },
            pathPrefixLabeled("prefix1_2") {
              ???
            }
          )
        }
      )
    }

And then

val prometheus = CollectorRegistry.defaultRegistry
val settings =
  PrometheusSettings
    .default
    .withIncludeMethodDimension(true)
    .withIncludePathDimension(true)
    .withIncludeStatusDimension(true)
    .withDefineError(_.status.isFailure)

val registry  = PrometheusRegistry(prometheus, settings)

val futureBinding =
    Http()
      .newMeteredServerAt("0.0.0.0", 9000, registry)
      .bindFlow(HttpMetrics.metricsRouteToFlow(routes))

bug: prometheus label issues in 1.7.0

Hi there

We were using path, status and method dimensions in our app

Something like:

 PrometheusSettings.default
   .withDurationConfig(Buckets(buckets: _*))
    .withDefineError(_.status.isFailure)
    .withIncludeMethodDimension(true)
    .withIncludePathDimension(true)
    .withIncludeStatusDimension(true)
    .withServerDimensions(
      Dimension("env", env),
      Dimension("stack", stack),
      Dimension("version", version)
    )

but it seems that the label values are being swapped, e.g.

akka_http_responses_duration_seconds_bucket{env="prd",method="vacation",path="d6bfafe",stack="GET",status="/health/liveness",version="2xx",le="1.5",} 56.0

while in 1.6.0 it was working flawlessly

akka_http_responses_duration_seconds_bucket{env="prd",method="GET",path="/health/liveness",stack="vacation",status="2xx",version="d6bfafe",le="1.5",} 56.0

If I find some spare time, I will try to issue a PR to fix the issue :)

Prometheus duration conversion is off by x10

Hi,

I set up akka-http-metrics in my akka-http project and I have the following metrics for request duration:

akka_http_requests_duration_seconds{quantile="0.75",} 30.29
akka_http_requests_duration_seconds{quantile="0.95",} 30.29
akka_http_requests_duration_seconds{quantile="0.98",} 30.29
akka_http_requests_duration_seconds{quantile="0.99",} 30.29
akka_http_requests_duration_seconds{quantile="0.999",} 30.29
akka_http_requests_duration_seconds_count 6.0
akka_http_requests_duration_seconds_sum 139.39999999999998

This is local testing so it takes forever :).
The response time is never more than 3 seconds, but it shows 30 seconds here. Does it somehow mean 3 seconds?

For other requests I see similar things: when a request lasts 6 ms, it shows 0.06 seconds (should be 0.006).

Error in stage [fr.davit.akka.http.metrics.core.MeterStage$$anon$1-MeterStage]: No value present

Akka 2.6.13 with Akka HTTP 10.2.4 & Scala 2.12.12.

Error:

2021-03-27 14:56:38.315 ERROR [t-dispatcher-18] r$$anonfun$receive$1: Error in stage [fr.davit.akka.http.metrics.core.MeterStage$$anon$1-MeterStage]: No value present
java.util.NoSuchElementException: No value present
	at java.util.Optional.get(Optional.java:148) ~[?:?]
	at fr.davit.akka.http.metrics.core.MeterStage$$anon$1$$anon$3.onPush(MeterStage.scala:76) ~[akka-http-metrics-core_2.12-1.5.1.jar:1.5.1]
	at akka.stream.impl.fusing.GraphInterpreter.processPush(GraphInterpreter.scala:541) ~[akka-stream_2.12-2.6.13.jar:2.6.13]
	at akka.stream.impl.fusing.GraphInterpreter.execute(GraphInterpreter.scala:423) ~[akka-stream_2.12-2.6.13.jar:2.6.13]
	at akka.stream.impl.fusing.GraphInterpreterShell.runBatch(ActorGraphInterpreter.scala:625) ~[akka-stream_2.12-2.6.13.jar:2.6.13]
	at akka.stream.impl.fusing.GraphInterpreterShell$AsyncInput.execute(ActorGraphInterpreter.scala:502) ~[akka-stream_2.12-2.6.13.jar:2.6.13]
	at akka.stream.impl.fusing.GraphInterpreterShell.processEvent(ActorGraphInterpreter.scala:600) ~[akka-stream_2.12-2.6.13.jar:2.6.13]
	at akka.stream.impl.fusing.ActorGraphInterpreter.akka$stream$impl$fusing$ActorGraphInterpreter$$processEvent(ActorGraphInterpreter.scala:773) ~[akka-stream_2.12-2.6.13.jar:2.6.13]
	at akka.stream.impl.fusing.ActorGraphInterpreter$$anonfun$receive$1.applyOrElse(ActorGraphInterpreter.scala:788) ~[akka-stream_2.12-2.6.13.jar:2.6.13]
	at akka.actor.Actor.aroundReceive(Actor.scala:537) ~[akka-actor_2.12-2.6.13.jar:2.6.13]
	at akka.actor.Actor.aroundReceive$(Actor.scala:535) ~[akka-actor_2.12-2.6.13.jar:2.6.13]
	at akka.stream.impl.fusing.ActorGraphInterpreter.aroundReceive(ActorGraphInterpreter.scala:691) ~[akka-stream_2.12-2.6.13.jar:2.6.13]
	at akka.actor.ActorCell.receiveMessage(ActorCell.scala:577) [akka-actor_2.12-2.6.13.jar:2.6.13]
	at akka.actor.ActorCell.invoke(ActorCell.scala:547) [akka-actor_2.12-2.6.13.jar:2.6.13]
	at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:270) [akka-actor_2.12-2.6.13.jar:2.6.13]
	at akka.dispatch.Mailbox.run(Mailbox.scala:231) [akka-actor_2.12-2.6.13.jar:2.6.13]
	at akka.dispatch.Mailbox.exec(Mailbox.scala:243) [akka-actor_2.12-2.6.13.jar:2.6.13]
	at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:290) [?:?]
	at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1020) [?:?]
	at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1656) [?:?]
	at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1594) [?:?]
	at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:183) [?:?]

Debug path with some data: [screenshots omitted]

Application code: [screenshot omitted]

All non-WS paths work properly.
This is how the application binds:

val bindingFuture =
            portBind.prometheusRegistry
              .map(Http().newMeteredServerAt(portBind.host, port, _))
              .getOrElse(Http().newServerAt(portBind.host, port))
              .bindFlow(
                if (portBind.secureConfiguration.isDefined) redirectHandler
                else portBind.handler
              )

Add directive to provide custom dimensions

We provide a GraphQL endpoint and need to populate the operation name as a dimension of the metrics. Currently we use a workaround to add it as a suffix to the path dimension:

def labeled(name: String): Directive0 = mapResponse { response =>
    response.addAttribute(HttpMetrics.PathLabel, s"($name)")
}

pathLabeled("graphql") {
      post {
        entity(as[GraphQLRequest]) { query =>
          labeled(query.operationName.getOrElse("unknown")) {
            ...

It would be nice to have it as a separate dimension. Plus we could populate Referer (after mapping it to a finite cardinality).

No akka_http_responses_size_bytes_bucket for streamed endpoints

Hello, with akka-http-metrics-prometheus v1.3.0

I noticed that when an endpoint responds with streamed data, for example:

pathLabeled("testStream", "testStream") {
  get {
     complete(Source(List("a","b","c")))
  }
} 

There is no akka_http_responses_size_bytes_bucket metric for the label testStream; however, for non-streamed endpoints it is computed correctly.

Here is an example of /metrics output after 1 call to /testStream and 1 call to /metrics (only method and path dimensions are enabled):

# HELP akka_http_responses_duration_seconds HTTP response duration
# TYPE akka_http_responses_duration_seconds histogram
akka_http_responses_duration_seconds_bucket{method="GET",path="/testStream",le="0.005",} 0.0
akka_http_responses_duration_seconds_bucket{method="GET",path="/testStream",le="0.01",} 0.0
akka_http_responses_duration_seconds_bucket{method="GET",path="/testStream",le="0.025",} 1.0
akka_http_responses_duration_seconds_bucket{method="GET",path="/testStream",le="0.05",} 1.0
akka_http_responses_duration_seconds_bucket{method="GET",path="/testStream",le="0.075",} 1.0
akka_http_responses_duration_seconds_bucket{method="GET",path="/testStream",le="0.1",} 1.0
akka_http_responses_duration_seconds_bucket{method="GET",path="/testStream",le="0.25",} 1.0
akka_http_responses_duration_seconds_bucket{method="GET",path="/testStream",le="0.5",} 1.0
akka_http_responses_duration_seconds_bucket{method="GET",path="/testStream",le="0.75",} 1.0
akka_http_responses_duration_seconds_bucket{method="GET",path="/testStream",le="1.0",} 1.0
akka_http_responses_duration_seconds_bucket{method="GET",path="/testStream",le="2.5",} 1.0
akka_http_responses_duration_seconds_bucket{method="GET",path="/testStream",le="5.0",} 1.0
akka_http_responses_duration_seconds_bucket{method="GET",path="/testStream",le="7.5",} 1.0
akka_http_responses_duration_seconds_bucket{method="GET",path="/testStream",le="10.0",} 1.0
akka_http_responses_duration_seconds_bucket{method="GET",path="/testStream",le="+Inf",} 1.0
akka_http_responses_duration_seconds_count{method="GET",path="/testStream",} 1.0
akka_http_responses_duration_seconds_sum{method="GET",path="/testStream",} 0.024
akka_http_responses_duration_seconds_bucket{method="GET",path="/metrics",le="0.005",} 1.0
akka_http_responses_duration_seconds_bucket{method="GET",path="/metrics",le="0.01",} 1.0
akka_http_responses_duration_seconds_bucket{method="GET",path="/metrics",le="0.025",} 1.0
akka_http_responses_duration_seconds_bucket{method="GET",path="/metrics",le="0.05",} 1.0
akka_http_responses_duration_seconds_bucket{method="GET",path="/metrics",le="0.075",} 1.0
akka_http_responses_duration_seconds_bucket{method="GET",path="/metrics",le="0.1",} 1.0
akka_http_responses_duration_seconds_bucket{method="GET",path="/metrics",le="0.25",} 1.0
akka_http_responses_duration_seconds_bucket{method="GET",path="/metrics",le="0.5",} 1.0
akka_http_responses_duration_seconds_bucket{method="GET",path="/metrics",le="0.75",} 1.0
akka_http_responses_duration_seconds_bucket{method="GET",path="/metrics",le="1.0",} 1.0
akka_http_responses_duration_seconds_bucket{method="GET",path="/metrics",le="2.5",} 1.0
akka_http_responses_duration_seconds_bucket{method="GET",path="/metrics",le="5.0",} 1.0
akka_http_responses_duration_seconds_bucket{method="GET",path="/metrics",le="7.5",} 1.0
akka_http_responses_duration_seconds_bucket{method="GET",path="/metrics",le="10.0",} 1.0
akka_http_responses_duration_seconds_bucket{method="GET",path="/metrics",le="+Inf",} 1.0
akka_http_responses_duration_seconds_count{method="GET",path="/metrics",} 1.0
akka_http_responses_duration_seconds_sum{method="GET",path="/metrics",} 0.0
# HELP akka_http_requests_active Active HTTP requests
# TYPE akka_http_requests_active gauge
akka_http_requests_active 0.0
# HELP akka_http_requests_size_bytes HTTP request size
# TYPE akka_http_requests_size_bytes histogram
akka_http_requests_size_bytes_bucket{le="0.0",} 2.0
akka_http_requests_size_bytes_bucket{le="100.0",} 2.0
akka_http_requests_size_bytes_bucket{le="200.0",} 2.0
akka_http_requests_size_bytes_bucket{le="300.0",} 2.0
akka_http_requests_size_bytes_bucket{le="400.0",} 2.0
akka_http_requests_size_bytes_bucket{le="500.0",} 2.0
akka_http_requests_size_bytes_bucket{le="600.0",} 2.0
akka_http_requests_size_bytes_bucket{le="700.0",} 2.0
akka_http_requests_size_bytes_bucket{le="800.0",} 2.0
akka_http_requests_size_bytes_bucket{le="900.0",} 2.0
akka_http_requests_size_bytes_bucket{le="1000.0",} 2.0
akka_http_requests_size_bytes_bucket{le="2000.0",} 2.0
akka_http_requests_size_bytes_bucket{le="3000.0",} 2.0
akka_http_requests_size_bytes_bucket{le="4000.0",} 2.0
akka_http_requests_size_bytes_bucket{le="5000.0",} 2.0
akka_http_requests_size_bytes_bucket{le="6000.0",} 2.0
akka_http_requests_size_bytes_bucket{le="7000.0",} 2.0
akka_http_requests_size_bytes_bucket{le="8000.0",} 2.0
akka_http_requests_size_bytes_bucket{le="9000.0",} 2.0
akka_http_requests_size_bytes_bucket{le="10000.0",} 2.0
akka_http_requests_size_bytes_bucket{le="20000.0",} 2.0
akka_http_requests_size_bytes_bucket{le="30000.0",} 2.0
akka_http_requests_size_bytes_bucket{le="40000.0",} 2.0
akka_http_requests_size_bytes_bucket{le="50000.0",} 2.0
akka_http_requests_size_bytes_bucket{le="60000.0",} 2.0
akka_http_requests_size_bytes_bucket{le="70000.0",} 2.0
akka_http_requests_size_bytes_bucket{le="80000.0",} 2.0
akka_http_requests_size_bytes_bucket{le="90000.0",} 2.0
akka_http_requests_size_bytes_bucket{le="+Inf",} 2.0
akka_http_requests_size_bytes_count 2.0
akka_http_requests_size_bytes_sum 0.0
# HELP akka_http_requests_total Total HTTP requests
# TYPE akka_http_requests_total counter
akka_http_requests_total 2.0
# HELP akka_http_responses_size_bytes HTTP response size
# TYPE akka_http_responses_size_bytes histogram
akka_http_responses_size_bytes_bucket{method="GET",path="/metrics",le="0.0",} 0.0
akka_http_responses_size_bytes_bucket{method="GET",path="/metrics",le="100.0",} 0.0
akka_http_responses_size_bytes_bucket{method="GET",path="/metrics",le="200.0",} 0.0
akka_http_responses_size_bytes_bucket{method="GET",path="/metrics",le="300.0",} 0.0
akka_http_responses_size_bytes_bucket{method="GET",path="/metrics",le="400.0",} 0.0
akka_http_responses_size_bytes_bucket{method="GET",path="/metrics",le="500.0",} 0.0
akka_http_responses_size_bytes_bucket{method="GET",path="/metrics",le="600.0",} 0.0
akka_http_responses_size_bytes_bucket{method="GET",path="/metrics",le="700.0",} 0.0
akka_http_responses_size_bytes_bucket{method="GET",path="/metrics",le="800.0",} 0.0
akka_http_responses_size_bytes_bucket{method="GET",path="/metrics",le="900.0",} 0.0
akka_http_responses_size_bytes_bucket{method="GET",path="/metrics",le="1000.0",} 0.0
akka_http_responses_size_bytes_bucket{method="GET",path="/metrics",le="2000.0",} 0.0
akka_http_responses_size_bytes_bucket{method="GET",path="/metrics",le="3000.0",} 0.0
akka_http_responses_size_bytes_bucket{method="GET",path="/metrics",le="4000.0",} 1.0
akka_http_responses_size_bytes_bucket{method="GET",path="/metrics",le="5000.0",} 1.0
akka_http_responses_size_bytes_bucket{method="GET",path="/metrics",le="6000.0",} 1.0
akka_http_responses_size_bytes_bucket{method="GET",path="/metrics",le="7000.0",} 1.0
akka_http_responses_size_bytes_bucket{method="GET",path="/metrics",le="8000.0",} 1.0
akka_http_responses_size_bytes_bucket{method="GET",path="/metrics",le="9000.0",} 1.0
akka_http_responses_size_bytes_bucket{method="GET",path="/metrics",le="10000.0",} 1.0
akka_http_responses_size_bytes_bucket{method="GET",path="/metrics",le="20000.0",} 1.0
akka_http_responses_size_bytes_bucket{method="GET",path="/metrics",le="30000.0",} 1.0
akka_http_responses_size_bytes_bucket{method="GET",path="/metrics",le="40000.0",} 1.0
akka_http_responses_size_bytes_bucket{method="GET",path="/metrics",le="50000.0",} 1.0
akka_http_responses_size_bytes_bucket{method="GET",path="/metrics",le="60000.0",} 1.0
akka_http_responses_size_bytes_bucket{method="GET",path="/metrics",le="70000.0",} 1.0
akka_http_responses_size_bytes_bucket{method="GET",path="/metrics",le="80000.0",} 1.0
akka_http_responses_size_bytes_bucket{method="GET",path="/metrics",le="90000.0",} 1.0
akka_http_responses_size_bytes_bucket{method="GET",path="/metrics",le="+Inf",} 1.0
akka_http_responses_size_bytes_count{method="GET",path="/metrics",} 1.0
akka_http_responses_size_bytes_sum{method="GET",path="/metrics",} 3827.0
# HELP akka_http_responses_total HTTP responses
# TYPE akka_http_responses_total counter
akka_http_responses_total{method="GET",path="/testStream",} 1.0
akka_http_responses_total{method="GET",path="/metrics",} 1.0

Problem with big requests (>~100kb)

Hi!

After updating to akka-http 10.2.2 and akka-http-metrics 1.4.1, I'm having problems with "big" requests (around 100 kB or larger) resulting in 500 Internal Server Error. I have a really hard time tracking down what's really causing this, but my conclusion is that it at least seems to be related to akka-http-metrics. In my experience, the problem arises whenever I add the following import in my Main file where I set up my server:

import fr.davit.akka.http.metrics.core.HttpMetrics._

I've tried to debug but it seems like the request never even enters the route, so I suspect that something might be crashing pretty early in the request management, resulting in an automatic 500 being returned.

I know this is not much to go on and it's possible this is not a problem with akka-http-metrics at all, but I've been banging my head against the wall with this problem for days now and just curious if there's anyone else seeing the same problem.

I'm using the following versions:

akka-http: 10.2.2
akka: 2.5.32
akka-http-metrics: 1.4.1

Unable to get started with akka-http-metrics for Scala Akka HTTP and Prometheus

I have an Akka HTTP application written in Scala and would like to integrate akka-http-metrics for exposing API usage metrics in Prometheus format.

I started with adding the dependency

<dependency>
      <groupId>fr.davit</groupId>
      <artifactId>akka-http-metrics-prometheus_2.12</artifactId>
      <version>1.4.1</version>
    </dependency>

I have multiple issues getting started,

and then I see that your docs mention

Record metrics from your akka server by importing the implicits from HttpMetricsRoute

Hence, I imported the class.
There are two errors:

  1. My IDE is not able to resolve the class HttpMetricsRoute
  2. The mvn build fails.
[ERROR] error: missing or invalid dependency detected while loading class file 'HttpMetrics.class'.
[INFO] Could not access type ClassicActorSystemProvider in package akka.actor,
[INFO] because it (or its dependencies) are missing. Check your build definition for
[INFO] missing or conflicting dependencies. (Re-run with `-Ylog-classpath` to see the problematic classpath.)
[INFO] A full rebuild may help if 'HttpMetrics.class' was compiled against an incompatible version of akka.actor.
[WARNING] three warnings found
[ERROR] one error found
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time:  12.044 s
[INFO] Finished at: 2020-12-16T09:43:43+05:30
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal net.alchim31.maven:scala-maven-plugin:3.2.2:compile (scala-compile) on project provisioner-server-rest: wrap: org.apache.commons.exec.ExecuteException: Process exited with an error: 1 (Exit value: 1) -> [Help 1]

My existing route is

Http().bindAndHandle(
  service.route, conf.getString(PROVISIONER_LISTEN_INTERFACE), conf.getInt(PROVISIONER_PORT)
)

In the docs, you mention using newMeteredServerAt(). I get compilation errors with:

val registry: HttpMetricsRegistry = ... // concrete registry implementation

Http().newMeteredServerAt(conf.getString(PROVISIONER_LISTEN_INTERFACE), conf.getInt(PROVISIONER_PORT), registry).bindFlow(service.route)
  1. How do I instantiate the metrics registry? My existing code has no such object/class.
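For what it's worth, a minimal sketch of instantiating a concrete registry and passing it to newMeteredServerAt, under the 1.x API shown in the README. conf, PROVISIONER_LISTEN_INTERFACE, PROVISIONER_PORT and service.route are the reporter's own names; this is a wiring sketch, not a tested answer:

```scala
import akka.actor.ActorSystem
import akka.http.scaladsl.Http
import fr.davit.akka.http.metrics.core.HttpMetrics._ // newMeteredServerAt extension method
import fr.davit.akka.http.metrics.prometheus.{PrometheusRegistry, PrometheusSettings}
import io.prometheus.client.CollectorRegistry

implicit val system: ActorSystem = ActorSystem()

// A concrete registry implementation: Prometheus backed by the default collector
val settings: PrometheusSettings  = PrometheusSettings.default
val registry: PrometheusRegistry  = PrometheusRegistry(CollectorRegistry.defaultRegistry, settings)

Http()
  .newMeteredServerAt(conf.getString(PROVISIONER_LISTEN_INTERFACE), conf.getInt(PROVISIONER_PORT), registry)
  .bindFlow(service.route)
```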

I appreciate your support. Would it be possible to do a Zoom meeting for a quick resolution?

Provide support for HTTP/2

As of today, Akka HTTP only allows using HTTP/2 with the bindAndHandleAsync method. Since .recordMetrics returns a flow, I can't use this library with HTTP/2, which only takes an HttpRequest ⇒ Future[HttpResponse] handler.

Getting error when using bindFlow: Error in stage [fr.davit.akka.http.metrics.core.MeterStage$$anon$1-MeterStage]: No value present

Hello,
I'm getting the following stack trace when trying to use v1.5.1 in the following manner:

object Metrics {
  import akka.util.Timeout
  import fr.davit.akka.http.metrics.core.scaladsl.server.HttpMetricsDirectives.metrics
  import fr.davit.akka.http.metrics.prometheus.marshalling.PrometheusMarshallers._
  import fr.davit.akka.http.metrics.prometheus.{Buckets, PrometheusRegistry, PrometheusSettings, Quantiles}
  import io.prometheus.client.CollectorRegistry

  class ClusterMetricsRouter(implicit context: ActorRefFactory, timeout: Timeout) {
    import ClusterMetricsRouter._

    implicit val _ = Implicits.system
    implicit val executionContext = context.dispatcher

    val route: Route = {
      metrics(ClusterMetricsRouter.registry)
    }

    val internal_route: Route = {
      metrics(ClusterMetricsRouter.internal_registry)
    }
  }

  object ClusterMetricsRouter {
    private val settings: PrometheusSettings = PrometheusSettings
      .default
      .withIncludePathDimension(true)
      .withIncludeMethodDimension(true)
      .withIncludeStatusDimension(true)
      .withDurationConfig(Buckets(1, 2, 3, 5, 8, 13, 21, 34))
      .withReceivedBytesConfig(Quantiles(0.5, 0.75, 0.9, 0.95, 0.99))
      .withSentBytesConfig(PrometheusSettings.DefaultQuantiles)
      .withDefineError(_.status.isFailure)

    private val collector: CollectorRegistry = CollectorRegistry.defaultRegistry

    val registry: PrometheusRegistry = PrometheusRegistry(collector, settings)
  }
}

...

def routeWithClose(route: Future[akka.Done] => Route): Flow[HttpRequest, HttpResponse, Any] = {
  import akka.http.scaladsl.server
  Flows.lazyFlow { () =>
    val p = Promise[akka.Done]()
    server.Route.toFlow(route(p.future)).watchTermination() { case (mat, done) =>
      p.completeWith(done)
      mat
    }
  }
}

 ...

Http()
  .newMeteredServerAt(
    "0.0.0.0",
    8443,
    Routes.Metrics.ClusterMetricsRouter.registry
  )
  .enableHttps(ConnectionContext.httpsServer(ssl))
  .bindFlow(routeWithClose(done => Routes.route(done)))

I'm using an older release because of this issue, since I need .bindFlow above.
#184

However - this throws the following stack:

2021-11-01T21:20:37.641+00:00 | ERROR | default-akka.actor.default-dispatcher-12 | akka.actor.RepointableActorRef | Error in stage [fr.davit.akka.http.metrics.core.MeterStage$$anon$1-MeterStage]: No value present |
"" | java.util.NoSuchElementException: No value present
        at java.base/java.util.Optional.get(Optional.java:141)
        at fr.davit.akka.http.metrics.core.MeterStage$$anon$1$$anon$3.onPush(MeterStage.scala:76)
        at akka.stream.impl.fusing.GraphInterpreter.processPush(GraphInterpreter.scala:542)
        at akka.stream.impl.fusing.GraphInterpreter.processEvent(GraphInterpreter.scala:496)
        at akka.stream.impl.fusing.GraphInterpreter.execute(GraphInterpreter.scala:390)
        at akka.stream.impl.fusing.GraphInterpreterShell.runBatch(ActorGraphInterpreter.scala:650)
        at akka.stream.impl.fusing.GraphInterpreterShell$AsyncInput.execute(ActorGraphInterpreter.scala:521)
        at akka.stream.impl.fusing.GraphInterpreterShell.processEvent(ActorGraphInterpreter.scala:625)
        at akka.stream.impl.fusing.ActorGraphInterpreter.akka$stream$impl$fusing$ActorGraphInterpreter$$processEvent(ActorGraphInterpreter.scala:800)
        at akka.stream.impl.fusing.ActorGraphInterpreter.akka$stream$impl$fusing$ActorGraphInterpreter$$shortCircuitBatch(ActorGraphInterpreter.scala:787)
        at akka.stream.impl.fusing.ActorGraphInterpreter$$anonfun$receive$1.applyOrElse(ActorGraphInterpreter.scala:819)
        at akka.actor.Actor.aroundReceive(Actor.scala:537)
        at akka.actor.Actor.aroundReceive$(Actor.scala:535)
        at akka.stream.impl.fusing.ActorGraphInterpreter.aroundReceive(ActorGraphInterpreter.scala:716)
        at akka.actor.ActorCell.receiveMessage(ActorCell.scala:580)
        at akka.actor.ActorCell.invoke(ActorCell.scala:548)
        at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:270)
        at akka.dispatch.Mailbox.run(Mailbox.scala:231)
        at akka.dispatch.Mailbox.exec(Mailbox.scala:243)
        at java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:290)
        at java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1016)
        at java.base/java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1665)
        at java.base/java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1598)
        at java.base/java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:183)

Is there some setup that I'm missing from the above? Equivalent code works elsewhere when not using .bindFlow.

I'm hoping that this is a user error, and you can point me to something I'm doing wrong in my code to resolve the issue?

Request/response headers as metric dimensions

It would be nice to be able to configure some headers to be picked up as dimensions for a namespace, for example to track user-agent usage or responses from APIs marked with deprecation.

Is there any alternative suggestion for this, currently?

metric for "active_connections" ?

First of all, thank you so much! This tool is so convenient.

Coming to my question. I see that there is a metric for active requests, but I was wondering if it's possible to get a metric for active connections as well?

I have been trying to get hold of active connections from an akka-http application, but as of now, I have no idea where to get it from.

Possible Enhancement: Make metric names configurable

Hi @RustedBones, would you be open to a PR that makes the names of metrics configurable? It could possibly be added to each individual backend's settings. For example, for the Prometheus backend's active requests metric, the name could be read from the Prometheus settings as follows:

  override lazy val active: Gauge = io.prometheus.client.Gauge
    .build()
    .namespace(settings.namespace)
    .name(settings.activeRequestsMetricName) // Could also have a separate MetricsName case class, etc.

where activeRequestsMetricName would default to "requests.active" (as it is now).

If you are open to this, I will happily do it for all backends, or follow another implementation suggestion. We actually have a couple of different use cases where this would be quite beneficial.

Custom dimensions

Hi,

I was wondering how we can introduce more metrics, such as counts per response code. Can we extend the existing counters/timers/gauges to introduce new ones as needed?

Thanks

Exposing datadog metrics

Hi,
I don't see any example of exposing Datadog metrics. Is this functionality supported?
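For reference, a hedged sketch of wiring the Datadog backend: DatadogRegistry wraps a StatsD client (as seen in other issues in this tracker) and pushes metrics to the local Datadog agent. The client here is built with java-dogstatsd-client's NonBlockingStatsDClientBuilder; the builder names may differ across client versions:

```scala
import com.timgroup.statsd.NonBlockingStatsDClientBuilder
import fr.davit.akka.http.metrics.datadog.DatadogRegistry

// StatsD client pointing at the local Datadog agent
val client = new NonBlockingStatsDClientBuilder()
  .hostname("localhost")
  .port(8125)
  .build()

val registry = DatadogRegistry(client)
// then e.g.: Http().newMeteredServerAt(interface, port, registry).bindFlow(route)
```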

thanks

NullPointerException when running with recordMetrics

I am trying to use the akka-http-metrics-prometheus registry

Scala Version: 2.12.11
Akka Http Version: 10.1.12
Akka Http Metrics Prometheus Version: 1.1.0

My Code looks like this

object Registry {

  def apply() = {

    val settings = PrometheusSettings
      .default
      .withDurationConfig(Buckets(1, 2, 3, 5, 8, 13, 21, 34))
      .withReceivedBytesConfig(Quantiles(0.5, 0.75, 0.9, 0.95, 0.99))
      .withSentBytesConfig(PrometheusSettings.DefaultQuantiles)

    val prometheus = new CollectorRegistry()

    PrometheusRegistry(settings = settings)

  }

}
Http().bindAndHandle(routes.recordMetrics(Registry()), appConfig.server.interface,  appConfig.server.port)

When I try to call any route, I get a NullPointerException.

Could not materialize handling flow for IncomingConnection(/127.0.0.1:8080,/127.0.0.1:64662,Flow(FlowShape(IncomingTCP.in(1710177258),GraphStages$Detacher.out(817845304))))
java.lang.NullPointerException
	at scala.concurrent.impl.Promise$DefaultPromise.onComplete(Promise.scala:307)
	at fr.davit.akka.http.metrics.core.HttpMetricsRegistry.onConnection(HttpMetricsRegistry.scala:126)
	at fr.davit.akka.http.metrics.core.scaladsl.server.HttpMetricsRoute.$anonfun$recordMetrics$1(HttpMetricsRoute.scala:65)
	at akka.stream.impl.Compose.apply(TraversalBuilder.scala:169)
	at akka.stream.impl.PhasedFusingActorMaterializer.materialize(PhasedFusingActorMaterializer.scala:529)
	at akka.stream.impl.PhasedFusingActorMaterializer.materialize(PhasedFusingActorMaterializer.scala:449)
	at akka.stream.impl.PhasedFusingActorMaterializer.materialize(PhasedFusingActorMaterializer.scala:441)
	at akka.stream.scaladsl.RunnableGraph.run(Flow.scala:703)
	at akka.http.scaladsl.HttpExt.$anonfun$bindAndHandle$1(Http.scala:252)
	at akka.stream.impl.fusing.MapAsyncUnordered$$anon$31.onPush(Ops.scala:1401)
	at akka.stream.impl.fusing.GraphInterpreter.processPush(GraphInterpreter.scala:541)
	at akka.stream.impl.fusing.GraphInterpreter.processEvent(GraphInterpreter.scala:495)
	at akka.stream.impl.fusing.GraphInterpreter.execute(GraphInterpreter.scala:390)
	at akka.stream.impl.fusing.GraphInterpreterShell.runBatch(ActorGraphInterpreter.scala:624)
	at akka.stream.impl.fusing.GraphInterpreterShell$AsyncInput.execute(ActorGraphInterpreter.scala:501)
	at akka.stream.impl.fusing.GraphInterpreterShell.processEvent(ActorGraphInterpreter.scala:599)
	at akka.stream.impl.fusing.ActorGraphInterpreter.akka$stream$impl$fusing$ActorGraphInterpreter$$processEvent(ActorGraphInterpreter.scala:768)
	at akka.stream.impl.fusing.ActorGraphInterpreter$$anonfun$receive$1.applyOrElse(ActorGraphInterpreter.scala:783)
	at akka.actor.Actor.aroundReceive(Actor.scala:534)
	at akka.actor.Actor.aroundReceive$(Actor.scala:532)
	at akka.stream.impl.fusing.ActorGraphInterpreter.aroundReceive(ActorGraphInterpreter.scala:690)
	at akka.actor.ActorCell.receiveMessage(ActorCell.scala:573)
	at akka.actor.ActorCell.invoke(ActorCell.scala:543)
	at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:269)
	at akka.dispatch.Mailbox.run(Mailbox.scala:230)
	at akka.dispatch.Mailbox.exec(Mailbox.scala:242)
	at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
	at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
	at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
	at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:157)

I am pretty sure that I have made some mistake during initialisation, hence the issue.
I would really appreciate it if you could point me in the right direction.

Thanks

Not all metrics are exposed for Prometheus

Project using:

AKKA 2.6.6
AKKA HTTP 10.1.12

    "fr.davit" %% "akka-http-metrics-prometheus" % "1.1.1"

Creating registry as:

  protected val prometheusCollector = CollectorRegistry
    .defaultRegistry
  protected val prometheusSettings = PrometheusSettings
    .default
    .withNamespace(metricsNamespace)
    .withIncludeMethodDimension(true)
    .withIncludePathDimension(true)
    .withIncludeStatusDimension(true)
    .withDefineError(_.status.isFailure)
  protected val prometheusRegistry =
    PrometheusRegistry(prometheusCollector, prometheusSettings)

Exposing as:

      pathLabeled("metrics") {
        import fr.davit.akka.http.metrics.prometheus.marshalling.PrometheusMarshallers._
        metrics(prometheusRegistry)
      } ~

But only the following metrics are exposed; moreover, server_connections_total should show at least 1:

# HELP server_connections_active Active TCP connections
# TYPE server_connections_active gauge
server_connections_active 0.0
# HELP server_connections_total Total TCP connections
# TYPE server_connections_total counter
server_connections_total 0.0

More detail on how to plot relevant Duration Config on Prometheus

This is more of a question, than an issue.

I am currently using the Prometheus package and am configuring the duration buckets exactly like the example you've provided:

val settings = PrometheusSettings
  .default
  .withDurationConfig(Buckets(1, 2, 3, 5, 8, 13, 21, 34))
  .withReceivedBytesConfig(Quantiles(0.5, 0.75, 0.9, 0.95, 0.99))
  .withSentBytesConfig(PrometheusSettings.DefaultQuantiles)

In Prometheus, I am setting up the following graph, which uses the same query for the 99th, 95th, 90th and 50th percentiles:
histogram_quantile(0.99, avg(rate(akka_http_responses_duration_seconds_bucket[5m])) by (le))


The response times displayed in the graph are known to be inaccurate because I have set an overall timeout of 200 ms.
Sorry for my lack of knowledge here, but a little more explanation of how to configure the duration config would really help me debug and resolve the issue on my end.
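A note that may help here: the configured buckets are upper bounds in seconds (the exposed metric is akka_http_responses_duration_seconds_bucket), so with a 200 ms timeout every observation lands in the first (1 s) bucket and histogram_quantile has nothing to interpolate across. A sketch with sub-second boundaries; the exact values are an assumption, and should be chosen around the expected latency:

```scala
import fr.davit.akka.http.metrics.prometheus.{Buckets, PrometheusSettings}

// Bucket upper bounds are in seconds; with a 200 ms request timeout,
// useful quantile resolution needs boundaries well below 1 second.
val settings = PrometheusSettings
  .default
  .withDurationConfig(Buckets(0.005, 0.01, 0.025, 0.05, 0.1, 0.15, 0.2, 0.3, 0.5, 1))
```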

Getting exception when StatusDimension or PathDimension is true (prometheus)

scala = 2.13
akka-http-metrics-prometheus = 0.6.0
akka-stream = 2.5.25
akka-http = 10.1.9

val settings: HttpMetricsSettings = HttpMetricsSettings
  .default
  .withIncludeStatusDimension(true)
  .withIncludePathDimension(true)
val registry: PrometheusRegistry = PrometheusRegistry(settings)

val bindingFuture = Http().bindAndHandle(route.recordMetrics(registry), "localhost", 8080)

after any request:

akka.http.impl.util.One2OneBidiFlow$OutputTruncationException: Inner flow was completed without producing result elements for 1 outstanding elements
 at akka.http.impl.util.One2OneBidiFlow$OutputTruncationException$.apply(One2OneBidiFlow.scala:22)
 at akka.http.impl.util.One2OneBidiFlow$OutputTruncationException$.apply(One2OneBidiFlow.scala:22)
 at akka.http.impl.util.One2OneBidiFlow$One2OneBidi$$anon$1$$anon$4.onUpstreamFinish(One2OneBidiFlow.scala:97)
 at akka.stream.impl.fusing.GraphInterpreter.processEvent(GraphInterpreter.scala:506)
 at akka.stream.impl.fusing.GraphInterpreter.execute(GraphInterpreter.scala:376)
 at akka.stream.impl.fusing.GraphInterpreterShell.runBatch(ActorGraphInterpreter.scala:606)
 at akka.stream.impl.fusing.GraphInterpreterShell$AsyncInput.execute(ActorGraphInterpreter.scala:485)
 at akka.stream.impl.fusing.GraphInterpreterShell.processEvent(ActorGraphInterpreter.scala:581)
 at akka.stream.impl.fusing.ActorGraphInterpreter.akka$stream$impl$fusing$ActorGraphInterpreter$$processEvent(ActorGraphInterpreter.scala:749)
 at akka.stream.impl.fusing.ActorGraphInterpreter$$anonfun$receive$1.applyOrElse(ActorGraphInterpreter.scala:764)
 at akka.actor.Actor.aroundReceive(Actor.scala:539)
 at akka.actor.Actor.aroundReceive$(Actor.scala:537)
 at akka.stream.impl.fusing.ActorGraphInterpreter.aroundReceive(ActorGraphInterpreter.scala:671)
 at akka.actor.ActorCell.receiveMessage(ActorCell.scala:612)
 at akka.actor.ActorCell.invoke(ActorCell.scala:581)
 at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:268)
 at akka.dispatch.Mailbox.run(Mailbox.scala:229)
 at akka.dispatch.Mailbox.exec(Mailbox.scala:241)
 at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
 at akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
 at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
 at akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)

If I remove

.withIncludeStatusDimension(true)
.withIncludePathDimension(true)

then everything is OK.

Can't see HTTP status group & response time (duration) on Graphite

Hi :)

I used the following two dependencies

    "fr.davit"               %% "akka-http-metrics-core" % "0.5.0",
    "fr.davit"               %% "akka-http-metrics-datadog" % "0.5.0",

I have the following setup:

val registry = DatadogRegistry(JimiStatsDClient.apply)
val settings = HttpMetricsSettings.default.withIncludeStatusDimension(true).withIncludePathDimension(true)

Http().bindAndHandle(routes.recordMetrics(registry, settings), httpInterface, httpPort)

I can see the following metrics folder in my Graphite (screenshot omitted).

However, I am not seeing HTTP status counts or response time (duration).

Please let me know if I have missed something.

Thank you,
Sean

5xx and 4xx responses go to unlabelled when handled with AKKA `ExecutionDirectives.handleExceptions`

All 2xx responses are recorded properly under their respective endpoint label; however, some 5xx and 4xx responses go to unlabelled:


The registry is created as follows:

protected def createPrometheusRegistry(
    metricsNamespace: String = "core_server"
  ): PrometheusRegistry =
    synchronized {
      assume(!metricsNamespace.endsWith("_"))
      val prometheusCollector = CollectorRegistry.defaultRegistry
      val prometheusSettings = PrometheusSettings
        .default
        .withNamespace(metricsNamespace)
        .withIncludeMethodDimension(true)
        .withIncludePathDimension(true)
        .withIncludeStatusDimension(true)
        .withDurationConfig(
          Buckets(0.005, 0.01, .025, .05, .075, .1, .25, .5, .75, 0.875, 1, 1.75, 2.5, 5, 7.5, 10,
            15, 20, 30)
        )
        .withReceivedBytesConfig {
          val buckets =
            Range(0, 1000, 100) ++ Range(1000, 10000, 1000) ++ Range(10000, 100000, 10000)
          Buckets(buckets.map(_.toDouble).toList)
        }
        .withSentBytesConfig {
          val buckets =
            Range(0, 1000, 100) ++ Range(1000, 10000, 1000) ++ Range(10000, 100000, 10000)
          Buckets(buckets.map(_.toDouble).toList)
        }

      PrometheusRegistry(prometheusCollector, prometheusSettings)
    }

And then we have the main route for that prometheusRegistry:

pathLabeled("metrics") {
            import fr.davit.akka.http.metrics.prometheus.marshalling.PrometheusMarshallers._
            metrics(prometheusRegistry)
          }

So that metrics are available on /metrics for the same AKKA server where we serve regular API calls. Therefore /api/v1/... (API calls) are routed through the load balancer to the end-user, so that the end-users can call paths that start with /api/v1, but /metrics is available only intra-cluster for Prometheus to scrape.

Our routes use pathPrefixLabeled and pathPrefix in the same way (screenshot omitted).

When a route responds with 400 for example, from its own route, let's say /api/v1/chat/ticket, the 4xx is properly recorded to route with label /api/v1/chat/ticket. However, when /api/v1/chat/ticket throws an exception and the exception is caught by AKKA's ExecutionDirectives.handleExceptions, when handleExceptions responds with 400, it is then recorded as unlabelled.

The exception-handling wrapper encapsulates nearly all directives, as follows:

handleRejections(rejectionHandler) {
  handleExceptions(exceptionHandler) {
    encodeResponse {
      cors(corsSettings) {
        pathPrefixLabeled("api" / Segment) {

The below code is from this very library, HttpMetricsDirectives. I can see that the response is patched with a PathLabel, but I figure this does not get passed when an exception is thrown from the path, because then there is no response.

private def rawPathPrefixLabeled[L](pm: PathMatcher[L], label: Option[String]): Directive[L] = {
    implicit val LIsTuple: Tuple[L] = pm.ev
    extractRequestContext.flatMap { ctx =>
      val pathCandidate = ctx.unmatchedPath.toString
      pm(ctx.unmatchedPath) match {
        case Matched(rest, values) =>
          tprovide(values) & mapRequestContext(_ withUnmatchedPath rest) & mapResponse { response =>
            val suffix = response.attribute(HttpMetrics.PathLabel).getOrElse("")
            val pathLabel = label match {
              case Some(l) => "/" + l + suffix // pm matches additional slash prefix
              case None    => pathCandidate.substring(0, pathCandidate.length - rest.charCount) + suffix
            }
            response.addAttribute(HttpMetrics.PathLabel, pathLabel)
          }
        case Unmatched =>
          reject
      }
    }
  }

Is it possible to label these responses originating from the exception-handling wrapper?

If not, my idea is to extend pathPrefixLabeled and pathPrefix to include the handleExceptions(exceptionHandler) part themselves, so that the response would originate from the proper route /api/v1/chat/ticket.

Specify label for pathSingleSlash route

Hello, with akka-http-metrics-prometheus v1.2.0,
when I use the pathSingleSlash directive to match incoming requests to the root path /, for example:

pathSingleSlash {
  get {
    complete("welcome !")
  }
} ~ path("version") {
  get {
    complete("1.0.0")
  }
}

So when I hit the root:
curl -X GET http://localhost:8080

The counter associated with this root endpoint is recorded with an unlabelled path:

# TYPE akka_http_responses_total counter
akka_http_responses_total{method="GET",path="unlabelled",status="2xx",} 1.0

Is there a way to specify a label for the root path?

Custom counter names

Hi!

Is it possible to somehow customize the name of the counters? I've looked at the code but it doesn't look like it.

If I want to use this lib in several different microservices (or similar), then I would want to know which microservice the counters belong to. So being able to specify a counter name prefix in the settings would be useful.
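A partial answer for the prefix use case: PrometheusSettings already exposes withNamespace (it appears in other issues in this tracker), which prefixes every metric name. A sketch, with my_service as a placeholder namespace:

```scala
import fr.davit.akka.http.metrics.prometheus.{PrometheusRegistry, PrometheusSettings}
import io.prometheus.client.CollectorRegistry

// Prefixes every metric name, e.g. my_service_requests_total
val settings = PrometheusSettings.default.withNamespace("my_service")
val registry = PrometheusRegistry(CollectorRegistry.defaultRegistry, settings)
```

This only covers a per-service prefix, not fully custom counter names.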

Thanks and regards,
Daniel

JVM metrics?

Hi,

I was wondering if there is a way to include JVM metrics such as heap size, garbage collections, etc.

tx.,

Add a method bindFlow(handler: Flow[I,O,Mat]) to HttpMetricsServerBuilder

When I plug in akka-http-metrics version 1.6.0, there is no way to pass a flow to the bindFlow method. The case class HttpMetricsServerBuilder only has the following signature for bindFlow:

def bindFlow(route: Route): Future[ServerBinding]

The previous version, 1.5.1, supported handlerFlow: Flow[HttpRequest, HttpResponse, _].

We use the library rocks.heikoseeberger.accessus.Accessus, whose .withTimestampedAccessLog(...) method returns a Flow[HttpRequest, HttpResponse, M]:

Http().newServerAt(host, port).bindFlow(routes.withTimestampedAccessLog(...))

After adding the akka-http-metrics library, which we need to track latency for Akka HTTP routes, the code below no longer compiles due to the constraint on the last line:

import rocks.heikoseeberger.accessus.Accessus._
import fr.davit.akka.http.metrics.core.HttpMetrics._

val server = Http()
  .newMeteredServerAt(settings.server.host, settings.server.port, MetricsController.registry)

server.bindFlow(route.withTimestampedAccessLog(...)) // This line does not compile, as bindFlow expects a Route

Implicit use of HttpMetricsRoute seems to no longer work

With the following code under Scala 2.12:

import fr.davit.akka.http.metrics.core.scaladsl.server.HttpMetricsRoute._
...
Http().bindAndHandle(route.recordMetrics(registry), "localhost", 8080)

even with scalac option -language:implicitConversions, I get the compile error:

... value recordMetrics is not a member of akka.http.scaladsl.server.Route
[error]     val server = Http().bindAndHandle(rootRoute.recordMetrics(metricsRegistry, settings), interface, port)

I'm using "fr.davit" %% "akka-http-metrics-prometheus" % "0.6.0"

However, this does work:

import fr.davit.akka.http.metrics.core.scaladsl.server.HttpMetricsRoute
...
val server = Http().bindAndHandle(HttpMetricsRoute(rootRoute).recordMetrics(metricsRegistry, settings), interface, port)

I feel it would be a good idea to update the example to use this explicit code, which works in more situations.

1.7.0 not found in repos

[error] (update) sbt.librarymanagement.ResolveException: Error downloading fr.davit:akka-http-metrics-prometheus_2.13:1.7.0
--
11-Apr-2022 12:24:41 | [error]   Not found
11-Apr-2022 12:24:41 | [error]   Not found
11-Apr-2022 12:24:41 | [error]   not found: /home/java/.ivy2/localfr.davit/akka-http-metrics-prometheus_2.13/1.7.0/ivys/ivy.xml
11-Apr-2022 12:24:41 | [error]   not found: https://repo1.maven.org/maven2/fr/davit/akka-http-metrics-prometheus_2.13/1.7.0/akka-http-metrics-prometheus_2.13-1.7.0.pom
11-Apr-2022 12:24:41 | [error]   not found: https://oss.sonatype.org/content/repositories/public/fr/davit/akka-http-metrics-prometheus_2.13/1.7.0/akka-http-metrics-prometheus_2.13-1.7.0.pom

Bug 1.4.0: java.lang.IllegalArgumentException: requirement failed: Responses with this status code must have an empty entity

Upon upgrading to 1.4.0 with Scala Steward, 6 out of our 504 tests automatically fail due to:

java.lang.IllegalArgumentException: requirement failed: Responses with this status code must have an empty entity
	at scala.Predef$.require(Predef.scala:281) ~[scala-library.jar:?]
	at akka.http.scaladsl.model.HttpResponse.<init>(HttpMessage.scala:515) ~[akka-http-core_2.12-10.2.2.jar:10.2.2]
	at akka.http.scaladsl.model.HttpResponse.copyImpl(HttpMessage.scala:565) ~[akka-http-core_2.12-10.2.2.jar:10.2.2]
	at akka.http.scaladsl.model.HttpResponse.transformEntityDataBytes(HttpMessage.scala:549) ~[akka-http-core_2.12-10.2.2.jar:10.2.2]
	at fr.davit.akka.http.metrics.core.HttpMetricsRegistry.onResponse(HttpMetricsRegistry.scala:146) ~[akka-http-metrics-core_2.12-1.4.0.jar:1.4.0]
	at fr.davit.akka.http.metrics.core.MeterStage$$anon$1$$anon$3.onPush(MeterStage.scala:78) ~[akka-http-metrics-core_2.12-1.4.0.jar:1.4.0]
	at akka.stream.impl.fusing.GraphInterpreter.processPush(GraphInterpreter.scala:541) ~[akka-stream_2.12-2.6.10.jar:2.6.10]
	at akka.stream.impl.fusing.GraphInterpreter.execute(GraphInterpreter.scala:423) ~[akka-stream_2.12-2.6.10.jar:2.6.10]
	at akka.stream.impl.fusing.GraphInterpreterShell.runBatch(ActorGraphInterpreter.scala:625) ~[akka-stream_2.12-2.6.10.jar:2.6.10]
	at akka.stream.impl.fusing.GraphInterpreterShell$AsyncInput.execute(ActorGraphInterpreter.scala:502) ~[akka-stream_2.12-2.6.10.jar:2.6.10]
	at akka.stream.impl.fusing.GraphInterpreterShell.processEvent(ActorGraphInterpreter.scala:600) ~[akka-stream_2.12-2.6.10.jar:2.6.10]
	at akka.stream.impl.fusing.ActorGraphInterpreter.akka$stream$impl$fusing$ActorGraphInterpreter$$processEvent(ActorGraphInterpreter.scala:769) ~[akka-stream_2.12-2.6.10.jar:2.6.10]
	at akka.stream.impl.fusing.ActorGraphInterpreter$$anonfun$receive$1.applyOrElse(ActorGraphInterpreter.scala:784) ~[akka-stream_2.12-2.6.10.jar:2.6.10]
	at akka.actor.Actor.aroundReceive(Actor.scala:537) ~[akka-actor_2.12-2.6.10.jar:2.6.10]
	at akka.actor.Actor.aroundReceive$(Actor.scala:535) ~[akka-actor_2.12-2.6.10.jar:2.6.10]
	at akka.stream.impl.fusing.ActorGraphInterpreter.aroundReceive(ActorGraphInterpreter.scala:691) ~[akka-stream_2.12-2.6.10.jar:2.6.10]
	at akka.actor.ActorCell.receiveMessage(ActorCell.scala:577) [akka-actor_2.12-2.6.10.jar:2.6.10]
	at akka.actor.ActorCell.invoke(ActorCell.scala:547) [akka-actor_2.12-2.6.10.jar:2.6.10]
	at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:270) [akka-actor_2.12-2.6.10.jar:2.6.10]
	at akka.dispatch.Mailbox.run(Mailbox.scala:231) [akka-actor_2.12-2.6.10.jar:2.6.10]
	at akka.dispatch.Mailbox.exec(Mailbox.scala:243) [akka-actor_2.12-2.6.10.jar:2.6.10]
	at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:290) [?:?]
	at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1020) [?:?]
	at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1656) [?:?]
	at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1594) [?:?]
	at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:183) [?:?]

We create socket connections as:

pathLabeled("channel") {
  get {
    handleWebSocketMessages(
      fConversationFactory()
        .socketForChat(chatID, kind)
    )
  }
} ~

Above that we have a pathPrefixLabeled and so on.

Path label should reflect the matched path only

Raised from #40

When developing akka-http-metrics I made the wrong assumption that success responses are created only when the path is fully matched. This is not the case.
There are two possible solutions:

  • Have a custom RequestContext that can tell, on complete, how much of the path was matched
  • Have the path labelling as opt-in

In both cases this is not related to the HTTP method. I'm closing this issue and will reference it in a new dedicated one.

Support individual metrics per route

Given that the .recordMetrics API returns a Flow rather than a Route, it's not possible to instantiate separate collectors per API; it can only be done on the parent route.

This can be an important requirement when supporting multiple APIs in a single web server (e.g. ingesting streaming content and serving static resources).

Supporting individual metrics per route could either be done automatically (e.g. for Prometheus, add a label with an inferred route name/path) or manually, by instantiating the akka-http-metrics flow per Route.
