
jmx-monitoring-stacks's Introduction

Overview

This repo demonstrates examples of JMX monitoring stacks that can monitor Confluent Cloud and Confluent Platform. While the Confluent Cloud UI and Confluent Control Center provide an opinionated view of Apache Kafka monitoring, JMX monitoring stacks serve a larger purpose for our users, allowing them to set up monitoring across multiple parts of their organization, many outside of Kafka, and to have a single pane of glass.

This project provides metrics and dashboards for a number of monitoring stacks.

๐Ÿ“Š Dashboards


List of available dashboards for Confluent Platform:

Dashboard Prometheus and Grafana New Relic Metricbeat and Kibana Telegraf and Influx Datadog
Kafka Cluster yes yes yes yes yes
Zookeeper yes yes yes
KRaft yes
Schema Registry yes yes
Kafka Connect yes yes
ksqlDB yes yes
Producer/Consumer yes yes yes yes
Lag Exporter yes
Topics yes yes
Kafka Streams yes
Kafka Streams RocksDB yes
Quotas yes
TX Coordinator yes
Rest Proxy yes
Cluster Linking yes
Oracle CDC connector yes
Debezium connectors yes
Mongo connector yes
librdkafka clients yes
Confluent RBAC yes
Replicator yes
Tiered Storage yes

List of available dashboards for Confluent Cloud:

Dashboard Prometheus and Grafana New Relic Metricbeat and Kibana
Cluster yes yes yes
Producer/Consumer yes
ksql yes
Billing/Cost tracking yes

โš ๏ธ Alerts

Alerts are available for some of the stacks.

How to use with Confluent cp-ansible

To add JMX exporter configurations to Confluent cp-ansible, please refer to this README

How to use with Kubernetes and Confluent for Kubernetes Operator (CFK)

To add JMX exporter configurations to your Kubernetes workspace, please refer to this README

How to use with Confluent cp-demo

This repo is intended to work smoothly with Confluent cp-demo.

Make sure you have enough system resources on the local host to run this. Verify in the advanced Docker preferences settings that the memory available to Docker is at least 8 GB (default is 2 GB).

NOTE: jq is required to be installed on your machine to run the demo.
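Since the start scripts abort when jq is missing, it can help to check prerequisites up front. A minimal sketch of such a pre-flight check (the `require` helper is illustrative, not part of the repo's scripts):

```shell
#!/bin/sh
# Fail fast if a required CLI tool is missing, mirroring the jq check
# that the start.sh scripts perform. The helper name is illustrative.
require() {
  command -v "$1" >/dev/null 2>&1 || {
    echo "ERROR: This script requires '$1'. Please install '$1' and run again." >&2
    return 1
  }
}

require sh && echo "sh OK"   # before running the demo: require jq
```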

  1. Ensure that cp-demo is not already running on the local host.

  2. Decide which monitoring stack to demo and set the MONITORING_STACK variable accordingly.

NOTE: New Relic requires a License Key to be added in jmxexporter-newrelic/start.sh

NOTE: Datadog requires a DATADOG_API_KEY and DATADOG_SITE to be added in datadog/start.sh. Datadog offers a 14-day trial for new users.

# Set only one of these
MONITORING_STACK=jmxexporter-prometheus-grafana
MONITORING_STACK=metricbeat-elastic-kibana
MONITORING_STACK=jmxexporter-newrelic
MONITORING_STACK=jolokia
MONITORING_STACK=jolokia-telegraf-influxdb
MONITORING_STACK=datadog
  3. Clone cp-demo and check out a branch.
# Example with CP-DEMO 7.6.1 version (all branches starting from 7.2.0 have been tested)
CP_DEMO_VERSION=7.6.1-post

[[ -d "cp-demo" ]] || git clone https://github.com/confluentinc/cp-demo.git
(cd cp-demo && git fetch && git checkout $CP_DEMO_VERSION && git pull)
  4. Clone jmx-monitoring-stacks and check out the main branch.
[[ -d "jmx-monitoring-stacks" ]] || git clone https://github.com/confluentinc/jmx-monitoring-stacks.git
(cd jmx-monitoring-stacks && git fetch && git checkout main && git pull)
  5. Start the monitoring solution with the selected STACK. This command also starts cp-demo; you do not need to start cp-demo separately.
${MONITORING_STACK}/start.sh
  6. Stop the monitoring solution. This command also stops cp-demo; you do not need to stop cp-demo separately.
${MONITORING_STACK}/stop.sh

How to use with Apache Kafka client applications (producers, consumers, Kafka Streams applications)

For an example that showcases how to monitor Apache Kafka client applications, and steps through various failure scenarios to see how they are reflected in the provided metrics, see the Observability for Apache Kafkaยฎ Clients to Confluent Cloud tutorial.

How to use with a minimal configuration: DEV-toolkit

Open in Gitpod

To run a lightweight dev environment:

  1. cd dev-toolkit
  2. Put your new dashboards into the grafana-wip folder
  3. Run start.sh; it creates a minimal environment with a KRaft cluster, Prometheus, Grafana and a Spring-based Java client
  4. For Grafana, go to http://localhost:3000 and log in with admin/password
  5. Run stop.sh

Run with profiles

To add more use cases, we leverage Docker Compose profiles.

To run the replicator scenario, use start.sh --profile replicator. Profiles can also be combined, e.g. start.sh --profile schema-registry --profile ksqldb.

Currently supported profiles:

  • replicator
  • schema-registry
  • ksqldb
  • consumer (with kafka-lag-exporter included)
  • clientsreduced (kafka clients with a limited number of metrics exposed)
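Under the hood, Docker Compose starts a profile-gated service only when its profile is requested. A sketch of what such a service definition could look like (service name and image tag are illustrative, not the repo's actual compose file):

```yaml
services:
  replicator:
    image: confluentinc/cp-enterprise-replicator:7.6.1   # illustrative tag
    profiles: ["replicator"]   # started only when --profile replicator is passed
```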

FAQ

  • What if I need more components?

More docker-compose environments will be released in the future; in the meantime you can use Kafka Docker Composer.

  • What if I need more prometheus jobs?

You can add them to start.sh, e.g.

# ADD client monitoring to prometheus config
cat <<EOF >> assets/prometheus/prometheus-config/prometheus.yml

  - job_name: 'spring-client'
    static_configs:
      - targets: ['spring-client:9191']
        labels:
          env: "dev"
EOF

You can also change the prometheus configuration here.

jmx-monitoring-stacks's People

Contributors

albefaedda, amith-at-github, aschmid13, awalther28, confluenttools, coughman, dabz, dhoard, gitfrog0, hifly81, jeanlouisboudart, jeqo, justberkhout, justinrlee, lgouellec, ludovic-boutros, mcolomerc, mosheblumbergx, ncapelle, oorobfuoo, pneff93, ram-pi, schm1tz1, sincejune, souquieresadam, tpham305, tsuz, vdesabou, waliaabhishek, ybyzek


jmx-monitoring-stacks's Issues

Add metrics and grafana dashboard for Confluent Cluster Linking

The current kafka_broker.yml Prometheus file can only scrape these groups of metrics related to Cluster Linking:

kafka_server_clientquotamanager_clusterlinkdiskthrottle
kafka_server_replicamanager_throttledclusterlinkreplicaspersec

kafka.network:type=RequestMetrics,name={LocalTimeMs|RemoteTimeMs|RequestBytes|RequestQueueTimeMs|ResponseQueueTimeMs|ResponseSendIoTimeMs|ResponseSendTimeMs|ThrottleTimeMs|TotalTimeMs},request={CreateClusterLinks|DeleteClusterLinks|ListClusterLinks}
Depending on the request name, provides statistics on requests on the cluster link, including request and response times, time requests wait in the queue, size of requests (bytes), and so forth.

kafka.server:type=DelayedOperationPurgatory,name=PurgatorySize,delayedOperation=ClusterLink
Provides metrics on delayed operations on the cluster link.

kafka.server:type=ReplicaManager,name=BlockedOnMirrorSourcePartitionCount
Number of mirrored partitions that are blocked on fetching data due to issues on the source cluster.

kafka.server:type=ReplicaManager,name=UnderMinIsrMirrorPartitionCount
Number of mirrored partitions that are under min ISR.

kafka.server:type=ReplicaManager,name=UnderReplicatedMirrorPartitions
Number of mirrored partitions that are under replicated.

Visibility needs to be added for all the metrics listed on this page:
https://docs.confluent.io/platform/current/multi-dc-deployments/cluster-linking/metrics.html

Unable to get kafka_connect_cluster_id from JMX exporter

After importing the JMX exporter config file, I couldn't find the kafka_connect_cluster_id metric under the metric key kafka_connect_app_info.

Only the metrics below are shown:

kafka_connect_app_info{start_time_ms="1652081661119",client_id="connect-1",job="kafka-connect"} 1.0
kafka_connect_app_info{commit_id="c6d7e3013b411760",client_id="connect-1",job="kafka-connect"} 1.0
kafka_connect_app_info{version="7.0.0-ccs",client_id="connect-1",job="kafka-connect"} 1.0

Discrepancy between blog post and Confluent docs

Thank you for putting this repository together. It has helped me quite a lot in understanding the relationship between JMX, Kafka components, grafana and prometheus.

I have a question related to the blog post for this repo and some confusion that the official documentation has caused me.

The Confluent documentation on connector monitoring mentions "Connect can be configured to report stats using additional pluggable stats reporters using the metric.reporters configuration option". My understanding of that sentence is that it needs to be set in order to serve JMX metrics; however, this confuses me because I can't see any usage of it for the connector agents within jmx-monitoring-stacks/jmxexporter-prometheus-grafana and the cp-demo repo.

Q1) Is it actually the case that in order to expose JMX metrics from the connector all that needs to happen is for EXTRA_ARGS: -javaagent:/usr/share/jmx-exporter/jmx_prometheus_javaagent-0.16.1.jar=1234:/usr/share/jmx-exporter/kafka_connect.yml to be set and for the jmx exporter configs to be present on the machine?

Q2) As a follow-up question: why is it that only the kafka1 and kafka2 containers have the KAFKA_METRIC_REPORTERS properties set, and yet all the other services are still able to expose metrics to Prometheus?

(Windows) Metricbeat will not start due to config file permissions

When running on WSL2 on Windows, Metricbeat container will not start with error Exiting: error loading config file: config file ("metricbeat.yml") can only be writable by the owner but the permissions are "-rwxrwxrwx" (to fix the permissions use: 'chmod go-w /usr/share/metricbeat/metricbeat.yml') and end up in a restart loop. This may be related to known issues with file permissions on WSL/WSL2. This results in the Kibana dashboards showing "no data available".

Per Elastic documentation, this can easily be worked around by using the --strict.perms=false flag, for example by modifying the docker-compose-override.yml file to command: -e --strict.perms=false. This is not a recommended fix but should be sufficient for demo and learning purposes.
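For reference, the override described above could look like this in docker-compose-override.yml (a sketch based on the issue text; the service name metricbeat is an assumption):

```yaml
services:
  metricbeat:
    # Relax strict permission checks so the world-writable WSL2 config loads.
    # Not recommended outside demo/learning environments.
    command: -e --strict.perms=false
```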

Tested with 6.1.0-post, Ubuntu 18.04 on WSL2 (Windows 10). I suspect (but have not tested) that 6.2.0-post will have the same issue. Impact for non-Windows users (e.g. Mac) not tested, but I believe there will not be any impact from this change.

I recognize that we do not officially test these examples for Windows users, but this is a relatively simple workaround.

Connect - CPU total

Should this be multiplied by 100 to better correlate the metric with the actual cores available on the server?

connect-overview.json

rate(process_cpu_seconds_total{job="connect",env="$env",cluster="$cluster",instance="$instance"}[5m])*100

Consumer/Producer dashboard with 'No Data (red)' in some widgets

Hi,

I just started jmx-monitoring-stacks with cp-demo, both using 6.1.0-post, and for the producer and consumer dashboards I don't have data for some of the widgets. Is this the expected behavior?

I also tried creating a topic and starting to produce data, and nothing:

docker exec kafka1 bash -c 'KAFKA_OPTS="" kafka-topics --create --topic my-topic --partitions 3 --replication-factor 2 --bootstrap-server kafka1:12091'

docker exec kafka1 bash -c 'KAFKA_OPTS="" kafka-producer-perf-test --throughput 500 --num-records 100000000 --topic my-topic --record-size 1000 --producer-props bootstrap.servers=kafka1:12091'

Thanks!

Metrics don't appear in dashboards: $broker_id and $env are blank

Hello!
Would like to know if you have some pointers to debug this. I have cp-demo running and the Grafana and Prometheus dashboards come up. In Prometheus I can see all the metrics, and if I pull up, for example, kafka_server_kafkaserver_brokerstate, I can see env="dev", instance="kafka1", etc.
But $broker_id and $env in the Grafana Variables are blank, so none of the queries are working.

If I hack a query to remove the vars I can see the metrics plotted for kafka1 and kafka2:
rate(process_cpu_seconds_total{job="kafka",env="dev"}[3m])

Is the label_values function failing? Not sure how to debug this.
Thanks
Ken

I've seen weirdness in Grafana about getting metrics before so I did something I've tried before:

  • Went in to the Variables section, selected a variable, saved it again without making changes
  • Then, briefly, almost all of the Kafka Overview metrics appeared. It's quite a nice dashboard: very detailed.

And in the case of my own production dashboards that will usually last as long as I have the session running.
In this case it stopped working as soon as I navigated away, and I could never get it working again, despite messing with variables some more.

There's something very flaky and touchy about Grafana that I have never been able to figure out.

MONITORING_STACK/start.sh does not spawn everything that is needed

Hi,

I'm trying to setup the environment for some tests.
I've followed the readme paragraph "How to run the cp-demo", but when I submit the command:
${MONITORING_STACK}/start.sh
the script is unable to spawn the cp-demo cluster, I think.
Here are the log messages:

Using <CUSTOM_PATH>/jmx-monitoring-stacks/jmxexporter-prometheus-grafana/docker-compose.override.yaml for docker-compose override
Launch cp-demo in <CUSTOM_PATH>/cp-demo (version CONFLUENT_DOCKER_TAG=7.2.0) and monitoring stack in /mnt/c/Git/BNL/stream-monitoring/jmx-monitoring-stacks/jmxexporter-prometheus-grafana

ERROR: This script requires 'jq'. Please install 'jq' and run again.

Create user and certificates for kafkaLagExporter
<CUSTOM_PATH>/cp-demo/scripts/security/certs-create-per-user.sh: line 8: keytool: command not found
<CUSTOM_PATH>/cp-demo/scripts/security/certs-create-per-user.sh: line 19: keytool: command not found
Can't open kafkaLagExporter.csr for reading, No such file or directory
140184797439296:error:02001002:system library:fopen:No such file or directory:../crypto/bio/bss_file.c:69:fopen('kafkaLagExporter.csr','r')
140184797439296:error:2006D080:BIO routines:BIO_new_file:no such file:../crypto/bio/bss_file.c:76:
<CUSTOM_PATH>/cp-demo/scripts/security/certs-create-per-user.sh: line 52: keytool: command not found
<CUSTOM_PATH>/cp-demo/scripts/security/certs-create-per-user.sh: line 56: keytool: command not found
<CUSTOM_PATH>/cp-demo/scripts/security/certs-create-per-user.sh: line 60: keytool: command not found
<CUSTOM_PATH>/cp-demo/scripts/security/certs-create-per-user.sh: line 71: keytool: command not found
Can't open kafkaLagExporter.der for reading, No such file or directory
140005585380672:error:02001002:system library:fopen:No such file or directory:../crypto/bio/bss_file.c:69:fopen('kafkaLagExporter.der','rb')
140005585380672:error:2006D080:BIO routines:BIO_new_file:no such file:../crypto/bio/bss_file.c:76:
unable to load certificate
<CUSTOM_PATH>/cp-demo/scripts/security/certs-create-per-user.sh: line 73: keytool: command not found
Can't open kafkaLagExporter.keystore.p12 for reading, No such file or directory
140442245539136:error:02001002:system library:fopen:No such file or directory:../crypto/bio/bss_file.c:69:fopen('kafkaLagExporter.keystore.p12','rb')
140442245539136:error:2006D080:BIO routines:BIO_new_file:no such file:../crypto/bio/bss_file.c:76:
Create role binding for kafkaLagExporter
jmxexporter-prometheus-grafana/start.sh: line 38: jq: command not found
Launch <CUSTOM_PATH>/jmx-monitoring-stacks/jmxexporter-prometheus-grafana
WARNING: The REPOSITORY variable is not set. Defaulting to a blank string.
WARNING: The SSL_CIPHER_SUITES variable is not set. Defaulting to a blank string.
WARNING: The CONNECTOR_VERSION variable is not set. Defaulting to a blank string.
WARNING: The CONTROL_CENTER_KSQL_WIKIPEDIA_URL variable is not set. Defaulting to a blank string.
WARNING: The CONTROL_CENTER_KSQL_WIKIPEDIA_ADVERTISED_URL variable is not set. Defaulting to a blank string.

The script indeed proceeds and sets up the grafana-prometheus & co part.
In fact, the containers appear to be correctly set up.

But unfortunately, when I log in to Grafana, all Kafka dashboards are empty.

Could you please help? Am I doing something wrong?

OS: Windows 11 (with ubuntu 20.04 WSL)

cp-demo end-to-end dataflow dashboard

Create a dashboard to correlate metrics from producers, connectors, topics and ksql queries tied to the demo use-case, as an example of how would project teams look at their dataflow.

Requested - Dashboard for Confluent Platform Rest-Proxy

Hi , we are exposing metrics using kafka-rest.yml that you supplied using a specified port on the Kafka-Rest process , and able to view the related metrics , however - we didn't find a Dashboard json file for Grafana ,
is it something that you can deliver ?
Thanks

Clarifying metric YAML configuration file for ksql

I would like to understand the meaning of the comments on top of the metrics extraction, as in this example:

  #"kafka.consumer:type=consumer-node-metrics,client-id=*, node-id=*"
  # "kafka.consumer:type=consumer-fetch-manager-metrics,client-id=*, topic=*"
  # "kafka.producer:type=producer-node-metrics,client-id=*, node-id=*"
  # "kafka.producer:type=producer-topic-metrics,client-id=*, topic=*"
  - pattern: "kafka.(.+)<type=(.+), (.+)=(.+), (.+)=(.+)><>(.+):"
    name: kafka_$1_$2_$7
    type: GAUGE
    labels:
      client_type: $1
      $3: "$4"
      $5: "$6"

Do they stand for examples of metrics extracted by the rule?

Where I get confused is that, if that is the case, why is it that at the top of the file we have the following in the blacklist:
blacklistObjectNames:

  - "io.confluent.ksql.metrics:name=*"
  - kafka.streams:type=kafka-metrics-count
  # This will ignore the admin client metrics from KSQL server and will blacklist certain metrics
  # that do not make sense for ingestion.
  - "kafka.admin.client:*"
  - "kafka.consumer:type=*,id=*"
  - "kafka.consumer:type=*,client-id=*"
  - "kafka.consumer:type=*,client-id=*,node-id=*"
  - "kafka.producer:type=*,id=*"
  - "kafka.producer:type=*,client-id=*"
  - "kafka.producer:type=*,client-id=*,node-id=*"
  - "kafka.streams:type=stream-processor-node-metrics,thread-id=*,task-id=*,processor-node-id=*"
  - "kafka.*:type=kafka-metrics-count,*"
  - "io.confluent.ksql.metrics:type=_confluent-ksql-rest-app-command-runner,*"

It feels like a contradiction to me.

I would like to take inspiration from the file, but I find it confusing at times. It seems we are blacklisting everything about consumers and producers, yet the rule below shows how some of them might extract the very pattern that is blacklisted.

Can someone help me clear up my confusion, please?

https://github.com/confluentinc/jmx-monitoring-stacks/blob/7.1-post/shared-assets/jmx-exporter/confluent_ksql.yml

Add cluster linking jmx metrics for kafka

We have a Kafka cluster of 5 brokers and have set up Cluster Linking with the DR environment. We want to set up a Grafana dashboard for Cluster Linking so that we can pull metrics for it.

Suggest adding new metadata service metrics

An important one would be the MDS writer. Similar to a Controller, we should monitor to ensure there is a single writer:

confluent.metadata:type=KafkaAuthStore,name=active-writer-count

Keeping a running count of total role bindings to compare against the soft limit is also useful:

confluent-auth-store-metrics:name=rbac-role-bindings-count

ccloud stack start-script writes prometheus.yml to unwritable location

Problem

When I run ./start.sh from the /ccloud-prometheus-grafana stack, I run into an error:

$ ./start.sh
Generate Prometheus Configuration from Environemnet variables for  /home/justb/projects/jmx-monitoring-stacks/ccloud-prometheus-grafana
 creating prometheus configuration file /home/justb/projects/jmx-monitoring-stacks/ccloud-prometheus-grafana/assets/prometheus/prometheus-config/prometheus.yml
./start.sh: line 17: /home/justb/projects/jmx-monitoring-stacks/ccloud-prometheus-grafana/assets/prometheus/prometheus-config/prometheus.yml: No such file or directory
Launch /home/justb/projects/jmx-monitoring-stacks/ccloud-prometheus-grafana

The offending line (line 17) is

envsubst < $MONITORING_STACK/utils/prometheus-template.yml > $MONITORING_STACK/assets/prometheus/prometheus-config/prometheus.yml

My system is Ubuntu 20.04 on WSL

Cause

It appears that the redirect (>) on my system creates the (new) folder $MONITORING_STACK/assets/prometheus/prometheus-config/ using root privileges before attempting to write prometheus.yml with unelevated user privileges (which fails). This results in an empty new dir:

Result

assets/prometheus/
โ””โ”€โ”€ prometheus-config

1 directory, 0 files

with root privs

$ ls -l assets/
total 8
drwxr-xr-x 3 justb justb 4096 May 23 13:19 grafana
drwxr-xr-x 3 root  root  4096 Jul 19 21:29 prometheus

Solution suggestion

I think explicitly creating the dir $MONITORING_STACK/assets/prometheus/prometheus-config/ in the running user's context will create the dir with the correct privileges.
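The suggested fix could be sketched as follows, creating the directory with the current user's privileges before the redirect writes into it (the MONITORING_STACK value here is illustrative):

```shell
#!/bin/sh
# Create the target directory as the current user before writing into it,
# so the redirect cannot end up pointing into a root-owned folder.
MONITORING_STACK="/tmp/demo-stack"   # illustrative path
mkdir -p "$MONITORING_STACK/assets/prometheus/prometheus-config"
# The original line 17 would then succeed:
# envsubst < "$MONITORING_STACK/utils/prometheus-template.yml" \
#   > "$MONITORING_STACK/assets/prometheus/prometheus-config/prometheus.yml"
ls -ld "$MONITORING_STACK/assets/prometheus/prometheus-config" >/dev/null && echo "dir ready"
```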

Questions regarding panels in Grafana

Hey guys,
first of all - thanks for the demo, it works great!

I'm interested in implementing some of the dashboards, and the producer dashboard has this panel:

I'm only selecting 1 client id. Can you please explain why there are 30+ panels? (I would expect only 1 panel that shows the compression rate of that client.)

Same for retry rate: selecting only 1 client_id, I get tons of panels, while logically I only expect 1 panel to appear (or at least 1 panel per client if "All" is selected).

(some labels were changed to fit our solution as you might notice)

Thanks a lot for reading!

Very slow getting metrics from jmx_exporter

Using this jmx_exporter config https://github.com/confluentinc/jmx-monitoring-stacks/blob/6.1.0-post/shared-assets/jmx-exporter/kafka_broker.yml
Getting metrics from the URL takes 3 minutes!
How is it possible for anyone to use this for any cluster? In what case could it be used?
This is a very heavy config.
Maybe I have to disable topic metrics.
Maybe I have too many topics and consumers for this exporter.
kafka 2.7.2
2000 topics
40000 partitions
644 consumer groups (not all active)

Which metrics should I disable from this configuration first?

Add JMX Exporter Configs for Producers within the Confluent Server

When Control Center is enabled we leverage the ConfluentMetricsReporter within the brokers. We should capture these metrics into Prometheus.

Example Configs:

- pattern: kafka.producer<type=producer-metrics, client-id=(.+)><>(.+):\w*
  name: kafka_producer_$2
  labels:
    client-id: "$1"
- pattern: kafka.producer<type=producer-node-metrics, client-id=(.+), node-id=(.+)><>(.+):\w*
  name: kafka_producer_$3
  labels:
    client-id: "$1"
    broker: "$2"
- pattern: kafka.producer<type=producer-topic-metrics, client-id=(.+), topic=(.+)><>(.+):\w*
  name: kafka_producer_$3
  labels:
    client-id: "$1"
    topic: "$2"

Missing metric for ZooKeeper clients online

Can you please look into this? This metric does not exist:

count(zookeeper_status_quorumsize{job="zookeeper",env="$env"})

returns N/A

and does not exist in the metrics shown:

# HELP jvm_classes_loaded The number of classes that are currently loaded in the JVM
# TYPE jvm_classes_loaded gauge
jvm_classes_loaded 3475.0
# HELP jvm_classes_loaded_total The total number of classes that have been loaded since the JVM has started execution
# TYPE jvm_classes_loaded_total counter
jvm_classes_loaded_total 3475.0
# HELP jvm_classes_unloaded_total The total number of classes that have been unloaded since the JVM has started execution
# TYPE jvm_classes_unloaded_total counter
jvm_classes_unloaded_total 0.0
# HELP jmx_exporter_build_info A metric with a constant '1' value labeled with the version of the JMX exporter.
# TYPE jmx_exporter_build_info gauge
jmx_exporter_build_info{version="0.16.1",name="jmx_prometheus_javaagent",} 1.0
# HELP jvm_buffer_pool_used_bytes Used bytes of a given JVM buffer pool.
# TYPE jvm_buffer_pool_used_bytes gauge
jvm_buffer_pool_used_bytes{pool="mapped",} 0.0
jvm_buffer_pool_used_bytes{pool="direct",} 313156.0
# HELP jvm_buffer_pool_capacity_bytes Bytes capacity of a given JVM buffer pool.
# TYPE jvm_buffer_pool_capacity_bytes gauge
jvm_buffer_pool_capacity_bytes{pool="mapped",} 0.0
jvm_buffer_pool_capacity_bytes{pool="direct",} 313156.0
# HELP jvm_buffer_pool_used_buffers Used buffers of a given JVM buffer pool.
# TYPE jvm_buffer_pool_used_buffers gauge
jvm_buffer_pool_used_buffers{pool="mapped",} 0.0
jvm_buffer_pool_used_buffers{pool="direct",} 15.0
# HELP jvm_memory_objects_pending_finalization The number of objects waiting in the finalizer queue.
# TYPE jvm_memory_objects_pending_finalization gauge
jvm_memory_objects_pending_finalization 0.0
# HELP jvm_memory_bytes_used Used bytes of a given JVM memory area.
# TYPE jvm_memory_bytes_used gauge
jvm_memory_bytes_used{area="heap",} 2.17100032E8
jvm_memory_bytes_used{area="nonheap",} 4.1909328E7
# HELP jvm_memory_bytes_committed Committed (bytes) of a given JVM memory area.
# TYPE jvm_memory_bytes_committed gauge
jvm_memory_bytes_committed{area="heap",} 5.36870912E8
jvm_memory_bytes_committed{area="nonheap",} 4.4630016E7
# HELP jvm_memory_bytes_max Max (bytes) of a given JVM memory area.
# TYPE jvm_memory_bytes_max gauge
jvm_memory_bytes_max{area="heap",} 5.36870912E8
jvm_memory_bytes_max{area="nonheap",} -1.0
# HELP jvm_memory_bytes_init Initial bytes of a given JVM memory area.
# TYPE jvm_memory_bytes_init gauge
jvm_memory_bytes_init{area="heap",} 5.36870912E8
jvm_memory_bytes_init{area="nonheap",} 7667712.0
# HELP jvm_memory_pool_bytes_used Used bytes of a given JVM memory pool.
# TYPE jvm_memory_pool_bytes_used gauge
jvm_memory_pool_bytes_used{pool="CodeHeap 'non-nmethods'",} 1261440.0
jvm_memory_pool_bytes_used{pool="Metaspace",} 2.4576672E7
jvm_memory_pool_bytes_used{pool="CodeHeap 'profiled nmethods'",} 9572096.0
jvm_memory_pool_bytes_used{pool="Compressed Class Space",} 2402736.0
jvm_memory_pool_bytes_used{pool="G1 Eden Space",} 1.96083712E8
jvm_memory_pool_bytes_used{pool="G1 Old Gen",} 1.9967744E7
jvm_memory_pool_bytes_used{pool="G1 Survivor Space",} 1048576.0
jvm_memory_pool_bytes_used{pool="CodeHeap 'non-profiled nmethods'",} 4096384.0
# HELP jvm_memory_pool_bytes_committed Committed bytes of a given JVM memory pool.
# TYPE jvm_memory_pool_bytes_committed gauge
jvm_memory_pool_bytes_committed{pool="CodeHeap 'non-nmethods'",} 2555904.0
jvm_memory_pool_bytes_committed{pool="Metaspace",} 2.555904E7
jvm_memory_pool_bytes_committed{pool="CodeHeap 'profiled nmethods'",} 9633792.0
jvm_memory_pool_bytes_committed{pool="Compressed Class Space",} 2752512.0
jvm_memory_pool_bytes_committed{pool="G1 Eden Space",} 3.37641472E8
jvm_memory_pool_bytes_committed{pool="G1 Old Gen",} 1.98180864E8
jvm_memory_pool_bytes_committed{pool="G1 Survivor Space",} 1048576.0
jvm_memory_pool_bytes_committed{pool="CodeHeap 'non-profiled nmethods'",} 4128768.0
# HELP jvm_memory_pool_bytes_max Max bytes of a given JVM memory pool.
# TYPE jvm_memory_pool_bytes_max gauge
jvm_memory_pool_bytes_max{pool="CodeHeap 'non-nmethods'",} 5828608.0
jvm_memory_pool_bytes_max{pool="Metaspace",} -1.0
jvm_memory_pool_bytes_max{pool="CodeHeap 'profiled nmethods'",} 1.22912768E8
jvm_memory_pool_bytes_max{pool="Compressed Class Space",} 1.073741824E9
jvm_memory_pool_bytes_max{pool="G1 Eden Space",} -1.0
jvm_memory_pool_bytes_max{pool="G1 Old Gen",} 5.36870912E8
jvm_memory_pool_bytes_max{pool="G1 Survivor Space",} -1.0
jvm_memory_pool_bytes_max{pool="CodeHeap 'non-profiled nmethods'",} 1.22916864E8
# HELP jvm_memory_pool_bytes_init Initial bytes of a given JVM memory pool.
# TYPE jvm_memory_pool_bytes_init gauge
jvm_memory_pool_bytes_init{pool="CodeHeap 'non-nmethods'",} 2555904.0
jvm_memory_pool_bytes_init{pool="Metaspace",} 0.0
jvm_memory_pool_bytes_init{pool="CodeHeap 'profiled nmethods'",} 2555904.0
jvm_memory_pool_bytes_init{pool="Compressed Class Space",} 0.0
jvm_memory_pool_bytes_init{pool="G1 Eden Space",} 2.8311552E7
jvm_memory_pool_bytes_init{pool="G1 Old Gen",} 5.0855936E8
jvm_memory_pool_bytes_init{pool="G1 Survivor Space",} 0.0
jvm_memory_pool_bytes_init{pool="CodeHeap 'non-profiled nmethods'",} 2555904.0
# HELP jvm_memory_pool_collection_used_bytes Used bytes after last collection of a given JVM memory pool.
# TYPE jvm_memory_pool_collection_used_bytes gauge
jvm_memory_pool_collection_used_bytes{pool="G1 Eden Space",} 0.0
jvm_memory_pool_collection_used_bytes{pool="G1 Old Gen",} 0.0
jvm_memory_pool_collection_used_bytes{pool="G1 Survivor Space",} 1048576.0
# HELP jvm_memory_pool_collection_committed_bytes Committed after last collection bytes of a given JVM memory pool.
# TYPE jvm_memory_pool_collection_committed_bytes gauge
jvm_memory_pool_collection_committed_bytes{pool="G1 Eden Space",} 3.37641472E8
jvm_memory_pool_collection_committed_bytes{pool="G1 Old Gen",} 0.0
jvm_memory_pool_collection_committed_bytes{pool="G1 Survivor Space",} 1048576.0
# HELP jvm_memory_pool_collection_max_bytes Max bytes after last collection of a given JVM memory pool.
# TYPE jvm_memory_pool_collection_max_bytes gauge
jvm_memory_pool_collection_max_bytes{pool="G1 Eden Space",} -1.0
jvm_memory_pool_collection_max_bytes{pool="G1 Old Gen",} 5.36870912E8
jvm_memory_pool_collection_max_bytes{pool="G1 Survivor Space",} -1.0
# HELP jvm_memory_pool_collection_init_bytes Initial after last collection bytes of a given JVM memory pool.
# TYPE jvm_memory_pool_collection_init_bytes gauge
jvm_memory_pool_collection_init_bytes{pool="G1 Eden Space",} 2.8311552E7
jvm_memory_pool_collection_init_bytes{pool="G1 Old Gen",} 5.0855936E8
jvm_memory_pool_collection_init_bytes{pool="G1 Survivor Space",} 0.0
# HELP jvm_gc_collection_seconds Time spent in a given JVM garbage collector in seconds.
# TYPE jvm_gc_collection_seconds summary
jvm_gc_collection_seconds_count{gc="G1 Young Generation",} 50.0
jvm_gc_collection_seconds_sum{gc="G1 Young Generation",} 0.13
jvm_gc_collection_seconds_count{gc="G1 Old Generation",} 0.0
jvm_gc_collection_seconds_sum{gc="G1 Old Generation",} 0.0
# HELP jvm_memory_pool_allocated_bytes_total Total bytes allocated in a given JVM memory pool. Only updated after GC, not continuously.
# TYPE jvm_memory_pool_allocated_bytes_total counter
jvm_memory_pool_allocated_bytes_total{pool="CodeHeap 'profiled nmethods'",} 9571200.0
jvm_memory_pool_allocated_bytes_total{pool="G1 Old Gen",} 2.045016E7
jvm_memory_pool_allocated_bytes_total{pool="G1 Eden Space",} 9.910091776E9
jvm_memory_pool_allocated_bytes_total{pool="CodeHeap 'non-profiled nmethods'",} 4062464.0
jvm_memory_pool_allocated_bytes_total{pool="G1 Survivor Space",} 4194304.0
jvm_memory_pool_allocated_bytes_total{pool="Compressed Class Space",} 2402736.0
jvm_memory_pool_allocated_bytes_total{pool="Metaspace",} 2.4576304E7
jvm_memory_pool_allocated_bytes_total{pool="CodeHeap 'non-nmethods'",} 1267200.0
# HELP jvm_threads_current Current thread count of a JVM
# TYPE jvm_threads_current gauge
jvm_threads_current 35.0
# HELP jvm_threads_daemon Daemon thread count of a JVM
# TYPE jvm_threads_daemon gauge
jvm_threads_daemon 20.0
# HELP jvm_threads_peak Peak thread count of a JVM
# TYPE jvm_threads_peak gauge
jvm_threads_peak 35.0
# HELP jvm_threads_started_total Started thread count of a JVM
# TYPE jvm_threads_started_total counter
jvm_threads_started_total 36.0
# HELP jvm_threads_deadlocked Cycles of JVM-threads that are in deadlock waiting to acquire object monitors or ownable synchronizers
# TYPE jvm_threads_deadlocked gauge
jvm_threads_deadlocked 0.0
# HELP jvm_threads_deadlocked_monitor Cycles of JVM-threads that are in deadlock waiting to acquire object monitors
# TYPE jvm_threads_deadlocked_monitor gauge
jvm_threads_deadlocked_monitor 0.0
# HELP jvm_threads_state Current count of threads by state
# TYPE jvm_threads_state gauge
jvm_threads_state{state="TERMINATED",} 0.0
jvm_threads_state{state="WAITING",} 13.0
jvm_threads_state{state="NEW",} 0.0
jvm_threads_state{state="TIMED_WAITING",} 13.0
jvm_threads_state{state="BLOCKED",} 0.0
jvm_threads_state{state="RUNNABLE",} 9.0
# HELP jmx_config_reload_success_total Number of times configuration have successfully been reloaded.
# TYPE jmx_config_reload_success_total counter
jmx_config_reload_success_total 0.0
# HELP jmx_config_reload_failure_total Number of times configuration have failed to be reloaded.
# TYPE jmx_config_reload_failure_total counter
jmx_config_reload_failure_total 0.0
# HELP zookeeper_connections_sessiontimeout SessionTimeout (org.apache.ZooKeeperService<name0=StandaloneServer_port2181, name1=Connections, name2=172.20.0.9, name3=0x10000453ee7000a><>SessionTimeout)
# TYPE zookeeper_connections_sessiontimeout untyped
zookeeper_connections_sessiontimeout{client_address="172.20.0.9",connection_id="0x10000453ee7000a",server_name="StandaloneServer_port2181",} 18000.0

HELP zookeeper_commitprocmaxreadbatchsize CommitProcMaxReadBatchSize (org.apache.ZooKeeperService<name0=StandaloneServer_port2181><>CommitProcMaxReadBatchSize)

TYPE zookeeper_commitprocmaxreadbatchsize gauge

zookeeper_commitprocmaxreadbatchsize 0.0

HELP zookeeper_connectiontokenfillcount ConnectionTokenFillCount (org.apache.ZooKeeperService<name0=StandaloneServer_port2181><>ConnectionTokenFillCount)

TYPE zookeeper_connectiontokenfillcount gauge

zookeeper_connectiontokenfillcount 1.0

HELP zookeeper_connections_maxlatency MaxLatency (org.apache.ZooKeeperService<name0=StandaloneServer_port2181, name1=Connections, name2=172.20.0.9, name3=0x10000453ee7000a><>MaxLatency)

TYPE zookeeper_connections_maxlatency untyped

zookeeper_connections_maxlatency{client_address="172.20.0.9",connection_id="0x10000453ee7000a",server_name="StandaloneServer_port2181",} 148.0

HELP zookeeper_connections_packetsreceived PacketsReceived (org.apache.ZooKeeperService<name0=StandaloneServer_port2181, name1=Connections, name2=172.20.0.9, name3=0x10000453ee7000a><>PacketsReceived)

TYPE zookeeper_connections_packetsreceived untyped

zookeeper_connections_packetsreceived{client_address="172.20.0.9",connection_id="0x10000453ee7000a",server_name="StandaloneServer_port2181",} 2192.0

HELP zookeeper_authfailedcount AuthFailedCount (org.apache.ZooKeeperService<name0=StandaloneServer_port2181><>AuthFailedCount)

TYPE zookeeper_authfailedcount gauge

zookeeper_authfailedcount 0.0

HELP zookeeper_jutemaxbuffersize JuteMaxBufferSize (org.apache.ZooKeeperService<name0=StandaloneServer_port2181><>JuteMaxBufferSize)

TYPE zookeeper_jutemaxbuffersize gauge

zookeeper_jutemaxbuffersize 1048575.0

HELP zookeeper_connectiondropdecrease ConnectionDropDecrease (org.apache.ZooKeeperService<name0=StandaloneServer_port2181><>ConnectionDropDecrease)

TYPE zookeeper_connectiondropdecrease gauge

zookeeper_connectiondropdecrease 0.002

HELP zookeeper_packetssent PacketsSent (org.apache.ZooKeeperService<name0=StandaloneServer_port2181><>PacketsSent)

TYPE zookeeper_packetssent gauge

zookeeper_packetssent 17358.0

HELP zookeeper_nonmtlsremoteconncount NonMTLSRemoteConnCount (org.apache.ZooKeeperService<name0=StandaloneServer_port2181><>NonMTLSRemoteConnCount)

TYPE zookeeper_nonmtlsremoteconncount gauge

zookeeper_nonmtlsremoteconncount 0.0

HELP zookeeper_connections_minlatency MinLatency (org.apache.ZooKeeperService<name0=StandaloneServer_port2181, name1=Connections, name2=172.20.0.9, name3=0x10000453ee7000a><>MinLatency)

TYPE zookeeper_connections_minlatency untyped

zookeeper_connections_minlatency{client_address="172.20.0.9",connection_id="0x10000453ee7000a",server_name="StandaloneServer_port2181",} 0.0

HELP zookeeper_ticktime TickTime (org.apache.ZooKeeperService<name0=StandaloneServer_port2181><>TickTime)

TYPE zookeeper_ticktime gauge

zookeeper_ticktime 2000.0

HELP zookeeper_minclientresponsesize MinClientResponseSize (org.apache.ZooKeeperService<name0=StandaloneServer_port2181><>MinClientResponseSize)

TYPE zookeeper_minclientresponsesize gauge

zookeeper_minclientresponsesize 16.0

HELP zookeeper_maxrequestlatency MaxRequestLatency (org.apache.ZooKeeperService<name0=StandaloneServer_port2181><>MaxRequestLatency)

TYPE zookeeper_maxrequestlatency gauge

zookeeper_maxrequestlatency 288.0

HELP zookeeper_maxsessiontimeout MaxSessionTimeout (org.apache.ZooKeeperService<name0=StandaloneServer_port2181><>MaxSessionTimeout)

TYPE zookeeper_maxsessiontimeout gauge

zookeeper_maxsessiontimeout 40000.0

HELP zookeeper_requestthrottlestalltime RequestThrottleStallTime (org.apache.ZooKeeperService<name0=StandaloneServer_port2181><>RequestThrottleStallTime)

TYPE zookeeper_requestthrottlestalltime gauge

zookeeper_requestthrottlestalltime 100.0

HELP zookeeper_requestthrottlelimit RequestThrottleLimit (org.apache.ZooKeeperService<name0=StandaloneServer_port2181><>RequestThrottleLimit)

TYPE zookeeper_requestthrottlelimit gauge

zookeeper_requestthrottlelimit 0.0

HELP zookeeper_version Version (org.apache.ZooKeeperService<name0=StandaloneServer_port2181><>Version)

TYPE zookeeper_version untyped

zookeeper_version{version="3.6.3--6401e4ad2087061bc6b9f80dec2d69f2e3c8660a, built on 04/08/2021 16:35 GMT",server_name="StandaloneServer_port2181",} 1.0

HELP zookeeper_minsessiontimeout MinSessionTimeout (org.apache.ZooKeeperService<name0=StandaloneServer_port2181><>MinSessionTimeout)

TYPE zookeeper_minsessiontimeout gauge

zookeeper_minsessiontimeout 4000.0

HELP zookeeper_numaliveconnections NumAliveConnections (org.apache.ZooKeeperService<name0=StandaloneServer_port2181><>NumAliveConnections)

TYPE zookeeper_numaliveconnections gauge

zookeeper_numaliveconnections 1.0

HELP zookeeper_requeststalelatencycheck RequestStaleLatencyCheck (org.apache.ZooKeeperService<name0=StandaloneServer_port2181><>RequestStaleLatencyCheck)

TYPE zookeeper_requeststalelatencycheck gauge

zookeeper_requeststalelatencycheck 0.0

HELP zookeeper_starttime StartTime (org.apache.ZooKeeperService<name0=StandaloneServer_port2181><>StartTime)

TYPE zookeeper_starttime untyped

zookeeper_starttime{starttime="Tue Mar 01 20:52:48 GMT 2022",server_name="StandaloneServer_port2181",} 1.0

HELP zookeeper_connections_outstandingrequests OutstandingRequests (org.apache.ZooKeeperService<name0=StandaloneServer_port2181, name1=Connections, name2=172.20.0.9, name3=0x10000453ee7000a><>OutstandingRequests)

TYPE zookeeper_connections_outstandingrequests untyped

zookeeper_connections_outstandingrequests{client_address="172.20.0.9",connection_id="0x10000453ee7000a",server_name="StandaloneServer_port2181",} 0.0

HELP zookeeper_connections_avglatency AvgLatency (org.apache.ZooKeeperService<name0=StandaloneServer_port2181, name1=Connections, name2=172.20.0.9, name3=0x10000453ee7000a><>AvgLatency)

TYPE zookeeper_connections_avglatency untyped

zookeeper_connections_avglatency{client_address="172.20.0.9",connection_id="0x10000453ee7000a",server_name="StandaloneServer_port2181",} 0.0

HELP zookeeper_requeststaleconnectioncheck RequestStaleConnectionCheck (org.apache.ZooKeeperService<name0=StandaloneServer_port2181><>RequestStaleConnectionCheck)

TYPE zookeeper_requeststaleconnectioncheck gauge

zookeeper_requeststaleconnectioncheck 1.0

HELP zookeeper_nonmtlslocalconncount NonMTLSLocalConnCount (org.apache.ZooKeeperService<name0=StandaloneServer_port2181><>NonMTLSLocalConnCount)

TYPE zookeeper_nonmtlslocalconncount gauge

zookeeper_nonmtlslocalconncount 0.0

HELP zookeeper_inmemorydatatree_watchcount WatchCount (org.apache.ZooKeeperService<name0=StandaloneServer_port2181, name1=InMemoryDataTree><>WatchCount)

TYPE zookeeper_inmemorydatatree_watchcount gauge

zookeeper_inmemorydatatree_watchcount{server_id="1",server_name="StandaloneServer_port2181",} 16.0

HELP zookeeper_maxcnxns MaxCnxns (org.apache.ZooKeeperService<name0=StandaloneServer_port2181><>MaxCnxns)

TYPE zookeeper_maxcnxns gauge

zookeeper_maxcnxns 0.0

HELP zookeeper_connectiondropincrease ConnectionDropIncrease (org.apache.ZooKeeperService<name0=StandaloneServer_port2181><>ConnectionDropIncrease)

TYPE zookeeper_connectiondropincrease gauge

zookeeper_connectiondropincrease 0.02

HELP zookeeper_connections_lastlatency LastLatency (org.apache.ZooKeeperService<name0=StandaloneServer_port2181, name1=Connections, name2=172.20.0.9, name3=0x10000453ee7000a><>LastLatency)

TYPE zookeeper_connections_lastlatency untyped

zookeeper_connections_lastlatency{client_address="172.20.0.9",connection_id="0x10000453ee7000a",server_name="StandaloneServer_port2181",} 0.0

HELP zookeeper_clientport ClientPort (org.apache.ZooKeeperService<name0=StandaloneServer_port2181><>ClientPort)

TYPE zookeeper_clientport untyped

zookeeper_clientport{clientport="2181",server_name="StandaloneServer_port2181",} 1.0

HELP zookeeper_requestthrottledropstale RequestThrottleDropStale (org.apache.ZooKeeperService<name0=StandaloneServer_port2181><>RequestThrottleDropStale)

TYPE zookeeper_requestthrottledropstale gauge

zookeeper_requestthrottledropstale 1.0

HELP zookeeper_outstandingrequests OutstandingRequests (org.apache.ZooKeeperService<name0=StandaloneServer_port2181><>OutstandingRequests)

TYPE zookeeper_outstandingrequests gauge

zookeeper_outstandingrequests 0.0

HELP zookeeper_inmemorydatatree_nodecount NodeCount (org.apache.ZooKeeperService<name0=StandaloneServer_port2181, name1=InMemoryDataTree><>NodeCount)

TYPE zookeeper_inmemorydatatree_nodecount gauge

zookeeper_inmemorydatatree_nodecount{server_id="1",server_name="StandaloneServer_port2181",} 211.0

HELP zookeeper_lastclientresponsesize LastClientResponseSize (org.apache.ZooKeeperService<name0=StandaloneServer_port2181><>LastClientResponseSize)

TYPE zookeeper_lastclientresponsesize gauge

zookeeper_lastclientresponsesize 16.0

HELP zookeeper_connectionmaxtokens ConnectionMaxTokens (org.apache.ZooKeeperService<name0=StandaloneServer_port2181><>ConnectionMaxTokens)

TYPE zookeeper_connectionmaxtokens gauge

zookeeper_connectionmaxtokens 0.0

HELP zookeeper_fsyncthresholdexceedcount FsyncThresholdExceedCount (org.apache.ZooKeeperService<name0=StandaloneServer_port2181><>FsyncThresholdExceedCount)

TYPE zookeeper_fsyncthresholdexceedcount gauge

zookeeper_fsyncthresholdexceedcount 0.0

HELP zookeeper_connections_packetssent PacketsSent (org.apache.ZooKeeperService<name0=StandaloneServer_port2181, name1=Connections, name2=172.20.0.9, name3=0x10000453ee7000a><>PacketsSent)

TYPE zookeeper_connections_packetssent untyped

zookeeper_connections_packetssent{client_address="172.20.0.9",connection_id="0x10000453ee7000a",server_name="StandaloneServer_port2181",} 2193.0

HELP zookeeper_connectiontokenfilltime ConnectionTokenFillTime (org.apache.ZooKeeperService<name0=StandaloneServer_port2181><>ConnectionTokenFillTime)

TYPE zookeeper_connectiontokenfilltime gauge

zookeeper_connectiontokenfilltime 1.0

HELP zookeeper_txnlogelapsedsynctime TxnLogElapsedSyncTime (org.apache.ZooKeeperService<name0=StandaloneServer_port2181><>TxnLogElapsedSyncTime)

TYPE zookeeper_txnlogelapsedsynctime gauge

zookeeper_txnlogelapsedsynctime 22.0

HELP zookeeper_maxbatchsize MaxBatchSize (org.apache.ZooKeeperService<name0=StandaloneServer_port2181><>MaxBatchSize)

TYPE zookeeper_maxbatchsize gauge

zookeeper_maxbatchsize 1000.0

HELP zookeeper_commitprocmaxcommitbatchsize CommitProcMaxCommitBatchSize (org.apache.ZooKeeperService<name0=StandaloneServer_port2181><>CommitProcMaxCommitBatchSize)

TYPE zookeeper_commitprocmaxcommitbatchsize gauge

zookeeper_commitprocmaxcommitbatchsize 0.0

HELP zookeeper_flushdelay FlushDelay (org.apache.ZooKeeperService<name0=StandaloneServer_port2181><>FlushDelay)

TYPE zookeeper_flushdelay gauge

zookeeper_flushdelay 0.0

HELP zookeeper_avgrequestlatency AvgRequestLatency (org.apache.ZooKeeperService<name0=StandaloneServer_port2181><>AvgRequestLatency)

TYPE zookeeper_avgrequestlatency gauge

zookeeper_avgrequestlatency 0.4071

HELP zookeeper_datadirsize DataDirSize (org.apache.ZooKeeperService<name0=StandaloneServer_port2181><>DataDirSize)

TYPE zookeeper_datadirsize gauge

zookeeper_datadirsize 8.7241544E8

HELP zookeeper_connectionfreezetime ConnectionFreezeTime (org.apache.ZooKeeperService<name0=StandaloneServer_port2181><>ConnectionFreezeTime)

TYPE zookeeper_connectionfreezetime gauge

zookeeper_connectionfreezetime -1.0

HELP zookeeper_maxwritequeuepolltime MaxWriteQueuePollTime (org.apache.ZooKeeperService<name0=StandaloneServer_port2181><>MaxWriteQueuePollTime)

TYPE zookeeper_maxwritequeuepolltime gauge

zookeeper_maxwritequeuepolltime 0.0

HELP zookeeper_minrequestlatency MinRequestLatency (org.apache.ZooKeeperService<name0=StandaloneServer_port2181><>MinRequestLatency)

TYPE zookeeper_minrequestlatency gauge

zookeeper_minrequestlatency 0.0

HELP zookeeper_packetsreceived PacketsReceived (org.apache.ZooKeeperService<name0=StandaloneServer_port2181><>PacketsReceived)

TYPE zookeeper_packetsreceived gauge

zookeeper_packetsreceived 17341.0

HELP zookeeper_maxclientresponsesize MaxClientResponseSize (org.apache.ZooKeeperService<name0=StandaloneServer_port2181><>MaxClientResponseSize)

TYPE zookeeper_maxclientresponsesize gauge

zookeeper_maxclientresponsesize 636.0

HELP zookeeper_logdirsize LogDirSize (org.apache.ZooKeeperService<name0=StandaloneServer_port2181><>LogDirSize)

TYPE zookeeper_logdirsize gauge

zookeeper_logdirsize 429158.0

HELP zookeeper_largerequestmaxbytes LargeRequestMaxBytes (org.apache.ZooKeeperService<name0=StandaloneServer_port2181><>LargeRequestMaxBytes)

TYPE zookeeper_largerequestmaxbytes gauge

zookeeper_largerequestmaxbytes 1.048576E8

HELP zookeeper_responsecachingenabled ResponseCachingEnabled (org.apache.ZooKeeperService<name0=StandaloneServer_port2181><>ResponseCachingEnabled)

TYPE zookeeper_responsecachingenabled gauge

zookeeper_responsecachingenabled 1.0

HELP zookeeper_largerequestthreshold LargeRequestThreshold (org.apache.ZooKeeperService<name0=StandaloneServer_port2181><>LargeRequestThreshold)

TYPE zookeeper_largerequestthreshold gauge

zookeeper_largerequestthreshold -1.0

HELP zookeeper_maxclientcnxnsperhost MaxClientCnxnsPerHost (org.apache.ZooKeeperService<name0=StandaloneServer_port2181><>MaxClientCnxnsPerHost)

TYPE zookeeper_maxclientcnxnsperhost gauge

zookeeper_maxclientcnxnsperhost 60.0

HELP zookeeper_connectiondecreaseratio ConnectionDecreaseRatio (org.apache.ZooKeeperService<name0=StandaloneServer_port2181><>ConnectionDecreaseRatio)

TYPE zookeeper_connectiondecreaseratio gauge

zookeeper_connectiondecreaseratio 0.0

HELP jmx_scrape_duration_seconds Time this JMX scrape took, in seconds.

TYPE jmx_scrape_duration_seconds gauge

jmx_scrape_duration_seconds 0.003229324

HELP jmx_scrape_error Non-zero if this scrape failed.

TYPE jmx_scrape_error gauge

jmx_scrape_error 0.0

HELP jmx_scrape_cached_beans Number of beans with their matching rule cached

TYPE jmx_scrape_cached_beans gauge

jmx_scrape_cached_beans 0.0

HELP process_cpu_seconds_total Total user and system CPU time spent in seconds.

TYPE process_cpu_seconds_total counter

process_cpu_seconds_total 174.52

HELP process_start_time_seconds Start time of the process since unix epoch in seconds.

TYPE process_start_time_seconds gauge

process_start_time_seconds 1.646167967675E9

HELP process_open_fds Number of open file descriptors.

TYPE process_open_fds gauge

process_open_fds 128.0

HELP process_max_fds Maximum number of open file descriptors.

TYPE process_max_fds gauge

process_max_fds 1048576.0

HELP process_virtual_memory_bytes Virtual memory size in bytes.

TYPE process_virtual_memory_bytes gauge

process_virtual_memory_bytes 3.220774912E9

HELP process_resident_memory_bytes Resident memory size in bytes.

TYPE process_resident_memory_bytes gauge

process_resident_memory_bytes 4.79141888E8

HELP jvm_info VM version info

TYPE jvm_info gauge

jvm_info{runtime="OpenJDK Runtime Environment",vendor="Azul Systems, Inc.",version="11.0.13+8-LTS",} 1.0

HELP jmx_config_reload_failure_created Number of times configuration have failed to be reloaded.

TYPE jmx_config_reload_failure_created gauge

jmx_config_reload_failure_created 1.646167967908E9

HELP jmx_config_reload_success_created Number of times configuration have successfully been reloaded.

TYPE jmx_config_reload_success_created gauge

jmx_config_reload_success_created 1.646167967907E9

HELP jvm_memory_pool_allocated_bytes_created Total bytes allocated in a given JVM memory pool. Only updated after GC, not continuously.

TYPE jvm_memory_pool_allocated_bytes_created gauge

jvm_memory_pool_allocated_bytes_created{pool="CodeHeap 'profiled nmethods'",} 1.646167968416E9
jvm_memory_pool_allocated_bytes_created{pool="G1 Old Gen",} 1.646167968418E9
jvm_memory_pool_allocated_bytes_created{pool="G1 Eden Space",} 1.646167968418E9
jvm_memory_pool_allocated_bytes_created{pool="CodeHeap 'non-profiled nmethods'",} 1.646167968418E9
jvm_memory_pool_allocated_bytes_created{pool="G1 Survivor Space",} 1.646167968418E9
jvm_memory_pool_allocated_bytes_created{pool="Compressed Class Space",} 1.646167968418E9
jvm_memory_pool_allocated_bytes_created{pool="Metaspace",} 1.646167968418E9
jvm_memory_pool_allocated_bytes_created{pool="CodeHeap 'non-nmethods'",} 1.646167968418E9`
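The output above is in the Prometheus text exposition format: `# HELP`/`# TYPE` comment lines describe each metric family, followed by sample lines of the form `name{labels} value`. As a minimal stdlib-only sketch (not the parser Prometheus itself uses), the sample lines can be parsed like this:

```python
import re

def parse_exposition(text):
    """Parse Prometheus text exposition output into
    {metric_name: [(labels_dict, value), ...]}.
    Minimal sketch: skips HELP/TYPE metadata and ignores timestamps."""
    sample_re = re.compile(
        r'^(?P<name>[a-zA-Z_:][a-zA-Z0-9_:]*)'
        r'(?:\{(?P<labels>.*)\})?\s+(?P<value>\S+)')
    label_re = re.compile(r'([a-zA-Z_][a-zA-Z0-9_]*)="((?:\\.|[^"\\])*)"')
    samples = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith('#'):
            continue  # blank line or HELP/TYPE comment
        m = sample_re.match(line)
        if not m:
            continue
        labels = dict(label_re.findall(m.group('labels') or ''))
        samples.setdefault(m.group('name'), []).append(
            (labels, float(m.group('value'))))
    return samples

# A few lines taken from the exporter output above:
metrics = parse_exposition("""\
# HELP jvm_threads_current Current thread count of a JVM
# TYPE jvm_threads_current gauge
jvm_threads_current 35.0
zookeeper_ticktime 2000.0
zookeeper_connections_maxlatency{client_address="172.20.0.9",connection_id="0x10000453ee7000a",server_name="StandaloneServer_port2181",} 148.0
""")
print(metrics["jvm_threads_current"])  # [({}, 35.0)]
```

In practice the `prometheus_client` Python package ships a full parser for this format; the sketch above is only meant to show how the sample lines are structured.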

Graphite+Grafana Dashboard

A customer is interested in an example Graphite+Grafana implementation: they like the Prometheus+Grafana stack but use Graphite and want to replicate it there.

Missing metrics on kafka connect

I've tried to import the Kafka Connect Grafana+Prometheus dashboard and use it with the Kafka Connect instance installed by the Helm chart; however, all data is missing.

I can see, for example, that the dashboard uses kafka_connect_connect_worker_metrics_connector_total_task_count for the total task count, while the JMX exporter emits cp_kafka_connect_connect_worker_metrics_task_count. Am I missing something?
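When the exporter's metric names carry a prefix the dashboard does not expect, one workaround (an assumption, not the project's documented fix) is to rename the metrics at scrape time with Prometheus `metric_relabel_configs`. The target address and regex below are hypothetical placeholders; adjust them to the names your jmx exporter config actually emits:

```yaml
scrape_configs:
  - job_name: kafka-connect
    static_configs:
      - targets: ["connect:5556"]   # hypothetical exporter address
    metric_relabel_configs:
      # Strip the cp_ prefix so cp_kafka_connect_* matches the
      # kafka_connect_* names used in the dashboard queries.
      - source_labels: [__name__]
        regex: "cp_kafka_connect_(.*)"
        target_label: __name__
        replacement: "kafka_connect_$1"
```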
