NGINX Prometheus Exporter

NGINX Prometheus exporter makes it possible to monitor NGINX or NGINX Plus using Prometheus.

Overview

NGINX exposes a handful of metrics via the stub_status page, while NGINX Plus provides a richer set of metrics via its API and the monitoring dashboard. NGINX Prometheus Exporter fetches the metrics from a single NGINX or NGINX Plus instance, converts them into the appropriate Prometheus metric types, and exposes them via an HTTP server to be collected by Prometheus.

Getting Started

In this section, we show how to quickly run NGINX Prometheus Exporter for NGINX or NGINX Plus.

A Note about NGINX Ingress Controller

If you’d like to use the NGINX Prometheus Exporter with NGINX Ingress Controller for Kubernetes, see this doc for the installation instructions.

Prerequisites

We assume that you have already installed Prometheus and NGINX or NGINX Plus. Additionally, you need to:

  • Expose the built-in metrics in NGINX/NGINX Plus:
    • For NGINX, expose the stub_status page at /stub_status on port 8080.
    • For NGINX Plus, expose the API at /api on port 8080.
  • Configure Prometheus to scrape metrics from the server running the exporter. Note that the exporter's default scrape port is 9113 and the default metrics path is /metrics; a minimal configuration sketch is shown below.
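
For reference, here is a minimal configuration sketch. The host names, ports and file paths are illustrative assumptions, not taken from this project:

    # NGINX OSS: expose stub_status on port 8080
    server {
        listen 8080;
        location /stub_status {
            stub_status;
            allow 127.0.0.1;   # restrict access to the exporter host
            deny all;
        }
        # For NGINX Plus, expose the API instead:
        # location /api {
        #     api;
        # }
    }

    # prometheus.yml: scrape the exporter on its default port and path
    scrape_configs:
      - job_name: nginx
        static_configs:
          - targets: ['exporter-host:9113']   # hypothetical host running the exporter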

Running the Exporter in a Docker Container

To start the exporter, use the docker run command.

  • To export NGINX metrics, run:

    docker run -p 9113:9113 nginx/nginx-prometheus-exporter:1.1.0 --nginx.scrape-uri=http://<nginx>:8080/stub_status

    where <nginx> is the IP address or DNS name through which NGINX is reachable.

  • To export NGINX Plus metrics, run:

    docker run -p 9113:9113 nginx/nginx-prometheus-exporter:1.1.0 --nginx.plus --nginx.scrape-uri=http://<nginx-plus>:8080/api

    where <nginx-plus> is the IP address or DNS name through which NGINX Plus is reachable.
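
If NGINX itself runs in a Docker container, the exporter cannot reach it via localhost; both containers must share a network. A minimal sketch, assuming a user-defined network named monitoring and an NGINX container named nginx that serves /stub_status on port 8080 (all names, including the my-nginx image, are illustrative):

    docker network create monitoring
    docker run -d --name nginx --network monitoring my-nginx
    docker run -d --network monitoring -p 9113:9113 nginx/nginx-prometheus-exporter:1.1.0 \
        --nginx.scrape-uri=http://nginx:8080/stub_status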

Running the Exporter Binary

  • To export NGINX metrics, run:

    nginx-prometheus-exporter --nginx.scrape-uri=http://<nginx>:8080/stub_status

    where <nginx> is the IP address or DNS name through which NGINX is reachable.

  • To export NGINX Plus metrics:

    nginx-prometheus-exporter --nginx.plus --nginx.scrape-uri=http://<nginx-plus>:8080/api

    where <nginx-plus> is the IP address or DNS name through which NGINX Plus is reachable.

  • To scrape NGINX metrics via a unix domain socket, run:

    nginx-prometheus-exporter --nginx.scrape-uri=unix:<nginx>:/stub_status

    where <nginx> is the path to the unix domain socket through which the NGINX stub_status page is available; a matching configuration sketch follows.
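
    A sketch of the matching NGINX configuration, assuming the socket path /var/run/nginx-status.sock (illustrative):

        server {
            listen unix:/var/run/nginx-status.sock;
            location /stub_status {
                stub_status;
            }
        }

        nginx-prometheus-exporter --nginx.scrape-uri=unix:/var/run/nginx-status.sock:/stub_status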

Note: the nginx-prometheus-exporter is not a daemon. To run the exporter as a system service (daemon), you can follow the example in examples/systemd; a minimal unit file sketch is also shown below. Alternatively, you can run the exporter in a Docker container.
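
For reference, a minimal systemd unit sketch (the binary path and scrape URI are assumptions; see examples/systemd in the repository for the maintained version):

    # /etc/systemd/system/nginx-prometheus-exporter.service
    [Unit]
    Description=NGINX Prometheus Exporter
    After=network.target

    [Service]
    ExecStart=/usr/local/bin/nginx-prometheus-exporter --nginx.scrape-uri=http://127.0.0.1:8080/stub_status
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target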

Usage

Command-line Arguments

usage: nginx-prometheus-exporter [<flags>]


Flags:
  -h, --[no-]help                Show context-sensitive help (also try --help-long and --help-man).
      --[no-]web.systemd-socket  Use systemd socket activation listeners instead
                                 of port listeners (Linux only). ($SYSTEMD_SOCKET)
      --web.listen-address=:9113 ...
                                 Addresses on which to expose metrics and web interface. Repeatable for multiple addresses. ($LISTEN_ADDRESS)
      --web.config.file=""       Path to configuration file that can enable TLS or authentication. See: https://github.com/prometheus/exporter-toolkit/blob/master/docs/web-configuration.md ($CONFIG_FILE)
      --web.telemetry-path="/metrics"
                                 Path under which to expose metrics. ($TELEMETRY_PATH)
      --[no-]nginx.plus          Start the exporter for NGINX Plus. By default, the exporter is started for NGINX. ($NGINX_PLUS)
      --nginx.scrape-uri=http://127.0.0.1:8080/stub_status ...
                                 A URI or unix domain socket path for scraping NGINX or NGINX Plus metrics. For NGINX, the stub_status page must be available through the URI. For NGINX Plus -- the API. Repeatable for multiple URIs. ($SCRAPE_URI)
      --[no-]nginx.ssl-verify    Perform SSL certificate verification. ($SSL_VERIFY)
      --nginx.ssl-ca-cert=""     Path to the PEM encoded CA certificate file used to validate the servers SSL certificate. ($SSL_CA_CERT)
      --nginx.ssl-client-cert=""
                                 Path to the PEM encoded client certificate file to use when connecting to the server. ($SSL_CLIENT_CERT)
      --nginx.ssl-client-key=""  Path to the PEM encoded client certificate key file to use when connecting to the server. ($SSL_CLIENT_KEY)
      --nginx.timeout=5s         A timeout for scraping metrics from NGINX or NGINX Plus. ($TIMEOUT)
      --prometheus.const-label=PROMETHEUS.CONST-LABEL ...
                                 Label that will be used in every metric. Format is label=value. It can be repeated multiple times. ($CONST_LABELS)
      --log.level=info           Only log messages with the given severity or above. One of: [debug, info, warn, error]
      --log.format=logfmt        Output format of log messages. One of: [logfmt, json]
      --[no-]version             Show application version.
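
For example, the --web.config.file flag takes an exporter-toolkit web configuration file. A minimal TLS sketch (the certificate paths are illustrative; see the exporter-toolkit documentation linked above for the full schema):

    # web-config.yml
    tls_server_config:
      cert_file: /etc/exporter/server.crt
      key_file: /etc/exporter/server.key

    nginx-prometheus-exporter --web.config.file=web-config.yml \
        --nginx.scrape-uri=http://127.0.0.1:8080/stub_status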

Exported Metrics

Common metrics

Name Type Description Labels
nginx_exporter_build_info Gauge Shows the exporter build information. branch, goarch, goos, goversion, revision, tags and version
promhttp_metric_handler_requests_total Counter Total number of scrapes by HTTP status code. code (the HTTP status code)
promhttp_metric_handler_requests_in_flight Gauge Current number of scrapes being served. []
go_* Multiple Go runtime metrics. []

Metrics for NGINX OSS

Name Type Description Labels
nginx_up Gauge Shows the status of the last metric scrape: 1 for a successful scrape and 0 for a failed one []
Name Type Description Labels
nginx_connections_accepted Counter Accepted client connections. []
nginx_connections_active Gauge Active client connections. []
nginx_connections_handled Counter Handled client connections. []
nginx_connections_reading Gauge Connections where NGINX is reading the request header. []
nginx_connections_waiting Gauge Idle client connections. []
nginx_connections_writing Gauge Connections where NGINX is writing the response back to the client. []
nginx_http_requests_total Counter Total http requests. []

Metrics for NGINX Plus

Name Type Description Labels
nginxplus_up Gauge Shows the status of the last metric scrape: 1 for a successful scrape and 0 for a failed one []
Name Type Description Labels
nginxplus_connections_accepted Counter Accepted client connections []
nginxplus_connections_active Gauge Active client connections []
nginxplus_connections_dropped Counter Dropped client connections []
nginxplus_connections_idle Gauge Idle client connections []
Name Type Description Labels
nginxplus_http_requests_total Counter Total http requests []
nginxplus_http_requests_current Gauge Current http requests []
Name Type Description Labels
nginxplus_ssl_handshakes Counter Successful SSL handshakes []
nginxplus_ssl_handshakes_failed Counter Failed SSL handshakes []
nginxplus_ssl_session_reuses Counter Session reuses during SSL handshake []
Name Type Description Labels
nginxplus_server_zone_processing Gauge Client requests that are currently being processed server_zone
nginxplus_server_zone_requests Counter Total client requests server_zone
nginxplus_server_zone_responses Counter Total responses sent to clients code (the response status code. The values are: 1xx, 2xx, 3xx, 4xx and 5xx), server_zone
nginxplus_server_zone_responses_codes Counter Total responses sent to clients by code code (the response status code. The possible values are here), server_zone
nginxplus_server_zone_discarded Counter Requests completed without sending a response server_zone
nginxplus_server_zone_received Counter Bytes received from clients server_zone
nginxplus_server_zone_sent Counter Bytes sent to clients server_zone
nginxplus_server_ssl_handshakes Counter Successful SSL handshakes server_zone
nginxplus_server_ssl_handshakes_failed Counter Failed SSL handshakes server_zone
nginxplus_server_ssl_session_reuses Counter Session reuses during SSL handshake server_zone
Name Type Description Labels
nginxplus_stream_server_zone_processing Gauge Client connections that are currently being processed server_zone
nginxplus_stream_server_zone_connections Counter Total connections server_zone
nginxplus_stream_server_zone_sessions Counter Total sessions completed code (the response status code. The values are: 2xx, 4xx, and 5xx), server_zone
nginxplus_stream_server_zone_discarded Counter Connections completed without creating a session server_zone
nginxplus_stream_server_zone_received Counter Bytes received from clients server_zone
nginxplus_stream_server_zone_sent Counter Bytes sent to clients server_zone
nginxplus_stream_server_ssl_handshakes Counter Successful SSL handshakes server_zone
nginxplus_stream_server_ssl_handshakes_failed Counter Failed SSL handshakes server_zone
nginxplus_stream_server_ssl_session_reuses Counter Session reuses during SSL handshake server_zone

Note: for the state metric, the string values are converted to float64 using the following rule: "up" -> 1.0, "draining" -> 2.0, "down" -> 3.0, "unavail" -> 4.0, "checking" -> 5.0, "unhealthy" -> 6.0.

Name Type Description Labels
nginxplus_upstream_server_state Gauge Current state server, upstream
nginxplus_upstream_server_active Gauge Active connections server, upstream
nginxplus_upstream_server_limit Gauge Limit for connections which corresponds to the max_conns parameter of the upstream server. Zero value means there is no limit server, upstream
nginxplus_upstream_server_requests Counter Total client requests server, upstream
nginxplus_upstream_server_responses Counter Total responses sent to clients code (the response status code. The values are: 1xx, 2xx, 3xx, 4xx and 5xx), server, upstream
nginxplus_upstream_server_responses_codes Counter Total responses sent to clients by code code (the response status code. The possible values are here), server, upstream
nginxplus_upstream_server_sent Counter Bytes sent to this server server, upstream
nginxplus_upstream_server_received Counter Bytes received from this server server, upstream
nginxplus_upstream_server_fails Counter Number of unsuccessful attempts to communicate with the server server, upstream
nginxplus_upstream_server_unavail Counter How many times the server became unavailable for client requests (state 'unavail') due to the number of unsuccessful attempts reaching the max_fails threshold server, upstream
nginxplus_upstream_server_header_time Gauge Average time to get the response header from the server server, upstream
nginxplus_upstream_server_response_time Gauge Average time to get the full response from the server server, upstream
nginxplus_upstream_server_health_checks_checks Counter Total health check requests server, upstream
nginxplus_upstream_server_health_checks_fails Counter Failed health checks server, upstream
nginxplus_upstream_server_health_checks_unhealthy Counter How many times the server became unhealthy (state 'unhealthy') server, upstream
nginxplus_upstream_server_ssl_handshakes Counter Successful SSL handshakes server, upstream
nginxplus_upstream_server_ssl_handshakes_failed Counter Failed SSL handshakes server, upstream
nginxplus_upstream_server_ssl_session_reuses Counter Session reuses during SSL handshake server, upstream
nginxplus_upstream_keepalive Gauge Idle keepalive connections upstream
nginxplus_upstream_zombies Gauge Servers removed from the group but still processing active client requests upstream
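
Because the state is encoded as a number, a PromQL query such as the following sketch counts the servers currently in the "up" state per upstream:

    count by (upstream) (nginxplus_upstream_server_state == 1)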

Note: for the state metric, the string values are converted to float64 using the following rule: "up" -> 1.0, "down" -> 3.0, "unavail" -> 4.0, "checking" -> 5.0, "unhealthy" -> 6.0.

Name Type Description Labels
nginxplus_stream_upstream_server_state Gauge Current state server, upstream
nginxplus_stream_upstream_server_active Gauge Active connections server, upstream
nginxplus_stream_upstream_server_limit Gauge Limit for connections which corresponds to the max_conns parameter of the upstream server. Zero value means there is no limit server, upstream
nginxplus_stream_upstream_server_connections Counter Total number of client connections forwarded to this server server, upstream
nginxplus_stream_upstream_server_connect_time Gauge Average time to connect to the upstream server server, upstream
nginxplus_stream_upstream_server_first_byte_time Gauge Average time to receive the first byte of data server, upstream
nginxplus_stream_upstream_server_response_time Gauge Average time to receive the last byte of data server, upstream
nginxplus_stream_upstream_server_sent Counter Bytes sent to this server server, upstream
nginxplus_stream_upstream_server_received Counter Bytes received from this server server, upstream
nginxplus_stream_upstream_server_fails Counter Number of unsuccessful attempts to communicate with the server server, upstream
nginxplus_stream_upstream_server_unavail Counter How many times the server became unavailable for client connections (state 'unavail') due to the number of unsuccessful attempts reaching the max_fails threshold server, upstream
nginxplus_stream_upstream_server_health_checks_checks Counter Total health check requests server, upstream
nginxplus_stream_upstream_server_health_checks_fails Counter Failed health checks server, upstream
nginxplus_stream_upstream_server_health_checks_unhealthy Counter How many times the server became unhealthy (state 'unhealthy') server, upstream
nginxplus_stream_upstream_server_ssl_handshakes Counter Successful SSL handshakes server, upstream
nginxplus_stream_upstream_server_ssl_handshakes_failed Counter Failed SSL handshakes server, upstream
nginxplus_stream_upstream_server_ssl_session_reuses Counter Session reuses during SSL handshake server, upstream
nginxplus_stream_upstream_zombies Gauge Servers removed from the group but still processing active client connections upstream
Name Type Description Labels
nginxplus_stream_zone_sync_zone_records_pending Gauge The number of records that need to be sent to the cluster zone
nginxplus_stream_zone_sync_zone_records_total Gauge The total number of records stored in the shared memory zone zone
nginxplus_stream_zone_sync_zone_bytes_in Counter Bytes received by this node []
nginxplus_stream_zone_sync_zone_bytes_out Counter Bytes sent by this node []
nginxplus_stream_zone_sync_zone_msgs_in Counter Total messages received by this node []
nginxplus_stream_zone_sync_zone_msgs_out Counter Total messages sent by this node []
nginxplus_stream_zone_sync_zone_nodes_online Gauge Number of peers this node is connected to []
Name Type Description Labels
nginxplus_location_zone_requests Counter Total client requests location_zone
nginxplus_location_zone_responses Counter Total responses sent to clients code (the response status code. The values are: 1xx, 2xx, 3xx, 4xx and 5xx), location_zone
nginxplus_location_zone_responses_codes Counter Total responses sent to clients by code code (the response status code. The possible values are here), location_zone
nginxplus_location_zone_discarded Counter Requests completed without sending a response location_zone
nginxplus_location_zone_received Counter Bytes received from clients location_zone
nginxplus_location_zone_sent Counter Bytes sent to clients location_zone
Name Type Description Labels
nginxplus_resolver_name Counter Total requests to resolve names to addresses resolver
nginxplus_resolver_srv Counter Total requests to resolve SRV records resolver
nginxplus_resolver_addr Counter Total requests to resolve addresses to names resolver
nginxplus_resolver_noerror Counter Total number of successful responses resolver
nginxplus_resolver_formerr Counter Total number of FORMERR responses resolver
nginxplus_resolver_servfail Counter Total number of SERVFAIL responses resolver
nginxplus_resolver_nxdomain Counter Total number of NXDOMAIN responses resolver
nginxplus_resolver_notimp Counter Total number of NOTIMP responses resolver
nginxplus_resolver_refused Counter Total number of REFUSED responses resolver
nginxplus_resolver_timedout Counter Total number of timed out requests resolver
nginxplus_resolver_unknown Counter Total requests completed with an unknown error resolver
Name Type Description Labels
nginxplus_limit_request_passed Counter Total number of requests that were neither limited nor accounted as limited zone
nginxplus_limit_request_rejected Counter Total number of requests that were rejected zone
nginxplus_limit_request_delayed Counter Total number of requests that were delayed zone
nginxplus_limit_request_rejected_dry_run Counter Total number of requests accounted as rejected in the dry run mode zone
nginxplus_limit_request_delayed_dry_run Counter Total number of requests accounted as delayed in the dry run mode zone
Name Type Description Labels
nginxplus_limit_connection_passed Counter Total number of connections that were neither limited nor accounted as limited zone
nginxplus_limit_connection_rejected Counter Total number of connections that were rejected zone
nginxplus_limit_connection_rejected_dry_run Counter Total number of connections accounted as rejected in the dry run mode zone
Name Type Description Labels
nginxplus_stream_limit_connection_passed Counter Total number of connections that were neither limited nor accounted as limited zone
nginxplus_stream_limit_connection_rejected Counter Total number of connections that were rejected zone
nginxplus_stream_limit_connection_rejected_dry_run Counter Total number of connections accounted as rejected in the dry run mode zone
Name Type Description Labels
nginxplus_cache_size Gauge Total size of the cache cache
nginxplus_cache_max_size Gauge Maximum size of the cache cache
nginxplus_cache_cold Gauge Is the cache considered cold cache
nginxplus_cache_hit_responses Counter Total number of cache hits cache
nginxplus_cache_hit_bytes Counter Total number of bytes returned from cache hits cache
nginxplus_cache_stale_responses Counter Total number of stale cache hits cache
nginxplus_cache_stale_bytes Counter Total number of bytes returned from stale cache hits cache
nginxplus_cache_updating_responses Counter Total number of cache hits while cache is updating cache
nginxplus_cache_updating_bytes Counter Total number of bytes returned from cache while cache is updating cache
nginxplus_cache_revalidated_responses Counter Total number of cache revalidations cache
nginxplus_cache_revalidated_bytes Counter Total number of bytes returned from cache revalidations cache
nginxplus_cache_miss_responses Counter Total number of cache misses cache
nginxplus_cache_miss_bytes Counter Total number of bytes returned from cache misses cache
nginxplus_cache_expired_responses Counter Total number of cache hits with expired TTL cache
nginxplus_cache_expired_bytes Counter Total number of bytes returned from cache hits with expired TTL cache
nginxplus_cache_expired_responses_written Counter Total number of cache hits with expired TTL written to cache cache
nginxplus_cache_expired_bytes_written Counter Total number of bytes written to cache from cache hits with expired TTL cache
nginxplus_cache_bypass_responses Counter Total number of cache bypasses cache
nginxplus_cache_bypass_bytes Counter Total number of bytes returned from cache bypasses cache
nginxplus_cache_bypass_responses_written Counter Total number of cache bypasses written to cache cache
nginxplus_cache_bypass_bytes_written Counter Total number of bytes written to cache from cache bypasses cache
Name Type Description Labels
nginxplus_worker_connection_accepted Counter The total number of accepted client connections id, pid
nginxplus_worker_connection_dropped Counter The total number of dropped client connections id, pid
nginxplus_worker_connection_active Gauge The current number of active client connections id, pid
nginxplus_worker_connection_idle Gauge The current number of idle client connections id, pid
nginxplus_worker_http_requests_total Counter The total number of client requests received id, pid
nginxplus_worker_http_requests_current Gauge The current number of client requests that are currently being processed id, pid

Connect to the /metrics page of the running exporter to see the complete list of metrics along with their descriptions. Note: to see server zone-related metrics you must configure status zones, and to see upstream-related metrics you must configure upstreams with a shared memory zone. An example alerting rule built on the nginx_up metric is sketched below.
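
As an example, a sketch of a Prometheus alerting rule built on nginx_up (use nginxplus_up for NGINX Plus; the rule name and duration are illustrative):

    groups:
      - name: nginx
        rules:
          - alert: NginxDown
            expr: nginx_up == 0
            for: 1m
            labels:
              severity: critical
            annotations:
              summary: "The NGINX stub_status scrape is failing"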

Troubleshooting

The exporter logs errors to standard output. When using Docker, if the exporter doesn't work as expected, check its logs with the docker logs command, as shown below.
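
For example (the container name is an assumption), check the container logs and, if needed, rerun the exporter with a more verbose log level:

    docker logs <exporter-container>
    nginx-prometheus-exporter --log.level=debug --nginx.scrape-uri=http://<nginx>:8080/stub_status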

Releases

Docker images

We publish the Docker image on DockerHub, GitHub Container Registry, Amazon ECR Public Gallery and Quay.io.

As an alternative, you can choose the edge version built from the latest commit from the main branch. The edge version is useful for experimenting with new features that are not yet published in a stable release.

Binaries

We publish the binaries for multiple Operating Systems and architectures on the GitHub releases page.

Homebrew

You can add the NGINX homebrew tap with

brew tap nginxinc/tap

and then install the formula with

brew install nginx-prometheus-exporter

Snap

You can install the NGINX Prometheus Exporter from the Snap Store.

snap install nginx-prometheus-exporter

Building the Exporter

You can build the exporter using the provided Makefile. Before building the exporter, make sure the following software is installed on your machine:

  • make
  • git
  • Docker for building the container image
  • Go for building the binary

Building the Docker Image

To build the Docker image with the exporter, run:

make container

Note: go is not required, as the exporter binary is built in a Docker container. See the Dockerfile.

Building the Binary

To build the binary, run:

make

Note: the binary is built for the OS/arch of your machine. To build binaries for other platforms, see the Makefile or the cross-compilation sketch below.

The binary is built with the name nginx-prometheus-exporter.
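
If you need a binary for another platform and prefer not to adapt the Makefile, a plain go build from the repository root also works as a sketch. Note that this bypasses the Makefile's version injection, so the build info metric may report empty values:

    GOOS=linux GOARCH=arm64 go build -o nginx-prometheus-exporter .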

Grafana Dashboard

The official Grafana dashboard is provided with the exporter for NGINX. Check the Grafana Dashboard documentation for more information.

SBOM (Software Bill of Materials)

We generate SBOMs for the binaries and the Docker image.

Binaries

The SBOMs for the binaries are available on the releases page. They are generated using syft and are published in SPDX format; a downloaded SBOM can be analyzed locally, as sketched below.
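
For example, a downloaded SPDX SBOM file can be analyzed locally with grype (the file name below is illustrative):

    grype sbom:./nginx-prometheus-exporter.spdx.json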

Docker Image

The SBOM for the Docker image is available in the DockerHub, GitHub Container registry, Amazon ECR Public Gallery and Quay.io repositories. The SBOMs are generated using syft and stored as an attestation in the image manifest.

For example, to retrieve the SBOM for linux/amd64 from Docker Hub and analyze it using grype, you can run the following command:

docker buildx imagetools inspect nginx/nginx-prometheus-exporter:edge --format '{{ json (index .SBOM "linux/amd64").SPDX }}' | grype

Provenance

We generate provenance for the Docker image; it is available in the DockerHub, GitHub Container Registry, Amazon ECR Public Gallery and Quay.io repositories, stored as an attestation in the image manifest.

For example, to retrieve the provenance for linux/amd64 from Docker Hub, you can run the following command:

docker buildx imagetools inspect nginx/nginx-prometheus-exporter:edge --format '{{ json (index .Provenance "linux/amd64").SLSA }}'

Contacts

We’d like to hear your feedback! If you have any suggestions or experience issues with the NGINX Prometheus Exporter, please create an issue or send a pull request on GitHub. You can contact us directly via [email protected] or on the NGINX Community Slack in the #nginx-prometheus-exporter channel.

Contributing

If you'd like to contribute to the project, please read our Contributing guide.

Support

Commercial support is available for NGINX Plus customers when the NGINX Prometheus Exporter is used with NGINX Ingress Controller.

License

Apache License, Version 2.0.

nginx-prometheus-exporter's People

Contributors

aerialls, ampant, asherf, ciarams87, dean-coakley, dependabot[bot], eric-hc, fluepke, greut, haywoodsh, inosato, isserrano, jjngx, jongwooo, kkirsche, lorcanmcveigh, lucacome, marcosdotps, martialonline, mobeigi, nabokihms, oseoin, pdabelf5, perflyst, pleshakov, pre-commit-ci[bot], rwenz3l, sheharyaar, step-security-bot, tasooneasia

nginx-prometheus-exporter's Issues

Support multiple architectures (ie arm)

Is your feature request related to a problem? Please describe.

The current automated releases are only for x86 architecture.

Describe the solution you'd like
Updating the current CI to build for multiple architectures (ie docker buildx)

outdated alpine base image

Describe the bug
Most recent image on DockerHub (0.5.0, ef5810d4ce30) is based on outdated alpine image with security vulnerabilities.

To reproduce

$ docker pull nginx/nginx-prometheus-exporter:0.5.0
$ docker run -it --entrypoint=/bin/sh nginx/nginx-prometheus-exporter:0.5.0
$ cat /etc/alpine-release 
3.9.4
$ apk upgrade -s
(1/4) Upgrading musl (1.1.20-r4 -> 1.1.20-r5)
(2/4) Upgrading libcrypto1.1 (1.1.1b-r1 -> 1.1.1d-r2)
(3/4) Upgrading libssl1.1 (1.1.1b-r1 -> 1.1.1d-r2)
(4/4) Upgrading musl-utils (1.1.20-r4 -> 1.1.20-r5)
OK: 6 MiB in 14 packages

Base image is from June 2019...

Expected behavior
Image should be based on a recent base image that does not contain any security vulnerabilities.

Your environment
Docker image 0.5.0 - ef5810d4ce30

Additional context
Perhaps one can clarify, if this is intended to be a production grade image or just for development purposes only.

CONST_LABELS and -prometheus.const-labels are not working

Describe the bug
When I set CONST_LABELS, it does not have any effect. When I try to set -prometheus.const-labels, it produces an error.

To reproduce
Steps to reproduce the behavior:

  1. docker run -e CONST_LABELS=blabla=blabla -it nginx/nginx-prometheus-exporter:0.5.0 -nginx.scrape-uri http://hostname/basic_status
  2. Go to container_ip:9113/metrics
  3. See metrics without label(s) passed before

OR

  1. docker run -it nginx/nginx-prometheus-exporter:0.5.0 -nginx.scrape-uri http://hostname/basic_status -prometheus.const-labels blabla=blabla
  2. See error: flag provided but not defined: -prometheus.const-labels

Expected behavior
Metrics are shown with labels.

Your environment

  • Version of the Prometheus exporter: gitCommit="c3dd65c",version="0.5.0"
  • Version of Docker/Kubernetes: Docker version 18.09.6, build 481bc77
  • Using NGINX

A few development ideas

Hello.

Just took a quick look on it and deployed over my hosts. There's a few things that might add to the value:

  1. It would be nice to have _build_info. Currently I can't get exporter info on the board to plan tiered upgrades.

  2. It would be nice to be able to scrape multiple hosts. With HTTPS/auth, NGINX is already an external service that can be used to transfer data, which would also make it possible to transport data securely over the network. For example, if I have a few hosts in different DCs, I currently need to proxy the exporter via NGINX; if the exporter supported scraping more than one host, plus authorization and HTTPS, it would be possible to expose /stub_status on NGINX externally in a restricted location. This would make provisioning easier.

Nginx /stub_status instead of /api

Prerequisites
We assume that you have already installed Prometheus and NGINX or NGINX Plus. Additionally, you need to:

Expose the built-in metrics in NGINX/NGINX Plus:
For NGINX, expose the stub_status page at /stub_status on port 8080.

Running the Exporter in a Container
To start the exporter we use the docker run command.

To export NGINX metrics, run:

$ docker run -p 9113:9113 nginx/nginx-prometheus-exporter:0.1.0 -nginx.scrape-uri http://:8080/api

Shouldn't it be $ docker run -p 9113:9113 nginx/nginx-prometheus-exporter:0.1.0 -nginx.scrape-uri http://:8080/stub_status ?

how to attach it to a bridge network so that Prometheus container has access to it

I have a nginx container running on localhost:70/nginx_status
I have a prometheus container running on localhost:9090

when i run
docker run -it -p 9113:9113 --network bridge nginx/nginx-prometheus-exporter:0.1.0 -nginx.scrape-uri http://localhost:70/nginx_status

error
Could not create Nginx Client: Failed to create NginxClient: failed to get http://localhost:70/nginx_status: Get http://localhost:70/nginx_status: dial tcp 127.0.0.1:70: connect: connection refused

docker ps -a shows that nginx-prometheus(stoic_goldberg) exited

how do i get this container to be on the same default network(bridge)
the connection failed error is because it does not have access to http://localhost:70/nginx_status because it is not on the same "bridge" network

Parameter to avoid service stop in case the scrape target is not reachable

Hi,

I would like to use your nginx-prometheus-exporter as a sidecar container in kubernetes to collect metrics from a nginx container running in the same Pod.
Unfortunately the nginx container needs ~1-2 seconds to start. So the scrape target for the nginx-prometheus-exporter is not reachable when it starts. Therefore the exporter stops and the container has to be restarted from kubernetes. After the container restart, the exporter is working fine because the nginx process was able to start in the meantime and the scrape target is reachable then.

So I would like to ask if it would be possible to add a parameter to the exporter to avoid the default behavior of stopping. A simple sleep and retry after a specific time (maybe 5s) would be very useful.

The related error log output:

root@k8s [dc]:~# kubectl logs $pod -c prometheus-exporter  -p

2019/02/20 11:42:51 Starting NGINX Prometheus Exporter Version=0.3.0 GitCommit=6570275
2019/02/20 11:42:51 Could not create Nginx Client: Failed to create NginxClient: failed to get http://127.0.0.1:8080/stub_status: Get http://127.0.0.1:8080/stub_status: dial tcp 127.0.0.1:8080: connect: connection refused

Thank you.

Error while making the nginx-prometheus-exporter build file

I am trying to install nginx-prometheus-exporter on my system. While running the "make" command in my $GOPATH/src/github.com/nginxinc/nginx-prometheus-exporter directory, I get an error stating:

flag provided but not defined: -mod

What could be the possible cause of that? Thank you.

When I start the exporter, I get this

nginx-prometheus-exporter -nginx.scrape-uri http://127.0.0.1:6666/stub_status

2020/09/07 02:13:14 Starting NGINX Prometheus Exporter Version=0.8.0 GitCommit=de15093
2020/09/07 02:13:14 Could not create Nginx Client: failed to parse response body "Active connections: 1 \nserver accepts handled requests request_time\n 55 55 55 175255\nReading: 0 Writing: 1 Waiting: 0 \n": invalid input for connections and requests " 55 55 55 175255"

NGINX Plus upstream state mapping doesn't conform with Prometheus conventions

When using the nginxplus_upstream_server_state metric it is frustrating that the different states are integers. This makes it hard to do a count(nginxplus_upstream_server_state) group by (upstream, state) query (e.g. for multiple load balancers) in Prometheus.

Describe the solution you'd like
It is far more common in prometheus, to have different timeseries for different statuses as long as they have an upper bound. This is also the way upstream error codes are currently handled.

Additional context
upstreamServerStates mapping:

var upstreamServerStates = map[string]float64{

How status codes are handled:

ch <- prometheus.MustNewConstMetric(c.serverZoneMetrics["responses_1xx"],

Is there a reason such a mapping to integers was chosen?

Http codes

How can I get http errors? 2xx 4xx 5xx

Thank you

Non-root container

Is your feature request related to a problem? Please describe.
Using nginx-prometheus-exporter in security-enhanced (like active PodSecurityPolicy) Kubernetes (or OpenShift) cluster requires non-root containers. It is common to use scratch image to reduce attack surface and get a smaller final image.

Describe the solution you'd like

  • Non-root container - USER in Dockerfile
  • Use scratch image

Describe alternatives you've considered
A clear and concise description of any alternative solutions or features you've considered.

Additional context
nginx-prometheus-exporter is written in Go, thus alpine:3.11 is not needed to run it.

Metrics was collected before with the same name and label values

Describe the bug
When running the docker container locally pointing it against our nginx lb api endpoint exposed on port 8888 I can see that errors have occurred when gathering the metrics:

An error has occurred during metrics gathering:
490 error(s) occurred:
* collected metric nginxplus_upstream_server_state label:<name:"server" value:"127.0.0.1:80" > label:<name:"upstream" value:"gru_location_red" > gauge:<value:3 >  was collected before with the same name and label values
* collected metric nginxplus_upstream_server_active label:<name:"server" value:"127.0.0.1:80" > label:<name:"upstream" value:"gru_location_red" > gauge:<value:0 >  was collected before with the same name and label values
* collected metric nginxplus_upstream_server_requests label:<name:"server" value:"127.0.0.1:80" > label:<name:"upstream" value:"gru_location_red" > counter:<value:0 >  was collected before with the same name and label values
* collected metric nginxplus_upstream_server_responses label:<name:"code" value:"1xx" > label:<name:"server" value:"127.0.0.1:80" > label:<name:"upstream" value:"gru_location_red" > counter:<value:0 >  was collected before with the same name and label values
* collected metric nginxplus_upstream_server_responses label:<name:"code" value:"2xx" > label:<name:"server" value:"127.0.0.1:80" > label:<name:"upstream" value:"gru_location_red" > counter:<value:0 >  was collected before with the same name and label values
* collected metric nginxplus_upstream_server_responses label:<name:"code" value:"3xx" > label:<name:"server" value:"127.0.0.1:80" > label:<name:"upstream" value:"gru_location_red" > counter:<value:0 >  was collected before with the same name and label values
* collected metric nginxplus_upstream_server_responses label:<name:"code" value:"4xx" > label:<name:"server" value:"127.0.0.1:80" > label:<name:"upstream" value:"gru_location_red" > counter:<value:0 >  was collected before with the same name and label values
* collected metric nginxplus_upstream_server_responses label:<name:"code" value:"5xx" > label:<name:"server" value:"127.0.0.1:80" > label:<name:"upstream" value:"gru_location_red" > counter:<value:0 >  was collected before with the same name and label values
...
...
...
And the list goes on and on.

Our Prometheus can't gather the metrics because of these errors.

To reproduce
Steps to reproduce the behavior:

  1. Deploy using docker run -p 9113:9113 nginx/nginx-prometheus-exporter:0.1.0 -nginx.plus -nginx.scrape-uri http://**lb_address**:8888/api
  2. View http://localhost:9113/metrics
  3. Observe the errors

Expected behavior
Metrics should be gathered successfully, allowing us and prometheus to pick these up.
If it's something we're doing wrong from the lb side it should be mentioned.

Your environment

  • Version of the Prometheus exporter - 0.1.0
  • Version of Docker/Kubernetes - Docker version 18.06.1-ce, build e68fc7a
  • [if applicable] Kubernetes platform (e.g. Mini-kube or GCP)
  • Using NGINX or NGINX Plus - NGINX Plus

Additional context
Add any other context about the problem here. Any log files you want to share.

default port

Can you change the configuration instead of port 8080 that listens on port 8888 ???

Support prometheus labels for metrics

Is your feature request related to a problem? Please describe.
I want to monitor multiple nginx instances across a fleet of services with the same prometheus instance. This doesn't work well since each instance of this exporter uses the same metric names and will clobber each other. I'd like to add support for labels in the metric so we can annotate the name of the service to support grouping by those.
I'm a novice prometheus user so let me know if this is already somehow possible.

Describe the solution you'd like
Allow the exported metrics to have labels. Eg: nginx_connections_handled{name="app1"} 2918

Describe alternatives you've considered
I cannot think of another way. Is it possible to change the name or prefix the exported metrics? That would work as well but I think labels are probably nicer.

Additional context
N/A

missing zone_sync states

Hi,
in the current version of the exporter, the result of "curl -s 127.0.0.1/api/3/stream/zone_sync" is missing. This is a cluster configuration part in NGINX Plus which is activated for sticky learn to sync all cookies in an active/active cluster. Could you please add these values?

Example result:

{ "status": { "nodes_online": 1, "msgs_in": 316063, "msgs_out": 312921, "bytes_in": 619285360, "bytes_out": 616452834 }, "zones": { "example_zone_one": { "records_total": 153746, "records_pending": 11 }, "example_zone_two": { "records_total": 10940, "records_pending": 18 } } }

Empty labels with version in nginxexporter_build_info in 0.6.0

Describe the bug
Metric with name nginxexporter_build_info is empty while at 0.4.2 it contains correct info.

To reproduce
Steps to reproduce the behavior:

  1. Deploy using 0.6.0 version
  2. Check metrics
  3. Example output:
# HELP nginxexporter_build_info Exporter build information
# TYPE nginxexporter_build_info gauge
nginxexporter_build_info{gitCommit="",version=""} 1

Expected behavior
Version and commit should appear

Your environment

  • Version of the Prometheus exporter - 0.6.0
  • Version of Docker/Kubernetes: 1.15.11-gke1
  • Kubernetes platform: GCP
  • NGINX 1.16.1-1

Release a version

Hi, any chance of creating a release with the recent changes?
I would love to be able to use a newer version in my environment.
@Rulox

Support Probing Multiple Nginx Servers

Would it be possible to mimic how the blackbox exporter behaves and allow probing multiple nginx servers? I'd like to avoid running one nginx-prometheus-exporter per nginx server when we have hundreds of nginx servers. The official blackbox exporter allows you to make an API call to it with the target in the URL. For example:

http://nginx-prometheus-exporter-container:9115/probe?target=nginx-server01.domain.local
http://nginx-prometheus-exporter-container:9115/probe?target=nginx-server02.domain.local

Thanks for the consideration!

New behavior of NGINX Plus r18 API module breaks prometheus

Since NGINX Plus R18 the API module provides the list of fields according to the configuration instead of using the static list of fields.

In previous versions, the api returned the static list of fields: "nginx","processes","connections","slabs","http","stream","ssl"

Since r18, this list can be different for diffrent configurations. For example, if there are no configured stream{}, then the list of fields will be:
"nginx","processes","connections","slabs","http""ssl"

To reproduce
Steps to reproduce the behavior:

  1. Upgrade NGINX Plus up to the latest r18 release
  2. Comment out the whole stream {} configuration
  3. See error

Expected behavior
The Prometheus exporter should request /api/ first, to get the latest available API version. After that, it should request the API to get the list of configured fields.

scrape metrics from multiple containers in docker swarm

I have a little swarm cluster with 6 nodes and I deploy 2 replicas of nginx on it:

nginx:
  image: registry:5000/nginx:2
  ports:
    - "80:80"
    - "443:443"
  deploy:
    replicas: 2

Is it possible to scrape metrics from every replica via nginx-exporter?

last reload

Hi,

could you please add a value with the timestamp of the last NGINX reload? Like:

curl -s '127.0.0.1/api/4/nginx' | jq
{
"version": "1.15.10",
"build": "nginx-plus-r18",
"address": "127.0.0.1",
"generation": 7,
"load_timestamp": "2019-05-30T11:15:03.206Z",
"timestamp": "2019-05-30T11:44:16.317Z",
"pid": 60024,
"ppid": 31797
}

I could also use the rest of the values, like version, build, etc.

Thanks a lot

Feature Request Upstream Count Up / Down

We are already using your exporter but unfortunately there is one thing we are missing to migrate from our Zabbix-based NGINX Plus monitoring to Grafana/Prometheus. We need the number of upstream servers that have been discovered, and how many of them are up or down.

As my knowledge of the Go language is limited, I cannot contribute it myself.

Regards,

Mike

work with tengine

Hi, I tried to use nginx-prometheus-exporter with Tengine (a fork of NGINX) and ran into a problem.
After starting the exporter I see logs like:

2020/08/17 13:53:44 Starting NGINX Prometheus Exporter Version= GitCommit=
2020/08/17 13:53:44 Could not create Nginx Client: failed to parse response body "Active connections: 4 \nserver accepts handled requests request_time\n 9055 9055 65927 271898450\nReading: 0 Writing: 2 Waiting: 2 \n": invalid input for connections and requests " 9055 9055 65927 271898450"

Is it possible to make it work with this web server, or are only official versions of NGINX supported?

Metric for cache

Can you add metrics for the NGINX cache? I see that these metrics already exist via the "/api/4/http/caches" NGINX URL.

ConstLabels not included in upMetric

Describe the bug
Not sure this is a bug or intentionally omitted, but const labels are not attached to the upMetric while it is present on all other metrics.
For the use case of my team we want to show, in Grafana, the number of pods that are up and running and also the version of our build being served by the Nginx in those pods. The number of pods we get by just summing the upMetric but we don't get the version because that is not included in the upMetric

To reproduce
Steps to reproduce the behavior:

  1. Run the nginx-prometheus-exporter docker container with CONST_LABELS env. variable set to version=1.2.1
  2. Visit the /metrics url to get a list of all the metrics
  3. Compare one of the stub_status metrics with the upMetric and verify that version=1.2.1 is returned by the stub_status metrics but not the upMetric

Expected behavior
I'd like the upMetric to include const labels just like the stub_status metrics.

Your environment

  • Version of the Prometheus exporter: 0.6.0 release
  • Version of Kubernetes: v1.11.0+d4cacc0
  • [if applicable] Kubernetes platform: OpenShift v3.11.135
  • Using NGINX


exporter exits when prometheus process is reloaded

Hello.

I was playing with different exporters and noticed that this exporter quits when I run pkill -1 prometheus to reload the configuration. It's not crashing; I see no core dumps or messages about that, it just exits when the connection from Prometheus is closed.

Feature request: HTTP Response latency buckets

Is your feature request related to a problem? Please describe.
Many teams want to define a latency SLO for their HTTP services. An average request latency is not the best choice, because it is not possible to know if there was a certain number of requests that took too long.

Describe the solution you'd like
Nginx exporter should expose a response latency bucket metric, like described in the prometheus documentation: https://prometheus.io/docs/practices/histograms/#quantiles

Describe alternatives you've considered
There is no alternative, except for using a different exporter or calculating the response time buckets in some other way. The average is not enough for calculation of percentiles.

Additional context
https://landing.google.com/sre/sre-book/chapters/service-level-objectives/

Release schedule

Do you know when the next release will be? I'm particularly interested in a Docker image containing #103 being released

Thanks!

Filter access.log by host name

I have one access.log for all virtual hosts on the machine. With a log format like this:

    log_format main '$remote_addr - $remote_user [$time_local] '
            '"$host" "$request" $status $body_bytes_sent '
            '$request_time $upstream_response_time '
            '"$http_referer" "$http_user_agent"';

Is there any way I can filter metrics just for one host?
Or can I group metrics by host name?

Thanks!

Problem with flag -nginx.ssl-ca-cert

Hi,

I'm trying to run nginx_exporter with a few flags and I keep getting this message:

flag provided but not defined: -nginx.ssl-ca-cert

here is how I run it
/usr/local/bin/nginx_exporter -nginx.scrape-uri "https://127.0.0.1:443/status" -nginx.ssl-ca-cert "/etc/pki/tls/cert.pem"
I tried without quotations and get the same results.

I'm using nginx_exporter versions 0.6.0

Any ideas ?

Error getting stats: request canceled (Client.Timeout exceeded while awaiting headers)

Exporter was throwing some errors and metrics are missing at the same time.
It happened randomly and recovered later. Here are the logs:

-- Logs begin at Tue 2020-12-01 04:59:14 UTC. --
Dec 02 00:21:59 instance-name nginx-prometheus-exporter[3550]: 2020/12/02 00:21:59 Error getting stats: failed to get http://127.0.0.1:8080/stub_status: Get "http://127.0.0.1:8080/stub_status": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Dec 02 00:22:04 instance-name nginx-prometheus-exporter[3550]: 2020/12/02 00:22:04 Error getting stats: failed to get http://127.0.0.1:8080/stub_status: Get "http://127.0.0.1:8080/stub_status": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Dec 02 00:22:09 instance-name nginx-prometheus-exporter[3550]: 2020/12/02 00:22:09 Error getting stats: failed to get http://127.0.0.1:8080/stub_status: Get "http://127.0.0.1:8080/stub_status": context deadline exceeded
Dec 02 00:22:14 instance-name nginx-prometheus-exporter[3550]: 2020/12/02 00:22:14 Error getting stats: failed to get http://127.0.0.1:8080/stub_status: Get "http://127.0.0.1:8080/stub_status": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Dec 02 00:22:19 instance-name nginx-prometheus-exporter[3550]: 2020/12/02 00:22:19 Error getting stats: failed to get http://127.0.0.1:8080/stub_status: Get "http://127.0.0.1:8080/stub_status": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Dec 02 00:22:24 instance-name nginx-prometheus-exporter[3550]: 2020/12/02 00:22:24 Error getting stats: failed to get http://127.0.0.1:8080/stub_status: Get "http://127.0.0.1:8080/stub_status": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Dec 02 00:22:59 instance-name nginx-prometheus-exporter[3550]: 2020/12/02 00:22:59 Error getting stats: failed to get http://127.0.0.1:8080/stub_status: Get "http://127.0.0.1:8080/stub_status": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Dec 02 00:23:19 instance-name nginx-prometheus-exporter[3550]: 2020/12/02 00:23:19 Error getting stats: failed to get http://127.0.0.1:8080/stub_status: Get "http://127.0.0.1:8080/stub_status": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Dec 02 00:23:29 instance-name nginx-prometheus-exporter[3550]: 2020/12/02 00:23:29 Error getting stats: failed to get http://127.0.0.1:8080/stub_status: Get "http://127.0.0.1:8080/stub_status": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Dec 02 00:27:19 instance-name nginx-prometheus-exporter[3550]: 2020/12/02 00:27:19 Error getting stats: failed to get http://127.0.0.1:8080/stub_status: Get "http://127.0.0.1:8080/stub_status": context deadline exceeded (Client.Timeout exceeded while awaiting headers)

Environment

  • Version of the Prometheus exporter
 Starting NGINX Prometheus Exporter Version=0.8.0 GitCommit=de15093
  • Version of VM
Distributor ID: Ubuntu
Description:    Ubuntu 16.04.3 LTS
Release:        16.04
Codename:       xenial
  • Using NGINX or NGINX Plus
nginx version: nginx/1.10.3 (Ubuntu)

There are no NGINX logs at the same time either:

root@instance-name:/home/thamaraiselvam# journalctl -fu nginx
-- Logs begin at Tue 2020-12-01 04:59:14 UTC. --
Dec 01 05:19:21 instance-name systemd[1]: Stopping A high performance web server and a reverse proxy server...
Dec 01 05:19:21 instance-name systemd[1]: Stopped A high performance web server and a reverse proxy server.
Dec 01 05:19:21 instance-name systemd[1]: Starting A high performance web server and a reverse proxy server...
Dec 01 05:19:21 instance-name systemd[1]: nginx.service: Failed to read PID from file /run/nginx.pid: Invalid argument
Dec 01 05:19:21 instance-name systemd[1]: Started A high performance web server and a reverse proxy server.
Dec 01 05:32:08 instance-name systemd[1]: Stopping A high performance web server and a reverse proxy server...
Dec 01 05:32:08 instance-name systemd[1]: Stopped A high performance web server and a reverse proxy server.
Dec 01 05:32:08 instance-name systemd[1]: Starting A high performance web server and a reverse proxy server...
Dec 01 05:32:09 instance-name systemd[1]: nginx.service: Failed to read PID from file /run/nginx.pid: Invalid argument
Dec 01 05:32:09 instance-name systemd[1]: Started A high performance web server and a reverse proxy server.

Export nginx[plus]_up metric

Is your feature request related to a problem? Please describe.
I wouldn't call it a problem, but there is no _up metric exported.

Describe the solution you'd like
I would like to have _up metric exported so it's easier to write an alert that NGINX is down.

Describe alternatives you've considered
None yet.

Additional context
Maybe there is another way to write the above alert with the data that's already exported but I am not aware of it yet.

Thank you.

Error when defining scrape uri

The README states that http://127.0.0.1:8080/stub_status is the default scrape URI. Running the exporter without changing it seems to confirm this:

2018/08/11 00:12:14 Could not create Nginx Client: Failed to create NginxClient: failed to get http://127.0.0.1:8080/stub_status: Get http://127.0.0.1:8080/stub_status: dial tcp 127.0.0.1:8080: connect: connection refused

However, if I try to define it as that via -nginx.scrape-uri="http://127.0.0.1:8080/stub_status", when I run it I get:

2018/08/11 00:11:20 Could not create Nginx Client: Failed to create NginxClient: failed to get "http://127.0.0.1:8080/stub_status": parse "http://127.0.0.1:8080/stub_status": first path segment in URL cannot contain colon

Strongly hope to support obtaining indicators through k8s Service name

The output is as follows

2020/06/12 13:27:32 Starting NGINX Prometheus Exporter Version=0.7.0 GitCommit=a2910f1
2020/06/12 13:27:32 Could not create Nginx Client: failed to get http://myyyy:80/nginx_status: Get http://myyyy:80/nginx_status: dial tcp: lookup myyyy on 172.21.0.10:53: no such host

"myyyy" is a Service in my Kubernetes cluster, but it doesn't work.

No -nginx.ssl-verify argument

Hi.

Describe the bug
There is no argument nginx.ssl-verify

To reproduce
Steps to reproduce the behavior:

  1. docker run -p 9113:9113 nginx/nginx-prometheus-exporter:0.1.0 -nginx.scrape-uri https://site.com/basic_status
    2018/10/05 16:30:56 Starting NGINX Prometheus Exporter Version=0.1.0 GitCommit=8d90a86
    2018/10/05 16:30:56 Could not create Nginx Client: Failed to create NginxClient: failed to get https://site.com/basic_status: Get https://site.com/basic_status: x509: failed to load system roots and no roots provided
  2. docker run -p 9113:9113 nginx/nginx-prometheus-exporter:0.1.0 -nginx.scrape-uri https://site.com/basic_status -nginx.ssl-verify false
    flag provided but not defined: -nginx.ssl-verify
    Usage of /usr/bin/exporter:
    -nginx.plus
    Start the exporter for NGINX Plus. By default, the exporter is started for NGINX.
    -nginx.scrape-uri string
    A URI for scraping NGINX or NGINX Plus metrics.
    For NGINX, the stub_status page must be available through the URI. For NGINX Plus -- the API. (default "http://127.0.0.1:8080/stub_status")
    -web.listen-address string
    An address to listen on for web interface and telemetry. (default ":9113")
    -web.telemetry-path string
    A path under which to expose metrics. (default "/metrics")

How to use this exporter for Prometheus Metrics ?

I can see that NGINX exposes two types of monitoring logs:

Only stub_status is being considered here. I can set up two services to get those into one Prometheus server, but I still have the following doubts:

  • Why are Prometheus metrics not considered?
  • Why is NGINX throwing two different sets of metrics?
  • How can they be combined?
  • Are there any future plans to include Prometheus metrics in this too?

Random data displayed in Grafana dashboard after import

After importing the dashboard, all panels are filled with "Test data: random walk" and the query is "default", not Prometheus as selected during import. Some more details can be found here.

Steps to reproduce the behavior:

  1. Import dashboard with one of the latest version of Grafana (I'm using 6.4.3)
  2. See all panels are filled with random data.

Expected behavior
Panels displaying actual data, datasource is correctly set.

Environment

  • NGINX Prometheus Exporter Version=0.4.2 GitCommit=f017367
  • Grafana 6.4.3
  • NGINX

Additional context
I suppose problem can be fixed by re-exporting dashboard with Grafana 6.4+. Or by manually adding "datasource": "${DS_PROMETHEUS}", into each panel in JSON data, this is how I fixed it for myself.

scrape from multiple servers/domains?

how to scrape from multiple domains/servers?

I tried args like this:

- "--nginx.scrape-uri=https://domain1.kz/basic_status, https://domain2.kz/basic_status"

but Prometheus is showing only one.

docker compose doesn't run the container when adding the scrape-uri param

Describe the bug
I'm running the NGINX exporter using docker compose.
I have this compose file:

version: '2'
services:
  prometheus:
    image: nginx/nginx-prometheus-exporter:0.2.0
    user: root
    volumes:
      - /monitoring:/monitoring
    command:
      - '-nginx.scrape-uri http://x.x.x.x:80/nginx_status'
    ports:
      - "9113:9113"
    restart: unless-stopped
When running the command docker-compose up I get this error:

flag provided but not defined: -nginx.scrape-uri http://x.x.x.x:80/nginx_status

Error getting stats: failed to get stats

Describe the bug
Nginx exporter fails to export metrics

The exporter was failing to export metrics on 0.3.0 with a "path not found" error, so I upgraded to 0.4.2. Some of the hosts are now able to export metrics, but one host still has the same error.

Expected behavior
Export metrics without failure.

Your environment
Centos 3.10.0-957.1.3.el7.x86_64

nginx version: nginx/1.15.10 (nginx-plus-r18-p1)
built by gcc 4.8.5 20150623 (Red Hat 4.8.5-36) (GCC)
built with OpenSSL 1.0.2k-fips 26 Jan 2017

nginx exporter version 0.4.2

Additional context

# Systemd command
nginx-prometheus-exporter -web.listen-address :9113 -nginx.plus -nginx.scrape-uri http://127.0.0.1:8888/api -nginx.ssl-verify false
# Error log
nginx-prometheus-exporter[13565]: 2019/09/26 21:17:14 Error getting stats: failed to get stats: failed to get stream server zones: expected 200 response, got 404. path=; method=; error.status=404; error.text=path not found; error.code=PathNotFound; request_id=10e67745e5d4466f570cb834aa9c17e0; href=https://nginx.org/en/docs/http/ngx_http_api_module.html

Is it good to monitor the number of dropped NGINX connections?

Describe the solution you'd like
It is worthwhile to know the number/graph of dropped connections for NGINX, e.g. nginx_connections_dropped.

Describe alternatives you've considered
This can be retrieved from (accepted-handled) with PromQL in Prometheus/Grafana.

how to initialize via docker using localhost

Hello,

Could you help me with the problem in using docker for EC2 (AWS) instances?
Since I have instances configured with auto-scaling, it would be much easier for me if the initialization string accepted something like localhost and/or 127.0.0.1.
But when I try to boot using them, the following failures occur:

docker run -p 9113:9113 nginx/nginx-prometheus-exporter:0.2.0
2018/12/05 20:24:51 Starting NGINX Prometheus Exporter Version=0.2.0 GitCommit=ad0a472
2018/12/05 20:24:51 Could not create Nginx Client: Failed to create NginxClient: failed to get http://127.0.0.1:8080/stub_status: Get http://127.0.0.1:8080/stub_status: dial tcp 127.0.0.1:8080: connect: connection refused

OR

docker run -it -p 9113:9113 nginx/nginx-prometheus-exporter:0.2.0 -nginx.scrape-uri http://127.0.0.1:8080/stub_status
2018/12/05 20:32:15 Starting NGINX Prometheus Exporter Version=0.2.0 GitCommit=ad0a472
2018/12/05 20:32:15 Could not create Nginx Client: Failed to create NginxClient: failed to get http://127.0.0.1:8080/stub_status: Get http://127.0.0.1:8080/stub_status: dial tcp 127.0.0.1:8080: connect: connection refused

It only works if I put the external IP of the server, but how would that be automated?

Thank you in advance.
