
mongodb-query-exporter's Introduction

Prometheus MongoDB query exporter


MongoDB aggregation query exporter for Prometheus.

Features

  • Support for gauge metrics
  • Pull and Push (Push is only supported for MongoDB >= 3.6)
  • Supports multiple MongoDB servers
  • Metric caching support

Note that this is not designed to be a replacement for the MongoDB exporter, which instruments MongoDB internals. This application exports custom MongoDB metrics in the Prometheus format based on the aggregation queries you define.

Installation

Get the Prometheus MongoDB aggregation query exporter either as a binary from the latest release or packaged as a Docker image.

Helm Chart

For Kubernetes users there is an official Helm chart for the MongoDB query exporter. Please read the installation instructions here.

Docker

You can run the exporter using Docker (this starts it with the example config provided in the example folder):

docker run -e MDBEXPORTER_CONFIG=/config/configv3.yaml -v $(pwd)/example:/config ghcr.io/raffis/mongodb-query-exporter:latest

Usage

$ mongodb-query-exporter

Use the -help flag to get help information.

If you use MongoDB authorization, best practice is to create a dedicated read-only user with access to the required databases/collections:

  1. Create a user with 'read' on your database, like the following (replace username/password/db!):
db.getSiblingDB("admin").createUser({
    user: "mongodb_query_exporter",
    pwd: "secret",
    roles: [
        { role: "read", db: "mydb" }
    ]
})
  2. Set the environment variable MDBEXPORTER_MONGODB_URI before starting the exporter:
export MDBEXPORTER_MONGODB_URI=mongodb://mongodb_query_exporter:secret@localhost:27017

Note: env variable references like ${MY_ENV} within the URI are substituted, so you may also pass credentials from other env variables. See the example below.

If you use x.509 certificates to authenticate clients, pass the username and authMechanism via connection options in the MongoDB URI, e.g.:

mongodb://CN=myName,OU=myOrgUnit,O=myOrg,L=myLocality,ST=myState,C=myCountry@localhost:27017/?authMechanism=MONGODB-X509

Credentials from env variables

You can pass in credentials from env variables.

Given the following URI the exporter will look for the ENV variables called MY_USERNAME and MY_PASSWORD and automatically use them at the referenced position within the URI.

export MY_USERNAME=mongodb_query_exporter
export MY_PASSWORD=secret
export MDBEXPORTER_MONGODB_URI=mongodb://${MY_USERNAME}:${MY_PASSWORD}@localhost:27017

Access metrics

The metrics are by default exposed at /metrics.

curl localhost:9412/metrics

Exporter configuration

The exporter looks for its configuration in ~/.mongodb_query_exporter/config.yaml and /etc/mongodb_query_exporter/config.yaml, or at the path set in the env variable MDBEXPORTER_CONFIG.

You may also use env variables to configure the exporter:

| Env variable | Description | Default |
|--------------|-------------|---------|
| MDBEXPORTER_CONFIG | Custom path for the configuration | ~/.mongodb_query_exporter/config.yaml or /etc/mongodb_query_exporter/config.yaml |
| MDBEXPORTER_MONGODB_URI | The MongoDB connection URI | mongodb://localhost:27017 |
| MDBEXPORTER_MONGODB_QUERY_TIMEOUT | Timeout (in seconds) until a MongoDB operation is aborted | 10 |
| MDBEXPORTER_LOG_LEVEL | Log level | warning |
| MDBEXPORTER_LOG_ENCODING | Log format | json |
| MDBEXPORTER_BIND | Bind address for the HTTP server | :9412 |
| MDBEXPORTER_METRICSPATH | Metrics endpoint | /metrics |

Note: if you have multiple MongoDB servers you can inject an env variable for each server instead of using MDBEXPORTER_MONGODB_URI:

  • MDBEXPORTER_SERVER_0_MONGODB_URI=mongodb://srv1:27017
  • MDBEXPORTER_SERVER_1_MONGODB_URI=mongodb://srv2:27017
  • ...

Configure metrics

Since the v1.0.0 release you should use config version v3.0 to benefit from the latest features. See the configuration version matrix below.

Example:

version: 3.0
bind: 0.0.0.0:9412
log:
  encoding: json
  level: info
  development: false
  disableCaller: false
global:
  queryTimeout: 3s
  maxConnection: 3
  defaultCache: 0
servers:
- name: main
  uri: mongodb://localhost:27017
aggregations:
- database: mydb
  collection: objects
  servers: [main] # Can be empty; if empty, the metric is exported for every server defined
  metrics:
  - name: myapp_example_simplevalue_total
    type: gauge # Can be empty; the default is gauge
    help: 'Simple gauge metric'
    value: total
    overrideEmpty: true # if an empty result set is returned..
    emptyValue: 0       # create a metric with value 0
    labels: []
    constLabels:
      region: eu-central-1
  cache: 0
  mode: pull
  pipeline: |
    [
      {"$count":"total"}
    ]
- database: mydb
  collection: queue
  metrics:
  - name: myapp_example_processes_total
    type: gauge
    help: 'The total number of processes in a job queue'
    value: total
    labels: [type,status]
    constLabels: {}
  mode: pull
  pipeline: |
    [
      {"$group": {
        "_id":{"status":"$status","name":"$class"},
        "total":{"$sum":1}
      }},
      {"$project":{
        "_id":0,
        "type":"$_id.name",
        "total":"$total",
        "status": {
          "$switch": {
              "branches": [
                 { "case": { "$eq": ["$_id.status", 0] }, "then": "waiting" },
                 { "case": { "$eq": ["$_id.status", 1] }, "then": "postponed" },
                 { "case": { "$eq": ["$_id.status", 2] }, "then": "processing" },
                 { "case": { "$eq": ["$_id.status", 3] }, "then": "done" },
                 { "case": { "$eq": ["$_id.status", 4] }, "then": "failed" },
                 { "case": { "$eq": ["$_id.status", 5] }, "then": "canceled" },
                 { "case": { "$eq": ["$_id.status", 6] }, "then": "timeout" }
              ],
              "default": "unknown"
          }}
      }}
    ]
- database: mydb
  collection: events
  metrics:
  - name: myapp_events_total
    type: gauge
    help: 'The total number of events (created 1h ago or newer)'
    value: count
    labels: [type]
    constLabels: {}
  mode: pull
  # Note $$NOW is only supported in MongoDB >= 4.2
  pipeline: |
    [
      { "$sort": { "created": -1 }},
      {"$limit": 100000},
      {"$match":{
        "$expr": {
          "$gte": [
            "$created",
            {
              "$subtract": ["$$NOW", 3600000]
            }
          ]
        }
      }},
      {"$group": {
        "_id":{"type":"$type"},
        "count":{"$sum":1}
      }},
      {"$project":{
        "_id":0,
        "type":"$_id.type",
        "count":"$count"
      }}
    ]

See more examples in the /example folder.

Info metrics

By defining no value field and setting overrideEmpty to true, a metric can still be exported with labels from the aggregation pipeline while its value is set to the static value taken from emptyValue. This is useful for exporting info metrics which can later be used in join queries.

servers:
- name: main
  uri: mongodb://localhost:27017
aggregations:
- database: mydb
  collection: objects
  metrics:
  - name: myapp_info
    help: 'Info metric'
    overrideEmpty: true
    emptyValue: 1
    labels:
    - mylabel1
    - mylabel2
    constLabels:
      region: eu-central-1
  cache: 0
  mode: pull
  pipeline: `...`
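As a hypothetical illustration of such a join (the metric name myapp_objects_total is an assumption for a companion series, not output the exporter is known to produce), an info metric can attach its labels to another series in PromQL, here sketched as a Prometheus recording rule:

```yaml
# Prometheus recording rule sketch; myapp_objects_total is a hypothetical
# companion metric that shares the mylabel1 label with myapp_info.
# Since myapp_info always has the value 1, multiplying preserves the
# left-hand value while group_left copies mylabel2 onto the result.
groups:
- name: example
  rules:
  - record: myapp:objects:enriched
    expr: myapp_objects_total * on(mylabel1) group_left(mylabel2) myapp_info
```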

Supported config versions

| Config version | Supported since |
|----------------|-----------------|
| v3.0           | v1.0.0          |
| v2.0           | v1.0.0-beta5    |
| v1.0           | v1.0.0-beta1    |

Cache & Push

Prometheus is designed to scrape metrics. During each scrape the mongodb-query-exporter evaluates all configured metrics. If you have expensive queries, there is an option to cache the aggregation result by setting a cache TTL. However, it is usually more effective to avoid caching and design efficient aggregation pipelines; in some cases a different scrape interval might also be a solution. For individual aggregations and/or MongoDB servers older than 3.6, caching might still be a good option though.

A better approach is using push instead of a static cache, see below.

Example:

aggregations:
- metrics:
  - name: myapp_example_simplevalue_total
    help: 'Simple gauge metric which is cached for 5min'
    value: total
  servers: [main]
  mode: pull
  cache: 5m
  database: mydb
  collection: objects
  pipeline: |
    [
      {"$count":"total"}
    ]

To reduce load on the MongoDB server (and also scrape time) there is a push mode. With push, the metric is cached at scrape time (if no cache TTL is set), and the cache is invalidated automatically whenever anything changes in the configured MongoDB collection. This means the aggregation is only executed if there have been changes between scrape intervals.

Note: This requires at least MongoDB 3.6.

Example:

aggregations:
- metrics:
  - name: myapp_example_simplevalue_total
    help: 'Simple gauge metric'
    value: total
  servers: [main]
  # With mode push the pipeline is only executed if a change occurred on the collection called objects
  mode: push
  database: mydb
  collection: objects
  pipeline: |
    [
      {"$count":"total"}
    ]

Debug

The mongodb-query-exporter also publishes a counter metric called mongodb_query_exporter_query_total which counts query results for each configured aggregation. You may also increase the log level to get more insight.

Used by

  • The balloon Helm chart uses the mongodb-query-exporter to expose general stats from MongoDB, like the total number of nodes or of files stored internally or externally. See the config-map here.

Please submit a PR if your project should be listed here!

mongodb-query-exporter's People

Contributors

dependabot[bot], fredmaggiowski, gmaiztegi, guillaumelecerf, horgix, raffis, renovate[bot], step-security-bot, xhensiladoda


mongodb-query-exporter's Issues

Can't take ObjectId("xyz") in the pipeline

Looks like it can't take ObjectId("xyz") in the pipeline. Could we support that?
pipeline: |
  [
    {"$match": {"user_id": ObjectId("xyz")}},
    {"$group": {"_id": null, "count": {"$sum": 1}}},
    {"$project": {"_id": 0, "count": 1}}
  ]

Will return error:
failed to handle metric failed to decode json aggregation pipeline: invalid JSON input.
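One possible workaround, an assumption rather than a confirmed feature of the exporter's parser: since the error shows the pipeline must be valid JSON, the MongoDB Extended JSON form of an ObjectId ({"$oid": "..."}) may be accepted if the exporter decodes the pipeline with the Go driver's extended JSON support:

```yaml
# Sketch using canonical Extended JSON for ObjectId; the hex id below is
# a placeholder, and whether the exporter accepts "$oid" is an assumption.
pipeline: |
  [
    {"$match": {"user_id": {"$oid": "64a2f0c3e4b0a1b2c3d4e5f6"}}},
    {"$group": {"_id": null, "count": {"$sum": 1}}},
    {"$project": {"_id": 0, "count": 1}}
  ]
```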

Additional labels passed through Helm values are wrongly indented after the first one

Describe the bug

In the Helm chart, when the labels value is defined and it includes multiple additional labels, the labels after the first additional one are not indented and therefore result in invalid manifests.

To Reproduce

Steps to reproduce the behavior:

  • Take the Chart with the default values
  • Change the labels (default {}) value to:
labels:
  label1: "value1"
  label2: "value2"
  • Render the resulting Chart with helm template

This results in the following output:

---
# Source: mongodb-query-exporter/templates/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: release-name-mongodb-query-exporter
  labels:
    app.kubernetes.io/name: mongodb-query-exporter
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/managed-by: Helm
    helm.sh/chart: mongodb-query-exporter-3.0.2
    label1: value1
label2: value2
---
# Source: mongodb-query-exporter/templates/secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: release-name-mongodb-query-exporter
  labels:
    app.kubernetes.io/name: mongodb-query-exporter
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/managed-by: Helm
    helm.sh/chart: mongodb-query-exporter-3.0.2
    label1: value1
label2: value2
[...]
---
# Source: mongodb-query-exporter/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: release-name-mongodb-query-exporter
  labels:
    app.kubernetes.io/name: mongodb-query-exporter
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/managed-by: Helm
    helm.sh/chart: mongodb-query-exporter-3.0.2
    label1: value1
label2: value2
data:
[...]
---
# Source: mongodb-query-exporter/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: release-name-mongodb-query-exporter
  labels:
    app.kubernetes.io/name: mongodb-query-exporter
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/managed-by: Helm
    helm.sh/chart: mongodb-query-exporter-3.0.2
    label1: value1
label2: value2
spec:
[...]

Expected behavior

Additional labels should be added properly to the default ones (chartLabels) and properly indented. In the case of the provided example, label2 should be indented at the same level as label1 and previous labels.
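A common cause of this symptom (an assumption about this chart's templates, not a confirmed diagnosis) is rendering the extra labels with toYaml piped through indent, which indents only relative lines; nindent indents every line of the block, including the first:

```yaml
# Hypothetical metadata snippet for the chart templates; the include name
# "mongodb-query-exporter.labels" is illustrative.
metadata:
  labels:
    {{- include "mongodb-query-exporter.labels" . | nindent 4 }}
    {{- with .Values.labels }}
    {{- toYaml . | nindent 4 }}
    {{- end }}
```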

Environment

  • mongodb-query-exporter version: any. Reproduced with both 2.1.0 and the latest master commit.
  • prometheus version: Irrelevant
  • MongoDB version: Irrelevant
  • Deployed as: Helm chart

Additional context

Would be happy to work on a solution and send a pull request if this bug is confirmed 🙂

Add a flag to remove process_* and go_*

Hi,

we use this nice service to feed metrics into Datadog. Unfortunately, it is not easy to tell Datadog to ignore certain metrics, and custom metrics cost money.

The metrics starting with go_ and process_ are not necessary for me, so I'd prefer to remove them if possible.

Describe the solution you'd like

A config item and/or environment variable that would filter out those metrics

Describe alternatives you've considered

Filtering on the receiving side (i.e. the Datadog agent), but that is not possible.
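For users scraping with Prometheus itself (rather than shipping to Datadog), these series can already be dropped at scrape time with standard metric relabeling; a sketch:

```yaml
# Prometheus scrape config that drops go_* and process_* series
# emitted by the exporter before they are stored.
scrape_configs:
- job_name: mongodb-query-exporter
  static_configs:
  - targets: ['localhost:9412']
  metric_relabel_configs:
  - source_labels: [__name__]
    regex: '(go|process)_.*'
    action: drop
```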

How to use "new Date(0)" in pipeline

I have the following pipeline:

pipeline: |
  [
    {"$match": {"book": {"$ne": null}}},
    {"$group": {
      "_id": "$env",
      "timestamp": {"$last": "$timestamp"},
      "bookability": {"$last": {"$multiply": ["$book", 100]}}
    }},
    {"$project": {
      "_id": "$_id",
      "bookability": "$bookability",
      "timestamp": {"$substr": [{"$subtract": ["$timestamp", new Date(0)]}, 0, -1]}
    }}
  ]

The idea behind this is to receive a metric (bookability) with two labels:

  • _id - the name of the environment;
  • timestamp - in epoch time, in string format. Therefore I can decode it in Prometheus.

The problem I have is with the timestamp parameter in the project part of the expression.
The above expression in the exact same format provides adequate result in mongo:
{ "_id" : "my_environment", "timestamp" : "1624349380000", "bookability" : 97.38846572361263 }

However, if I try to start the mongodb_exporter using the above pipeline I receive the below error:

{"level":"info","ts":1624371064.4728699,"caller":"v2/config.go:71","msg":"will listen on 0.0.0.0:9412"}
{"level":"info","ts":1624371064.4731367,"caller":"v2/config.go:101","msg":"use mongodb hosts []string{\"localhost:27017\"}"}
panic: failed to decode json aggregation pipeline: error decoding key 2.$project.timestamp.$substr.0.$subtract: invalid JSON literal. Position: 276, literal: new 

goroutine 1 [running]:
github.com/raffis/mongodb-query-exporter/cmd.glob..func1(0x14b67e0, 0x14f3f98, 0x0, 0x0)
        /home/thristov/mongodb-query-exporter-prometheus-mongodb-query-exporter-1.1.3/cmd/root.go:64 +0x3f6
github.com/spf13/cobra.(*Command).execute(0x14b67e0, 0xc00018a010, 0x0, 0x0, 0x14b67e0, 0xc00018a010)
        /home/thristov/go/pkg/mod/github.com/spf13/[email protected]/command.go:846 +0x2c2
github.com/spf13/cobra.(*Command).ExecuteC(0x14b67e0, 0x0, 0xcf6ee0, 0xc000114058)
        /home/thristov/go/pkg/mod/github.com/spf13/[email protected]/command.go:950 +0x375
github.com/spf13/cobra.(*Command).Execute(...)
        /home/thristov/go/pkg/mod/github.com/spf13/[email protected]/command.go:887
github.com/raffis/mongodb-query-exporter/cmd.Execute(...)
        /home/thristov/mongodb-query-exporter-prometheus-mongodb-query-exporter-1.1.3/cmd/root.go:97
main.main()
        /home/thristov/mongodb-query-exporter-prometheus-mongodb-query-exporter-1.1.3/main.go:8 +0x2e

If I enclose new Date(0) in quotes like "$new Date(0)" I am able to start the exporter but the result is empty both from the exporter and from mongo.

So my question is: how can I add new Date(0) to the pipeline so that the exporter accepts it correctly?
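A JSON-compatible alternative, assuming MongoDB >= 4.0 for the $toLong and $toString operators: $toLong converts a date directly to its epoch value in milliseconds, which avoids the new Date(0) subtraction entirely (the $substr trick is then replaced by $toString):

```yaml
# Sketch: same aggregation expressed in plain JSON; whether this matches
# the questioner's exact needs (milliseconds vs. seconds) is untested.
pipeline: |
  [
    {"$match": {"book": {"$ne": null}}},
    {"$group": {
      "_id": "$env",
      "timestamp": {"$last": "$timestamp"},
      "bookability": {"$last": {"$multiply": ["$book", 100]}}
    }},
    {"$project": {
      "_id": "$_id",
      "bookability": "$bookability",
      "timestamp": {"$toString": {"$toLong": "$timestamp"}}
    }}
  ]
```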

collect data

Hi,
how can I collect data from the database via the exporter using a MongoDB command like:
example: db.mytb1.find({"name":"value"}).count()
Is it possible?
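A find(...).count() can be expressed as an aggregation pipeline, which is what the exporter runs; a sketch reusing the collection name from the question (the database and metric names here are placeholders):

```yaml
aggregations:
- database: mydb        # placeholder database name
  collection: mytb1
  metrics:
  - name: mytb1_matching_documents
    help: 'Number of documents where name equals value'
    value: total
  mode: pull
  pipeline: |
    [
      {"$match": {"name": "value"}},
      {"$count": "total"}
    ]
```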


Latest container image contains vulnerabilities which are already fixed on master

Describe the bug

The latest container image in the GHCR repo (https://github.com/raffis/mongodb-query-exporter/pkgs/container/mongodb-query-exporter/17247589?tag=v1.0.0) is 9 months old, and a vulnerability scan with trivy (v0.31.2 with Vulnerability DB from 2022-12-12 12:07:26.803048482 +0000 UTC) shows that the container image contains 22 CVEs:

ghcr.io/raffis/mongodb-query-exporter:v1.0.0 (debian 11.2)
==========================================================
Total: 22 (UNKNOWN: 0, LOW: 12, MEDIUM: 2, HIGH: 1, CRITICAL: 7)

However, I've seen that 3 days ago some Dependabot pull requests were merged into master, among them #72, which fixed a high severity vulnerability (CVE-2022-21698). So I cloned the git repo, built the Dockerfile.release file myself (tagged it locally as latest) and scanned it with trivy again. This image contains 0 CVEs:

mongodb-query-exporter:latest (debian 11.5)
===========================================
Total: 0 (UNKNOWN: 0, LOW: 0, MEDIUM: 0, HIGH: 0, CRITICAL: 0)

So the latest commit on master currently has no vulnerabilities. Unfortunately, there is no corresponding container image for this commit. I would like to ask if you could tag the latest master commit and create a new release so that a new container image is pushed to the package repo.

To Reproduce

Scan the latest container image from the GHCR package repo.

Expected behavior

Scan of the latest container image from the GHCR package repo contains no vulnerabilities (or at least none which are already fixed on master).

Environment

  • mongodb-query-exporter version: v1.0.0 and latest git commit on master f24440c
  • trivy: v0.31.2 with Vulnerability DB from 2022-12-12 12:07:26.803048482 +0000 UTC
  • Deployed as: docker

mapped folder '/etc/mongodb-query-exporter' worked in v1 but not in v2

Describe the bug

It's been working well with the following setting

  mongodb-query-exporter:
  # https://github.com/raffis/mongodb-query-exporter
  # docker pull ghcr.io/raffis/mongodb-query-exporter:v1.0.0
    image: ghcr.io/raffis/mongodb-query-exporter:v1.0.0
    container_name: mongodb-query-exporter
    hostname: mongodb-query-exporter
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Asia/Shanghai
    volumes:
      - ./:/etc/mongodb-query-exporter
    restart: always
    ports:
      - 9412:9412

I just removed :v1.0.0 and ran docker compose pull to upgrade to latest (2.0.1); it could not find the config.yaml anymore.
It was the same when I added 2.0.1.

The error was as follows:

panic: Config File "config" Not Found in "[/home/nonroot/.mongodb_query_exporter /etc/mongodb_query_exporter]"

main.main()
	/home/runner/work/mongodb-query-exporter/mongodb-query-exporter/cmd/root.go:117 +0x25

after quite a few trials, I could get it to work with the old /tmp trick

  mongodb-query-exporter:
  # https://github.com/raffis/mongodb-query-exporter
  # docker pull ghcr.io/raffis/mongodb-query-exporter:2.0.1
    image: ghcr.io/raffis/mongodb-query-exporter:2.0.1
    container_name: mongodb-query-exporter
    hostname: mongodb-query-exporter
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Asia/Shanghai
      - MDBEXPORTER_CONFIG=/tmp/config.yaml
    volumes:
      - ./:/tmp
    restart: always
    ports:
      - 9412:9412

To Reproduce

  • used the above snippet inside a docker-compose.yml file
  • run docker compose up and check the log for the error message
panic: Config File "config" Not Found in "[/home/nonroot/.mongodb_query_exporter /etc/mongodb_query_exporter]"

Expected behavior

config.yaml being found as documented

Environment

  • mongodb-query-exporter [version:v2.0.1]
  • prometheus version: [docker latest]
  • MongoDB version: [v6]
  • Deployed as: [docker]

Additional context

In retrospect, I suspect some fundamental user-related access-rights handling is missing in the code or documentation, which means the mapped file cannot be accessed by the app inside the Docker container; using /tmp bypassed it.
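A likely fix, judging only from the panic message above: the search path shown there is /etc/mongodb_query_exporter (with underscores), while the mounted folder is /etc/mongodb-query-exporter (with hyphens). Mounting the config at the directory the binary actually searches may work without the /tmp workaround:

```yaml
# Compose sketch; underscores in the mount target match the search
# path reported in the panic message. Untested against v2.0.1.
  mongodb-query-exporter:
    image: ghcr.io/raffis/mongodb-query-exporter:2.0.1
    volumes:
      - ./:/etc/mongodb_query_exporter
    ports:
      - 9412:9412
```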

Question about connecting to MongoDB Atlas

Hi,

I'm trying to integrate this exporter to collect metrics from a MongoDB Atlas database.

When I try to collect metrics I get the following error.

{"level":"error","ts":1669911025.5999675,"caller":"collector/collector.go:256","msg":"failed to generate metric: server selection error: context deadline exceeded, current topology: { Type: ReplicaSetNoPrimary, Servers: [{ Addr: :1234, Type: Unknown, Last error: connection() error occured during connection handshake: dial tcp :1235: i/o timeout }, { Addr: :1236, Type: Unknown, Last error: connection() error occured during connection handshake: dial tcp :1237: i/o timeout }, { Addr: :1038, Type: Unknown, Last error: connection() error occured during connection handshake: dial tcp :1238: i/o timeout }, ] }"}

DNS resolution seems fine, and the issue appears to be related to the SSL/TLS setup; however, other containers on the same machines connect to the same MongoDB database without any issue.

Could you help me?

How to use JS in Pipeline?

I want to get data from the last hour, but JS is not supported by the exporter. If I wrap the JS in quotation marks, it is treated as a string and not executed.

Example:

 [
        {
            "$match": {
                "end_time": {
                    "$gte": new Date().getTime()/1000 - new Date().getTime()/1000 % 3600 - 1 * 60 * 60,
                    "$lt": new Date().getTime()/1000 - new Date().getTime()/1000 % 3600 - 0 * 60 * 60
                }
            }
        },
....
]

Errors:

Mar 18 10:05:38 pc prometheus-mongodb-query-exporter[14594]: panic: While parsing config: yaml: line 26: did not find expected key
Mar 18 10:05:38 pc prometheus-mongodb-query-exporter[14594]: goroutine 1 [running]:
Mar 18 10:05:38 pc prometheus-mongodb-query-exporter[14594]: github.com/raffis/mongodb-query-exporter/cmd.initConfig()
Mar 18 10:05:38 pc prometheus-mongodb-query-exporter[14594]: 	/go/src/github.com/raffis/mongodb-query-exporter/cmd/root.go:144 +0x205
Mar 18 10:05:38 pc prometheus-mongodb-query-exporter[14594]: github.com/spf13/cobra.(*Command).preRun(0x147f3a0)
Mar 18 10:05:38 pc prometheus-mongodb-query-exporter[14594]: 	/go/pkg/mod/github.com/spf13/[email protected]/command.go:872 +0x49
Mar 18 10:05:38 pc prometheus-mongodb-query-exporter[14594]: github.com/spf13/cobra.(*Command).execute(0x147f3a0, 0xc0000aa160, 0x2, 0x2, 0x147f3a0, 0xc0000aa160)
Mar 18 10:05:38 pc prometheus-mongodb-query-exporter[14594]: 	/go/pkg/mod/github.com/spf13/[email protected]/command.go:808 +0x14f
Mar 18 10:05:38 pc prometheus-mongodb-query-exporter[14594]: github.com/spf13/cobra.(*Command).ExecuteC(0x147f3a0, 0x0, 0xce2960, 0xc00009c058)
Mar 18 10:05:38 pc prometheus-mongodb-query-exporter[14594]: 	/go/pkg/mod/github.com/spf13/[email protected]/command.go:950 +0x375
Mar 18 10:05:38 pc prometheus-mongodb-query-exporter[14594]: github.com/spf13/cobra.(*Command).Execute(...)
Mar 18 10:05:38 pc prometheus-mongodb-query-exporter[14594]: 	/go/pkg/mod/github.com/spf13/[email protected]/command.go:887
Mar 18 10:05:38 pc prometheus-mongodb-query-exporter[14594]: github.com/raffis/mongodb-query-exporter/cmd.Execute(...)
Mar 18 10:05:38 pc prometheus-mongodb-query-exporter[14594]: 	/go/src/github.com/raffis/mongodb-query-exporter/cmd/root.go:93
Mar 18 10:05:38 pc prometheus-mongodb-query-exporter[14594]: main.main()
Mar 18 10:05:38 pc prometheus-mongodb-query-exporter[14594]: 	/go/src/github.com/raffis/mongodb-query-exporter/main.go:8 +0x2e
Mar 18 10:05:38 pc systemd[1]: prometheus-mongodb-query-exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Mar 18 10:05:38 pc systemd[1]: prometheus-mongodb-query-exporter.service: Failed with result 'exit-code'.
Mar 18 10:05:38 pc systemd[1]: prometheus-mongodb-query-exporter.service: Scheduled restart job, restart counter is at 1.
[the identical panic and stack trace repeat on each systemd restart; the restart counter reaches 4]

Is it possible to somehow escape the JS so that it is executed, or to add this functionality?

Thanks
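As a possible workaround (not from the exporter docs, so treat this as a sketch): the pipeline has to be static extended JSON, but MongoDB itself can compute the time window server-side. Assuming MongoDB 5.0+ (for `$dateTrunc`; `$$NOW` needs 4.2+) and `end_time` stored as epoch seconds, "the previous full hour" can be expressed without any JS:

```json
[
  { "$match": { "$expr": { "$and": [
      { "$gte": [ "$end_time", { "$subtract": [
          { "$divide": [ { "$toLong": { "$dateTrunc": { "date": "$$NOW", "unit": "hour" } } }, 1000 ] },
          3600 ] } ] },
      { "$lt": [ "$end_time",
          { "$divide": [ { "$toLong": { "$dateTrunc": { "date": "$$NOW", "unit": "hour" } } }, 1000 ] } ] }
  ] } } }
]
```

`$dateTrunc` rounds `$$NOW` down to the start of the current hour, `$toLong` converts the date to epoch milliseconds, and dividing by 1000 yields the same epoch-second boundaries the JS snippet above computes.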

Automatically fill in release version for chart appVersion

Describe the change

A clear and concise description of what the change is about.

Current situation

Describe the current situation.

Should

Describe the changes you would like to propose.

Additional context

Add any other context about the problem here.

Expose Pod Information to Containers Through Environment Variables

Is your feature request related to a problem? Please describe

I need to expose pod information on my custom metrics from the container.
In more specific, I need to add the following environment variable:

env:
    - name: POD_NAME
      valueFrom:
          fieldRef:
              fieldPath: metadata.name

Currently, this is not supported on the deployment.

Describe the solution you'd like

Ideally, I would like to have a key/value field where:

  • key is the environment variable name
  • value is the fieldPath where to retrieve the pod information

Example:

envFromFieldPath:
    POD_NAME: "metadata.name"

Describe alternatives you've considered

I have considered changing myself the chart, and adding in the deployment.yaml:

        {{- range $key, $value := .Values.extraEnvFieldPath }}
          - name: {{ $key }}
            valueFrom:
              fieldRef:
                fieldPath: {{ $value }}
          {{- end }}

Then, I specify in my values.yaml:

  extraEnvFieldPath:
    POD_NAME: "metadata.name"

After I install the chart, I get on the container environment variables:

    Environment:
      POD_NAME:                 metrics-exporter-prometheus-mongodb-query-exporter-6545c56fbxgv (v1:metadata.name)

Additional context

NA

Require hints to connect to MongoDB on Atlas

How do I configure a simple URI for MongoDB on Atlas, where the URI would look something like this:

mongodb+srv://:@some.thing.mongodb.net

?

A simple example would be REALLY helpful.

Apologies for the rookie question.
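For reference, a hedged sketch (the hostname and credentials below are placeholders, not real values): an Atlas SRV connection string can be passed via the `MDBEXPORTER_MONGODB_URI` environment variable described in the README:

```shell
# Placeholders: replace myuser, mypassword and the host with the values from
# the Atlas "Connect your application" dialog.
export MDBEXPORTER_MONGODB_URI='mongodb+srv://myuser:mypassword@cluster0.example.mongodb.net/admin'
```

The `mongodb+srv://` scheme makes the Go driver resolve the replica-set members via DNS SRV records, so no individual host list is needed.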

tls support

Hello! I'm not finding a way to pass CA and PEM certs for ssl connections to mongodb. Is it supported? Any plans to support it?

Suggestion: Have readPreference set to "secondaryPreferred" by default

I recommend that the query exporter be set to readPreference "secondaryPreferred" by default, so that it generally reads from secondaries rather than the primary.

The advantage of this is that it avoids load on your primary DB. In particular, aggregation pipeline queries often seem to lock the collection and make other queries (e.g. from your application) artificially slow.

The main disadvantage of this is that your exported metrics may be subject to replication lag (usually less than 1 sec) vs. the state of the primary. Since the query exporter is typically used for passive stats exporting, this is generally acceptable in the vast majority of cases.
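Until such a default exists, `readPreference` can already be set per server through the standard MongoDB connection-string options (a sketch with placeholder hosts and replica-set name):

```yaml
servers:
- name: main
  uri: 'mongodb://host1:27017,host2:27017/?replicaSet=rs0&readPreference=secondaryPreferred'
```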

MongoDB uri password as env

Hi @raffis

I'm trying to authenticate against my MongoDB with the password taken from an environment variable in the URI.
Example:

uri: mongodb://test:$TEST@localhost

Unfortunately this does not seem to work and the following message appears in the log.

auth error: sasl conversation error: unable to authenticate using mechanism "SCRAM-SHA-256": (AuthenticationFailed) Authentication failed., awaiting the next pull.

Do you know what causes this error? Or does your Go application not support passing the MongoDB user password as an environment variable in the URI string?
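Per the README, the URI is substituted using the `${MY_ENV}` syntax, so the braces appear to be required; a bare `$TEST` is not expanded. A sketch:

```yaml
# export TEST=secret   # set in the exporter's environment before starting it
uri: mongodb://test:${TEST}@localhost
```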

Question about debugging

I have managed to setup mongodb-query-exporter and i get the mongodb_query_exporter_query_total metric out of it.

Currently I'm unable to get my first aggregation to work, and I wonder whether it is a connection error or a configuration error. Could someone give me any hints? Any idea what has happened to my kookoo aggregation?

My config is as follows:

version: 3.0
bind: 0.0.0.0:9412
log:
  encoding: json
  level: info
  development: false
  disableCaller: false
global:
  queryTimeout: 10
  maxConnection: 3
  defaultCache: 0
servers:
- name: main
  uri: [connection string from Azure portal that I know , works]
- metrics:
  - name: kookoo
    help: 'How many solutions presented in the cosmosDB'
    value: total
  database: mwise
  collection: solutions
  servers: [main]
  mode: push
  pipeline: |
    [
      {"$count":"total"}
    ]

And logging prints like this:

16:38:27 linux systemd[1]: Stopping MongoDB Query Exporter...
16:38:27 linux systemd[1]: Stopped MongoDB Query Exporter.
16:38:27 linux systemd[1]: Started MongoDB Query Exporter.
16:38:27 linux mongodb-query-exporter[20738]: {"level":"info","ts":1657719507.8708718,"caller":"v3/config.go:103","msg":"will listen on 0.0.0.0:9412"}
16:38:27 linux mongodb-query-exporter[20738]: {"level":"info","ts":1657719507.870922,"caller":"v3/config.go:141","msg":"use mongodb hosts []string{"cosmos-mongo.mongo.cosmos.azure.com:10255"}"}
16:38:27 linux mongodb-query-exporter[20738]: {"level":"info","ts":1657719507.871182,"caller":"collector/collector.go:341","msg":"start changestream on mwise.solutions, waiting for changes"}
16:38:28 linux mongodb-query-exporter[20738]: {"level":"error","ts":1657719508.9258626,"caller":"collector/collector.go:329","msg":"failed to start changestream listener (BadValue) fullDocument option must be "updateLookup".; failed to watch for updates, fallback to pull"}
16:38:37 linux mongodb-query-exporter[20738]: {"level":"error","ts":1657719517.7763827,"caller":"collector/collector.go:256","msg":"failed to generate metric: context deadline exceeded"}
16:38:52 linux mongodb-query-exporter[20738]: {"level":"error","ts":1657719532.7756834,"caller":"collector/collector.go:256","msg":"failed to generate metric: context deadline exceeded"}
16:39:07 linux mongodb-query-exporter[20738]: {"level":"error","ts":1657719547.7756813,"caller":"collector/collector.go:256","msg":"failed to generate metric: context deadline exceeded"}
16:39:22 linux mongodb-query-exporter[20738]: {"level":"error","ts":1657719562.7757967,"caller":"collector/collector.go:256","msg":"failed to generate metric: context deadline exceeded"}
16:39:37 linux mongodb-query-exporter[20738]: {"level":"error","ts":1657719577.7758172,"caller":"collector/collector.go:256","msg":"failed to generate metric: context deadline exceeded"}
16:39:52 linux mongodb-query-exporter[20738]: {"level":"error","ts":1657719592.7760677,"caller":"collector/collector.go:256","msg":"failed to generate metric: context deadline exceeded"}
16:40:07 linux mongodb-query-exporter[20738]: {"level":"error","ts":1657719607.777052,"caller":"collector/collector.go:256","msg":"failed to generate metric: context deadline exceeded"}
16:40:22 linux mongodb-query-exporter[20738]: {"level":"error","ts":1657719622.7757149,"caller":"collector/collector.go:256","msg":"failed to generate metric: context deadline exceeded"}
16:40:37 linux mongodb-query-exporter[20738]: {"level":"error","ts":1657719637.7755935,"caller":"collector/collector.go:256","msg":"failed to generate metric: timed out while checking out a connection from connection pool: context deadline exceeded; maxPoolSize: 100, connections in use by cursors: 0, connections in use by transactions: 0, connections in use by other operations: 0"}
16:40:52 linux mongodb-query-exporter[20738]: {"level":"error","ts":1657719652.775446,"caller":"collector/collector.go:256","msg":"failed to generate metric: timed out while checking out a connection from connection pool: context deadline exceeded; maxPoolSize: 100, connections in use by cursors: 0, connections in use by transactions: 0, connections in use by other operations: 0"}
16:41:07 linux mongodb-query-exporter[20738]: {"level":"error","ts":1657719667.775576,"caller":"collector/collector.go:256","msg":"failed to generate metric: timed out while checking out a connection from connection pool: context deadline exceeded; maxPoolSize: 100, connections in use by cursors: 0, connections in use by transactions: 0, connections in use by other operations: 0"}
16:41:22 linux mongodb-query-exporter[20738]: {"level":"error","ts":1657719682.7756364,"caller":"collector/collector.go:256","msg":"failed to generate metric: timed out while checking out a connection from connection pool: context deadline exceeded; maxPoolSize: 100, connections in use by cursors: 0, connections in use by transactions: 0, connections in use by other operations: 0"}
16:41:37 linux mongodb-query-exporter[20738]: {"level":"error","ts":1657719697.775633,"caller":"collector/collector.go:256","msg":"failed to generate metric: timed out while checking out a connection from connection pool: context deadline exceeded; maxPoolSize: 100, connections in use by cursors: 0, connections in use by transactions: 0, connections in use by other operations: 0"}
16:41:52 linux mongodb-query-exporter[20738]: {"level":"error","ts":1657719712.7755651,"caller":"collector/collector.go:256","msg":"failed to generate metric: timed out while checking out a connection from connection pool: context deadline exceeded; maxPoolSize: 100, connections in use by cursors: 0, connections in use by transactions: 0, connections in use by other operations: 0"}
16:42:07 linux mongodb-query-exporter[20738]: {"level":"error","ts":1657719727.7757118,"caller":"collector/collector.go:256","msg":"failed to generate metric: timed out while checking out a connection from connection pool: context deadline exceeded; maxPoolSize: 100, connections in use by cursors: 0, connections in use by transactions: 0, connections in use by other operations: 0"}
...

BR
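One thing that stands out in the config above (an observation, not an authoritative fix): the metric block starts with `- metrics:`, which makes it a second entry in the `servers` list rather than its own section. Based on the v3 example config shipped with the exporter, aggregations live under a top-level `aggregations` key, roughly like:

```yaml
servers:
- name: main
  uri: mongodb://localhost:27017  # placeholder; use your real connection string
aggregations:
- database: mwise
  collection: solutions
  servers: [main]
  mode: pull
  pipeline: |
    [
      {"$count": "total"}
    ]
  metrics:
  - name: kookoo
    help: 'How many solutions presented in the cosmosDB'
    value: total
```

Separately, the log shows the changestream listener failing against Cosmos DB ("fullDocument option must be updateLookup"), so `mode: pull` may be the safer choice there.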

Add the label for using multiple servers

Is your feature request related to a problem? Please describe

When I use multiple MongoDB servers, such as test and prod, I can't differentiate them in the metrics.
Let's say I defined the servers below:

servers:
- name: test
  uri: 'mongodb://readuser:readpassword@hostname_test:27017'
- name: prod
  uri: 'mongodb://readuser:readpassword@hostname_prod:27017'

And next I want to use both servers for one metric

servers: [] #Can also be empty, if empty the metric will be used for every server defined

But in the end, on the /metrics page, I see only one metric.

mongodb_query_exporter_query_total{aggregation="aggregation_4",result="SUCCESS",server="prod"} 1
mongodb_query_exporter_query_total{aggregation="aggregation_4",result="SUCCESS",server="test"} 1
sample_metric_name_gauge{mylabel="mylabel"} 1

And I would like to see something like this

mongodb_query_exporter_query_total{aggregation="aggregation_4",result="SUCCESS",server="prod"} 1
mongodb_query_exporter_query_total{aggregation="aggregation_4",result="SUCCESS",server="test"} 1
sample_metric_name_gauge{server="test", mylabel="mylabel"} 1
sample_metric_name_gauge{server="prod", mylabel="mylabel"} 3

Describe the solution you'd like

Is it possible to add a label with the server name to differentiate between them?
Please let me know if such a possibility already exists; I didn't find it in the README.

metric setting has no effect

I tried to configure a metric as a counter in config.yml as follows:

- database: edge
  collection: barcode_counter_log
  servers: [lgs11l] #Can also be empty, if empty the metric will be used for every server defined
  metrics:
  - name: lgs11l_sw_product_total
    type: counter  #Can also be empty, the default is gauge
    help: 'accumulated parts number'
    value: createTime
    overrideEmpty: true # if an empty result set is returned..
    emptyValue: 0       # create a metric with value 0
    labels: []
    constLabels: []
  mode: pull
  pipeline: |
    [
       {"$count":"createTime"}
    ]

However, the query always returns a gauge:

# HELP lgs11l_sw_product_total accumulated parts number
# TYPE lgs11l_sw_product_total gauge
lgs11l_sw_product_total 7

Am I doing anything wrong?

ๅŒๅญฆ๏ผŒๆ‚จ่ฟ™ไธช้กน็›ฎๅผ•ๅ…ฅไบ†285ไธชๅผ€ๆบ็ป„ไปถ๏ผŒๅญ˜ๅœจ2ไธชๆผๆดž๏ผŒ่พ›่‹ฆๅ‡็บงไธ€ไธ‹

ๆฃ€ๆต‹ๅˆฐ raffis/mongodb-query-exporter ไธ€ๅ…ฑๅผ•ๅ…ฅไบ†285ไธชๅผ€ๆบ็ป„ไปถ๏ผŒๅญ˜ๅœจ2ไธชๆผๆดž

ๆผๆดžๆ ‡้ข˜๏ผšjwt-go ๅฎ‰ๅ…จๆผๆดž
็ผบ้™ท็ป„ไปถ๏ผšgithub.com/dgrijalva/[email protected]+incompatible
ๆผๆดž็ผ–ๅท๏ผšCVE-2020-26160
ๆผๆดžๆ่ฟฐ๏ผšjwt-goๆ˜ฏไธชไบบๅผ€ๅ‘่€…็š„ไธ€ไธชGo่ฏญ่จ€็š„JWTๅฎž็Žฐใ€‚
jwt-go 4.0.0-preview1ไน‹ๅ‰็‰ˆๆœฌๅญ˜ๅœจๅฎ‰ๅ…จๆผๆดžใ€‚ๆ”ปๅ‡ป่€…ๅฏๅˆฉ็”จ่ฏฅๆผๆดžๅœจไฝฟ็”จ[]string{} for m[\"aud\"](่ง„่Œƒๅ…่ฎธ)็š„ๆƒ…ๅ†ตไธ‹็ป•่ฟ‡้ข„ๆœŸ็š„่ฎฟ้—ฎ้™ๅˆถใ€‚
ๅฝฑๅ“่Œƒๅ›ด๏ผš(โˆž, 4.0.0-preview1)
ๆœ€ๅฐไฟฎๅค็‰ˆๆœฌ๏ผš4.0.0-preview1
็ผบ้™ท็ป„ไปถๅผ•ๅ…ฅ่ทฏๅพ„๏ผšgithub.com/raffis/mongodb-query-exporter@->github.com/dgrijalva/[email protected]+incompatible

ๅฆๅค–่ฟ˜ๆœ‰2ไธชๆผๆดž๏ผŒ่ฏฆ็ป†ๆŠฅๅ‘Š๏ผšhttps://mofeisec.com/jr?p=aec436

Erro deploy default image

Hi, can you help me? I can't run the image on Kubernetes.

The image may not have /bin/sh:

Error response from daemon: OCI runtime create failed: container_linux.go:348: starting container process caused "exec: \"/bin/sh\": stat /bin/sh: no such file or directory": unknown [kubelet, dev-0-6020372984483016836]

add binaries

Hi, I would like to ask you to add binaries to releases

Metrics never updated within startPullListeners?

Current config:

bind: 0.0.0.0:9412
logLevel: debug
mongodb:
  uri: mongodb://127.0.0.1:27017
  connectionTimeout: 10
  maxConnection: 3
  defaultCacheTime: 15
metrics:
- name: requests_status
  type: gauge
  help: 'status of requests'
  value: count
  database: dbnamehere
  collection: somecollection
  constLabels:
  - type: somevalue
    status: finished
  realtime: false
  pipeline: |
    [
      { "$project": {
          "state": {
            "$cond": [
                { "$eq": [ "$done", true ] }, 1, 0
            ]
          }
        }
      },
      { "$group": {
          "_id": {},
          "count": {"$sum": "$state"}
        }
      }
    ]

<SKIPPED>

Here is how it looks in logs:

Mar 13 22:39:38 mongoz-72927 systemd[1]: Started mongodb_query_exporter.
Mar 13 22:39:38 mongoz-72927 mongodb_query_exporter[17743]: time="2020-03-13T22:39:38+03:00" level=info msg="using config file /usr/local/etc/mongodb_query_exporter/config.yml"
Mar 13 22:39:38 mongoz-72927 mongodb_query_exporter[17743]: time="2020-03-13T22:39:38+03:00" level=info msg="connect to mongodb, connect_timeout=10"
Mar 13 22:39:38 mongoz-72927 mongodb_query_exporter[17743]: time="2020-03-13T22:39:38+03:00" level=debug msg="ping mongodb and enforce connection"
Mar 13 22:39:38 mongoz-72927 mongodb_query_exporter[17743]: time="2020-03-13T22:39:38+03:00" level=debug msg="mongodb up an reachable, start listeners"
Mar 13 22:39:38 mongoz-72927 mongodb_query_exporter[17743]: time="2020-03-13T22:39:38+03:00" level=info msg="initialize metric requests_status"
Mar 13 22:39:38 mongoz-72927 mongodb_query_exporter[17743]: time="2020-03-13T22:39:38+03:00" level=info msg="initialize metric requests_status"
Mar 13 22:39:38 mongoz-72927 mongodb_query_exporter[17743]: time="2020-03-13T22:39:38+03:00" level=info msg="initialize metric requests_status"
Mar 13 22:39:38 mongoz-72927 mongodb_query_exporter[17743]: time="2020-03-13T22:39:38+03:00" level=info msg="initialize metric requests_status"
Mar 13 22:39:38 mongoz-72927 mongodb_query_exporter[17743]: time="2020-03-13T22:39:38+03:00" level=info msg="initialize metric test1"
Mar 13 22:39:38 mongoz-72927 mongodb_query_exporter[17743]: time="2020-03-13T22:39:38+03:00" level=info msg="start http listener on 0.0.0.0:9412"
Mar 13 22:39:38 mongoz-72927 mongodb_query_exporter[17743]: time="2020-03-13T22:39:38+03:00" level=info msg="initialize metric requests_status"
Mar 13 22:39:38 mongoz-72927 mongodb_query_exporter[17743]: time="2020-03-13T22:39:38+03:00" level=info msg="initialize metric requests_status"
Mar 13 22:39:38 mongoz-72927 mongodb_query_exporter[17743]: time="2020-03-13T22:39:38+03:00" level=debug msg="aggregate mongodb pipeline [\n  { \"$project\": {\n      \"state\": {\n        \"$cond\": [\n
Mar 13 22:39:38 mongoz-72927 mongodb_query_exporter[17743]: time="2020-03-13T22:39:38+03:00" level=debug msg="aggregate mongodb pipeline [\n  { \"$project\": {\n      \"state\": {\n        \"$cond\": [\n
Mar 13 22:39:38 mongoz-72927 mongodb_query_exporter[17743]: time="2020-03-13T22:39:38+03:00" level=info msg="initialize metric requests_status"
Mar 13 22:39:38 mongoz-72927 mongodb_query_exporter[17743]: time="2020-03-13T22:39:38+03:00" level=debug msg="aggregate mongodb pipeline [\n  { \"$project\": {\n      \"state\": {\n        \"$cond\": [\n
Mar 13 22:39:38 mongoz-72927 mongodb_query_exporter[17743]: time="2020-03-13T22:39:38+03:00" level=info msg="initialize metric requests_status"
Mar 13 22:39:38 mongoz-72927 mongodb_query_exporter[17743]: time="2020-03-13T22:39:38+03:00" level=debug msg="aggregate mongodb pipeline [\n  { \"$project\": {\n      \"state\": {\n        \"$cond\": [\n
Mar 13 22:39:38 mongoz-72927 mongodb_query_exporter[17743]: time="2020-03-13T22:39:38+03:00" level=info msg="initialize metric test1"
Mar 13 22:39:38 mongoz-72927 mongodb_query_exporter[17743]: time="2020-03-13T22:39:38+03:00" level=debug msg="aggregate mongodb pipeline [\n  { \"$project\": {\n      \"state\": {\n        \"$cond\": [\n
Mar 13 22:39:38 mongoz-72927 mongodb_query_exporter[17743]: time="2020-03-13T22:39:38+03:00" level=debug msg="found record map[_id:map[] count:%!s(int32=0)] from metric requests_status"
Mar 13 22:39:38 mongoz-72927 mongodb_query_exporter[17743]: time="2020-03-13T22:39:38+03:00" level=debug msg="found record map[_id:map[] count:%!s(int32=40697)] from metric requests_status"
Mar 13 22:39:38 mongoz-72927 mongodb_query_exporter[17743]: time="2020-03-13T22:39:38+03:00" level=debug msg="found record map[_id:map[] count:%!s(int32=45078)] from metric test1"
Mar 13 22:39:38 mongoz-72927 mongodb_query_exporter[17743]: time="2020-03-13T22:39:38+03:00" level=debug msg="found record map[_id:map[] count:%!s(int32=45078)] from metric requests_status"
Mar 13 22:39:38 mongoz-72927 mongodb_query_exporter[17743]: time="2020-03-13T22:39:38+03:00" level=debug msg="found record map[_id:map[] count:%!s(int32=0)] from metric requests_status"
Mar 13 22:39:42 mongoz-72927 mongodb_query_exporter[17743]: time="2020-03-13T22:39:42+03:00" level=debug msg="handle incoming http request /metrics"
Mar 13 22:39:42 mongoz-72927 mongodb_query_exporter[17743]: time="2020-03-13T22:39:42+03:00" level=debug msg="handle incoming http request /metrics"
Mar 13 22:39:57 mongoz-72927 mongodb_query_exporter[17743]: time="2020-03-13T22:39:57+03:00" level=debug msg="handle incoming http request /metrics"
Mar 13 22:39:57 mongoz-72927 mongodb_query_exporter[17743]: time="2020-03-13T22:39:57+03:00" level=debug msg="handle incoming http request /metrics"
Mar 13 22:40:12 mongoz-72927 mongodb_query_exporter[17743]: time="2020-03-13T22:40:12+03:00" level=debug msg="handle incoming http request /metrics"
Mar 13 22:40:12 mongoz-72927 mongodb_query_exporter[17743]: time="2020-03-13T22:40:12+03:00" level=debug msg="handle incoming http request /metrics"
Mar 13 22:40:27 mongoz-72927 mongodb_query_exporter[17743]: time="2020-03-13T22:40:27+03:00" level=debug msg="handle incoming http request /metrics"
Mar 13 22:40:27 mongoz-72927 mongodb_query_exporter[17743]: time="2020-03-13T22:40:27+03:00" level=debug msg="handle incoming http request /metrics"
Mar 13 22:40:42 mongoz-72927 mongodb_query_exporter[17743]: time="2020-03-13T22:40:42+03:00" level=debug msg="handle incoming http request /metrics"

Based on a quick look at the source, I assume that when we're working in pull mode, we should see "waiting" messages from main.go:380:

log.Debugf("wait %ds to refresh metric %s", metric.CacheTime, metric.Name)

But it never happens. Is this a bug, or am I missing something?

Thanks.

PS: MongoDB 3.0; the exporter binary was taken from the latest Docker image:

# sha256sum /usr/local/bin/mongodb_query_exporter
c41ac1b672fa656a5375dd49d2329a7668e12ae64a753fbcc6215da4c8cc6e25  /usr/local/bin/mongodb_query_exporter

[Question about usage]

Hello, I have a problem. I have this database:
[screenshot of the database collection]

And I am using this config:
[screenshot of the config]

When it runs, the following error occurs:
[screenshot of the error]

Could you please tell me what the problem is and how to write a correct database query so the tool works properly?
Or could you attach an example of a simple data find query?
Thank you in advance!

Provide kustomize manifests

Is your feature request related to a problem? Please describe

No

Describe the solution you'd like

Provide kustomization deployment besides helm chart.

Describe alternatives you've considered

Using helm.

Make `checksum/hash annotations` work with `existingConfig`

Is your feature request related to a problem? Please describe

When using the existingConfig value, the checksum/hash annotation is not computed with respect to the config actually used.

Describe the solution you'd like

Adapt the template so that it works with either the embedded config or an externally provided one.

Additional context

  • I'm using this chart as a dependency of another chart, and the config is templated in the parent chart. I do that since parts of the ConfigMap depend on my environments, and the chart values are used in the ConfigMap templating.

Describe alternatives you've considered

  • Better: allow using values variables inside the values file. I'm not an expert at writing Helm templates, but it seems to be possible with special treatment in the templates.
  • My current workaround: provide the annotations myself via podAnnotations.

This is not critical, as the workaround is very simple, but I think it would make the chart more usable out of the box.

If I have some time next week I might try to work on that.

Support multiple metric values per aggregation pipeline

I'd like to be able to extract multiple metrics per aggregation pipeline. This would be more efficient because we could run one pipeline for N metrics (rather than N pipelines for N metrics).

Here's an example config structure that could be used:

  metrics:
  - exported_metrics:
     -  name: jobs_count_per_queue
        type: gauge
        help: "Count of jobs per queue."
        value: count
        labels: ["queue_name"]
        constLabels: []
     -  name: jobs_max_age_per_queue
        type: gauge
        help: "Maximum age of jobs per queue."
        value: max_seconds_in_queue
        labels: ["queue_name"]
        constLabels: []
     -  name: jobs_avg_age_per_queue
        type: gauge
        help: "Average age of jobs per queue."
        value: avg_seconds_in_queue
        labels: ["queue_name"]
        constLabels: []
    mode: pull
    cache: 0
    database: mongo_production
    collection: jobs
    pipeline: |
      [
        ....
        }, {
          "$project": {
            "_id": 0,
            "queue_name": "$_id",
            "count": "$count",
            "max_seconds_in_queue": "$max_seconds_in_queue",
            "avg_seconds_in_queue": "$ave_seconds_in_queue"
          }
        }
      ]

Rename deployment port to http-metrics

Describe the change

Rename the deployment/service port to http-metrics in the helm chart as well as the kustomize base.

Current situation

Port is called metrics.

Programmatically add metrics

Is there a way to programmatically add metrics ?

For example, I'd like to monitor the total number of documents in all collections.
When a new collection is created, do I have to edit the exporter config file and restart, or is there another way?

Thanks for your help.

Is this project still active?

Hi, we just wanted to know if this project is still active. At the moment we are using the beta release, and we were wondering whether a stable release is expected in the near future.

Thank you

How to define the ISO date filter

db.data.aggregate([{ "$match" : {"expire": {"$gt": new Date(ISODate().getTime() + (1000 * 3600 * 365 * 1))}}}, { "$sort": { "expire": -1 } }, {"$limit": 1},{ "$project": { "_id" : 1, "expire": 2}}]);

This aggregation fails with :
panic: failed to decode json aggregation pipeline: error decoding key 0: invalid JSON literal. Position: 35, literal: new
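The exporter decodes the pipeline as plain JSON, so shell-only helpers such as `new Date()` and `ISODate()` cannot be used there. A possible pure-JSON rewrite of the aggregation above, assuming MongoDB 4.2+ where the `$$NOW` aggregation variable is available (the millisecond offset 1314000000 mirrors the original `1000 * 3600 * 365` expression):

```json
[
  {"$match": {"$expr": {"$gt": ["$expire", {"$add": ["$$NOW", 1314000000]}]}}},
  {"$sort": {"expire": -1}},
  {"$limit": 1},
  {"$project": {"_id": 1, "expire": 1}}
]
```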

panic: Unsupported Config Type ""

Describe the bug

Using the docker container ghcr.io/raffis/mongodb-query-exporter:v1.0.0 to try it out, I had 2 problems:

  • cannot open the file /etc/mongodb-query-exporter/config.yaml, even though it is mapped already. I worked around it by changing the path to /tmp.
  • then I get this cryptic error: panic: Unsupported Config Type "" with all of the v1, v2, and v3 config.yml files from the /example folder. I modified some fields and then restored the original files; the error is the same either way.

below is my docker compose snippet

  mongodb_query_exporter:
  # https://github.com/raffis/mongodb-query-exporter
  # docker pull ghcr.io/raffis/mongodb-query-exporter:v1.0.0
    image: ghcr.io/raffis/mongodb-query-exporter:v1.0.0
    container_name: mongodb_query_exporter
    hostname: mongodb_query_exporter
    environment:
      - "MDBEXPORTER_CONFIG=/tmp"
    volumes:
      - ./:/tmp

Expected behavior

It should run as expected, without an error parsing config.yml; or, if parsing does fail, it should tell me where the problem is (line number, field, etc.).

Environment

  • mongodb-query-exporter version: v1.0.0
  • Deployed as: docker

Additional context

None

[Question] Metric with some label at value == 0

I'm not sure I completely understand the usage of labels and constLabels:

I would like to export a metric whose label value depends on one field in the collection (with a bounded set of values):

      metrics:
      - name: a_metric
        type: gauge
        value: count
        overrideEmpty: true 
        emptyValue: 0       
        labels: [state]
      cache: 0
      mode: pull
      pipeline: |
        [
          {"$group": {
              _id: "$some_state",
              count: {
                $sum:1
              }
            }},
          {"$project": {
              _id: 0,
              state: "$_id",
              count: 1
            }}
        ]

Let's say some_state can be A, B, or C.
If I understand correctly, they would be exported like so:

a_metric{state="A"} x
a_metric{state="B"} y
a_metric{state="C"} z

Sometimes there is no document with some_state == C.
Can I force the exporter to still export a_metric{state="C"} 0 (value 0) in that case? (constLabels seems related to this, but I'm not quite sure.)
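One pipeline-level workaround (a sketch, not a confirmed exporter feature) is to drive the aggregation from a small reference collection that lists every possible state, so that states with no matching documents still produce a row with count 0. Assuming a hypothetical `states` collection with one document per state (field `name`), and the documents living in a collection called `data` with the state stored in `some_state`:

```json
[
  {"$lookup": {
    "from": "data",
    "localField": "name",
    "foreignField": "some_state",
    "as": "docs"
  }},
  {"$project": {"_id": 0, "state": "$name", "count": {"$size": "$docs"}}}
]
```

The aggregation would then be configured against the `states` collection rather than `data`, so every known state always yields a document (with `count: 0` when no data matches).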

Thanks !

Include binary file to the binary collection

Describe the change

I expected to find a file called "mongodb_query_exporter" in the package prometheus-mongodb-query-exporter-2.1.0.tgz, but it isn't there. Could you add it?

Current situation

Running ls produces the following output: Chart.yaml README.md templates values.yaml.

Should

I guess ls should produce something like: Chart.yaml README.md templates mongodb_query_exporter values.yaml.

Additional context

I'm not experienced with Prometheus, MongoDB, or Go, so if there is something I have overlooked, I'm sorry.

Can pipeline use date math?

Hi,
In my case I would like to export some aggregation pipeline result as gauge metric.
In my particular case I want to limit the aggregation to last hour.

Is that supported and if so can you post an example please?

MongoDB shell pipeline is:

[
                { 
                    $match: {
                        $and: [
                            {startTime: {$gte: new Date(ISODate().getTime() - 1000 * 60 * 60)}},
                            {taskName: "CREATE_PO_FILE"},
                            {success: true}
                        ]
                    }
                }, 
                {$count: 'poGeneratedCount'}
]
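Since the exporter decodes the pipeline as plain JSON, the shell helpers above are not available in the config. A possible rewrite, assuming MongoDB 4.2+ where the `$$NOW` aggregation variable exists (3600000 ms is one hour, matching the original `1000 * 60 * 60`):

```json
[
  {
    "$match": {
      "$expr": {
        "$and": [
          {"$gte": ["$startTime", {"$subtract": ["$$NOW", 3600000]}]},
          {"$eq": ["$taskName", "CREATE_PO_FILE"]},
          {"$eq": ["$success", true]}
        ]
      }
    }
  },
  {"$count": "poGeneratedCount"}
]
```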

The query problem

Describe the bug

We have run mongodb-query-exporter after configuring it as per the instructions; however, the following error occurs during query execution, as shown in the screenshots below. Would you mind helping us?
(screenshot)

(screenshot)

Environment

  • mongodb-query-exporter version: [v1.0.0]
  • MongoDB version: [v3.6.1]

Provide support for Mongo 5

Is your feature request related to a problem? Please describe

This repo uses the MongoDB driver v1.4.1 which, based on the compatibility matrix provided by MongoDB, is not compatible with MongoDB 5.

Describe the solution you'd like

Update the driver to the latest available stable version.

Describe alternatives you've considered

There is no alternative other than forking the repo and maintaining it (which I'd prefer not to do 😇).

[Question] Is it possible to use a variable database?

I'm working on setting up the exporter for our project.

We have auto-deployed dev environments with one database per environment. Is it possible to make the database value inside the config depend on an environment variable?

For example:

aggregations:
  - database:  ${DB}
    collection: collection_x
    ...

Support Same Aggregation Across 1+ Collections

Describe the change

Support running a single aggregation pipeline across one or more collections.

Current situation

Currently only one collection is supported per aggregation. To work around this limitation, one must define the same aggregation pipeline multiple times, once for each collection.

This workaround is inelegant, and worse: defining the same aggregation on the same database with only the collection changed causes the server to error (I will have an issue up shortly). Thus, one must define multiple configuration files and run multiple mongodb-query-exporters (on different ports) in order to query multiple collections per server. Using multiple exporters is a valid workaround here, but it is inelegant and somewhat prohibitive in a tightly-managed network.

Should

Modify config (and parsing), aggregation initialization, and possibly caching behavior to support running a single aggregation pipeline across one or more collections.

Config parsing changes would involve either defining a new collections array of strings variable in tandem with the existing collection string variable, or more preferably transitioning the collection variable to a collections array of strings variable. This would probably require a new config version as it is breaking.

On the side of modifying aggregation initialization, two paths seem most evident. The first requires fewer changes and involves initializing separate aggregations per collection for the same database, likely somewhere around here in a separate v4 config parser. The alternative is initializing an aggregation as having one or more collections, which requires modifications to the relevant structs and to the caching behavior, also in a separate v4 config version parser. As cache entries currently appear to be keyed by aggregation pipeline and server, this would also require changes to cache insertion, invalidation, etc. in order to support an aggregation querying multiple collections (e.g. this line).

Additional context

Main point is querying multiple collections per server without having to run multiple exporters.

Stale data and "retry" on query error?

Hello, Raffael.

Recently I found this in logs:

Mar 16 02:20:03 mongoz-72927 mongodb_query_exporter[22759]: time="2020-03-16T02:20:03+03:00" level=error msg="failed to handle metric metric requests_status aggregation returned an emtpy result set, abort listen on metric requests_status"
Mar 16 02:20:03 mongoz-72927 mongodb_query_exporter[22759]: time="2020-03-16T02:20:03+03:00" level=error msg="failed to handle metric metric requests_status aggregation returned an emtpy result set, abort listen on metric requests_status"

And after that the exporter no longer exports an "actual" value for the metric, but keeps serving the stale one. Stale data can be misleading in many ways, so I suggest not exporting the "stale" value at all. Also, perhaps a better way to handle this would be to add a retry option: when the target collection becomes available for queries again, the exporter would continue its work.

What do you think about that?

PS: MongoDB 3.0, i.e. in "pull mode" (without "change streams").

Ability to extract replica set related info

Hi,

Tried the exporter and it works nice for custom queries to collections.

I also have a specific need to query the replica set status (rs.status()) and put certain details into a Grafana dashboard. A nice improvement would be to allow that.

Thanks,

customizable http endpoint path to metrics

Is your feature request related to a problem? Please describe

Our platform requires (as a convention) all services to expose metrics at the GET /-/metrics HTTP endpoint.
mongodb-query-exporter exposes the metrics at GET /metrics. It would be useful to let the user choose the path at which the metrics are exposed.

Describe the solution you'd like

A simple configuration parameter, like metricsEndpointPath
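Note that the v3 config quoted in another issue in this thread already includes a top-level `metricsPath` field; assuming that option does what its name suggests, the convention above might be satisfied with:

```yaml
# assumes the v3 config format and that metricsPath controls the HTTP path
metricsPath: /-/metrics
```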

Please help to support the command

  1. I think supporting the Mongo console functions like Date/ISODate would be very useful, because then we could put them in the config file directly.

All the MongoDB SQL tools support these console commands.

Could you please think about it?

The related issue:

  2. Could you please support multi-row aggregation results (id as label, count as value)?

(screenshot)

{ gauge:<value:89 > } was collected before with the same name and label values

Describe the bug

This exporter returns an error 500 when doing a simple aggregation query:

{ gauge:<value:89 > } was collected before with the same name and label values

To Reproduce

version: 3.0
bind: 0.0.0.0:9412
metricsPath: /metrics
log:
  encoding: json
  level: debug
  development: false
  disableCaller: false
global:
  queryTimeout: "10s"
  maxConnection: 3
  defaultCache: 0
servers:
- name: main
  uri: mongodb+srv://myuser:[email protected]/esport
aggregations:
- database: aaaa
  collection: ccccccc
  servers: [main] #Can also be empty, if empty the metric will be used for every server defined
  metrics:
  - name: reloader_amount
    type: gauge #Can also be empty, the default is gauge
    overrideEmpty: true # if an empty result set is returned..
    emptyValue: 0       # create a metric with value 0
    value: myAmount
    labels: []
    constLabels: []
  mode: push
  pipeline: |
    [
      {
        "$project" : {
          "createdAt" : "$createdAt",
          "myAmount" : "$myAmount"
        }
      },
      {
        "$match": {
          "myAmount" : {"$ne" : null }
        }
      }
    ]
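For context on why this error can occur at all: the pipeline above emits one document per matching record, and since `labels` is empty, every document maps to the same series, so the registry sees the same name/label combination more than once. A sketch that collapses the result to a single document (assuming the intended semantics is the sum of all `myAmount` values; substitute `$avg`, `$max`, etc. as appropriate):

```json
[
  {"$match": {"myAmount": {"$ne": null}}},
  {"$group": {"_id": null, "myAmount": {"$sum": "$myAmount"}}},
  {"$project": {"_id": 0, "myAmount": 1}}
]
```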

Expected behavior

Expect it to scrape the metric without throwing an error.

Environment

  • mongodb-query-exporter version: 2.0.3
  • prometheus version: 2.41.0
  • MongoDB version: 6
  • Deployed as: Helm release

Additional context

Mongodb aggregate date

Hi, first of all thank you very much for the tool.
Is it possible to make queries by date range?

How to return a count of 0 from the pipeline

Is your feature request related to a problem? Please describe

I am attempting to produce a metric that is the number of items, and that count can be 0. In this case I am trying to get the count of stuck orders in the system.

Pipeline:

[
  {"$match": {"status": "STUCK"}},
  {"$count": "total"}
]

This works great when there is at least one stuck order; however, if there are none, then no documents go through the pipeline, the $count stage does nothing, and zero records are returned, which means mongodb-query-exporter does not see a valid metric and does not report it. From looking around, the recommended approach is: if zero records come back from a $count stage, interpret that as a count of 0. I do not see a way of doing this in mongodb-query-exporter currently.

So far the best I have come up with is adding an extra item into the pipeline just before the count and then subtracting 1 at the end, but that is not very nice!

Describe the solution you'd like

A way of returning a 0 count
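Configs quoted elsewhere in this thread include `overrideEmpty`/`emptyValue` fields on a metric; assuming they behave as their inline comments suggest (emit the metric with a fixed value when the pipeline returns an empty result set), a sketch for this use case (metric name is a placeholder):

```yaml
metrics:
- name: orders_stuck_total
  type: gauge
  help: "Number of stuck orders."
  value: total
  overrideEmpty: true # if an empty result set is returned..
  emptyValue: 0       # ..report the metric with value 0
```

Combined with the `[{"$match": {"status": "STUCK"}}, {"$count": "total"}]` pipeline, this would report 0 whenever no documents match.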
