gluster_exporter's Issues

Export inode information

We should export gluster_node_inode_count and gluster_node_inode_free metrics, just as we do with the size metrics.

See,

# gluster volume status all detail
Status of volume: test-volume
------------------------------------------------------------------------------
Brick                : Brick glusterfs1:/data/brick1/gv0
TCP Port             : 49152
RDMA Port            : 0
Online               : Y
Pid                  : 1942
File System          : xfs
Device               : /dev/sdb1
Mount Options        : rw,seclabel,relatime,attr2,inode64,noquota
Inode Size           : 512
Disk Space Free      : 499.6GB
Total Disk Space     : 499.8GB
Inode Count          : 262143424
Free Inodes          : 262143110

@ofesseler any thoughts?
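
A rough sketch of how the collector could emit these, following the const-metric pattern already used for the size metrics; the descriptor variables, the namespace constant, the metric names, and the field names on the parsed XML structs are assumptions here, not the project's actual identifiers:

// Hypothetical descriptors, mirroring the labels of the existing node size metrics.
var (
	nodeInodeCount = prometheus.NewDesc(
		prometheus.BuildFQName(namespace, "", "node_inode_count"),
		"Total inodes reported for each brick.",
		[]string{"hostname", "path", "volume"}, nil,
	)
	nodeInodeFree = prometheus.NewDesc(
		prometheus.BuildFQName(namespace, "", "node_inode_free"),
		"Free inodes reported for each brick.",
		[]string{"hostname", "path", "volume"}, nil,
	)
)

// Inside Collect(), next to the existing sizeTotal/sizeFree handling:
for _, vol := range volumeStatusAll.VolStatus.Volumes.Volume {
	for _, node := range vol.Node {
		ch <- prometheus.MustNewConstMetric(
			nodeInodeCount, prometheus.GaugeValue, float64(node.InodesTotal),
			node.Hostname, node.Path, vol.VolName,
		)
		ch <- prometheus.MustNewConstMetric(
			nodeInodeFree, prometheus.GaugeValue, float64(node.InodesFree),
			node.Hostname, node.Path, vol.VolName,
		)
	}
}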

missing gluster_node_* metrics

Using release v0.2.5 (can't build from source, see my other bug report) and glusterfs 9.2

The Readme.md says the exporter acquires node information via gluster volume status all detail --xml. I can run this command and see per-node values such as sizeTotal, sizeFree, inodesTotal, and inodesFree. But when I launch the exporter and request /metrics, there is no mention of gluster_node_size_bytes_total, gluster_node_size_free_bytes, or gluster_node_inodes_free.

I launched with -log.level debug and there are no error messages; in fact, after it reports startup with "GlusterFS Metrics Exporter v0.2.5" there are no messages at all, not even that a request was received.

Output of gluster volume status all detail --xml

<cliOutput>
  <opRet>0</opRet>
  <opErrno>0</opErrno>
  <opErrstr/>
  <volStatus>
    <volumes>
      <volume>
        <volName>gameserver</volName>
        <nodeCount>2</nodeCount>
        <node>
          <hostname>[redacted-1]</hostname>
          <path>/opt/data/gluster/gameserver</path>
          <peerid>fb8dee35-0ea1-4d79-b733-e6edfb9f6b71</peerid>
          <status>1</status>
          <port>49154</port>
          <ports>
            <tcp>49154</tcp>
            <rdma>N/A</rdma>
          </ports>
          <pid>2468110</pid>
          <sizeTotal>3541511364608</sizeTotal>
          <sizeFree>2442654326784</sizeFree>
          <device>/dev/mapper/vol2-gluster</device>
          <blockSize>4096</blockSize>
          <mntOptions>rw,noexec,noatime,nodiratime</mntOptions>
          <fsName>ext4</fsName>
          <inodeSize>ext4</inodeSize>
          <inodesTotal>219611136</inodesTotal>
          <inodesFree>216624397</inodesFree>
        </node>
        <node>
          <hostname>[redacted-2]</hostname>
          <path>/opt/data/gluster/gameserver</path>
          <peerid>f938331e-e057-4cf7-a63a-b2121c3f87ba</peerid>
          <status>0</status>
          <port>N/A</port>
          <ports>
            <tcp>N/A</tcp>
            <rdma>N/A</rdma>
          </ports>
          <pid>-1</pid>
          <sizeTotal>3540553007104</sizeTotal>
          <sizeFree>2437145952256</sizeFree>
          <device>/dev/mapper/vol2-gluster</device>
          <blockSize>4096</blockSize>
          <mntOptions>rw,noexec,noatime,nodiratime</mntOptions>
          <fsName>ext4</fsName>
          <inodeSize>ext4</inodeSize>
          <inodesTotal>219611136</inodesTotal>
          <inodesFree>216624417</inodesFree>
        </node>
      </volume>
      <volume>
        <volName>iris</volName>
        <nodeCount>2</nodeCount>
        <node>
          <hostname>[redacted-1]</hostname>
          <path>/opt/data/gluster/iris</path>
          <peerid>fb8dee35-0ea1-4d79-b733-e6edfb9f6b71</peerid>
          <status>1</status>
          <port>49155</port>
          <ports>
            <tcp>49155</tcp>
            <rdma>N/A</rdma>
          </ports>
          <pid>2468118</pid>
          <sizeTotal>3541511364608</sizeTotal>
          <sizeFree>2442654326784</sizeFree>
          <device>/dev/mapper/vol2-gluster</device>
          <blockSize>4096</blockSize>
          <mntOptions>rw,noexec,noatime,nodiratime</mntOptions>
          <fsName>ext4</fsName>
          <inodeSize>ext4</inodeSize>
          <inodesTotal>219611136</inodesTotal>
          <inodesFree>216624397</inodesFree>
        </node>
        <node>
          <hostname>[redacted-2]</hostname>
          <path>/opt/data/gluster/iris</path>
          <peerid>f938331e-e057-4cf7-a63a-b2121c3f87ba</peerid>
          <status>0</status>
          <port>N/A</port>
          <ports>
            <tcp>N/A</tcp>
            <rdma>N/A</rdma>
          </ports>
          <pid>-1</pid>
          <sizeTotal>3540553007104</sizeTotal>
          <sizeFree>2437145952256</sizeFree>
          <device>/dev/mapper/vol2-gluster</device>
          <blockSize>4096</blockSize>
          <mntOptions>rw,noexec,noatime,nodiratime</mntOptions>
          <fsName>ext4</fsName>
          <inodeSize>ext4</inodeSize>
          <inodesTotal>219611136</inodesTotal>
          <inodesFree>216624417</inodesFree>
        </node>
      </volume>
    </volumes>
  </volStatus>
</cliOutput>

Output of curl -s http://localhost:9189/metrics | grep gluster_

# HELP gluster_brick_count Number of bricks at last query.
# TYPE gluster_brick_count gauge
gluster_brick_count{volume="gameserver"} 2
gluster_brick_count{volume="iris"} 2
# HELP gluster_exporter_build_info A metric with a constant '1' value labeled by version, revision, branch, and goversion from which gluster_exporter was built.
# TYPE gluster_exporter_build_info gauge
gluster_exporter_build_info{branch="master",goversion="go1.7.4",revision="0e16052c6a0399880d6c74ba5b94fe1ade283615",version="0.2.5"} 1
# HELP gluster_peers_connected Is peer connected to gluster cluster.
# TYPE gluster_peers_connected gauge
gluster_peers_connected 1
# HELP gluster_up Was the last query of Gluster successful.
# TYPE gluster_up gauge
gluster_up 1
# HELP gluster_volume_status Status code of requested volume.
# TYPE gluster_volume_status gauge
gluster_volume_status{volume="gameserver"} 1
gluster_volume_status{volume="iris"} 1
# HELP gluster_volumes_count How many volumes were up at the last query.
# TYPE gluster_volumes_count gauge
gluster_volumes_count 2

I am interested in continuing this project

Thanks, your code runs in our production environment, and I am interested in continuing this project.

But I am a new gopher. How can I contact you?

PS: I have changed your project's code structure to be like mysql_exporter's.

make build error

I'm getting the following error when I try to build:
[root@k8s-01 gluster_exporter]# make build

ensure vendoring
grouped write of manifest, lock and vendor: error while writing out vendor tree: failed to write dep tree: failed to export golang.org/x/crypto: unable to deduce repository and source type for "golang.org/x/crypto": unable to read metadata: unable to fetch raw metadata: failed HTTP request to URL "http://golang.org/x/crypto?go-get=1": Get "http://golang.org/x/crypto?go-get=1": proxyconnect tcp: dial tcp: lookup xn--http-u96a on 114.114.114.114:53: no such host

Please help

heal statistics heal-count

Hi, to get the heal status of volumes it is necessary to request the heal_info_files_count metric in Prometheus, which internally executes gluster v heal VOLNAME info.

This works fine when there are only a few files to heal, but when there are many files to heal the command takes far too long to return values.

Is it possible to change the internal command to gluster volume heal VOLNAME statistics heal-count?

Or to add a new metric (for example heal_info_files_statistics) that exposes this value?

Regards.
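
For reference, a minimal sketch of what invoking the cheaper command could look like, assuming the exporter keeps shelling out to the gluster CLI as it does today; whether this subcommand honours --xml depends on the GlusterFS version, so that flag is an assumption to verify:

package main

import (
	"fmt"
	"os/exec"
)

// healCountXML runs "gluster volume heal <vol> statistics heal-count --xml"
// and returns the raw output for parsing into a heal_info_files_statistics-style metric.
func healCountXML(volume string) ([]byte, error) {
	args := []string{"volume", "heal", volume, "statistics", "heal-count", "--xml"}
	out, err := exec.Command("gluster", args...).Output()
	if err != nil {
		return nil, fmt.Errorf("gluster %v failed: %w", args, err)
	}
	return out, nil
}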

Multiple volumes profile

Hi,

The exporter does not return any profiling data when used with a cluster that has two or more Gluster volumes. I have tested on two different clusters: on one it only reports profile data for one volume and ignores the other, and on the other it simply ignores the "-profile" option and displays nothing.
Gluster receives the profiling info command for all volumes, but the exporter just does not display the data.

[Gluster exporter errors] tried to execute [volume info] and got error: exit status 1

I am getting the following error when trying to use gluster exporter

I saw #49 (comment), so I set "privileged: true", but the following problems remain:
An error has occurred during metrics gathering:
collected metric gluster_up gauge:<value:1 > was collected before with the same name and label values

gluster exporter log
time="2022-05-11T11:54:49Z" level=error msg="tried to execute [volume info] and got error: exit status 1" source="gluster_client.go:23"
time="2022-05-11T11:54:49Z" level=error msg="couldn't parse xml volume info: exit status 1" source="main.go:211"
time="2022-05-11T11:54:49Z" level=error msg="tried to execute [peer status] and got error: exit status 1" source="gluster_client.go:23"
time="2022-05-11T11:54:49Z" level=error msg="couldn't parse xml of peer status: exit status 1" source="main.go:248"
time="2022-05-11T11:54:49Z" level=error msg="tried to execute [volume status all detail] and got error: exit status 1" source="gluster_client.go:23"
time="2022-05-11T11:54:49Z" level=error msg="couldn't parse xml of peer status: exit status 1" source="main.go:305"
time="2022-05-11T11:54:49Z" level=warning msg="no Volumes were given." source="main.go:322"
time="2022-05-11T11:54:49Z" level=error msg="tried to execute [volume list] and got error: exit status 1" source="gluster_client.go:23"

k8s yaml:

  - image: gluster_exporter:v0.2.7
    imagePullPolicy: IfNotPresent
    name: gluster-exporter
    ports:
    - containerPort: 9189
      protocol: TCP
    securityContext:
      capabilities: {}
      privileged: true
  - image: gluster/glusterfs:7.5.6
    imagePullPolicy: IfNotPresent
    name: glusterfs
    env:
    - name: HOST_DEV_DIR
      value: "/mnt/host-dev"
    - name: GLUSTER_BLOCKD_STATUS_PROBE_ENABLE
      value: "0"
    - name: GB_GLFS_LRU_COUNT
      value: "15"
    - name: TCMU_LOGDIR
      value: "/var/log/glusterfs/gluster-block"
    resources:
      ...
    volumeMounts:
    - name: glusterfs-heketi
      mountPath: "/var/lib/heketi"
    - name: glusterfs-run
      mountPath: "/run"
    - name: glusterfs-lvm
      mountPath: "/run/lvm"
    - name: glusterfs-etc
      mountPath: "/etc/glusterfs"
    - name: glusterfs-logs
      mountPath: "/var/log/glusterfs"
    - name: glusterfs-config
      mountPath: "/var/lib/glusterd"
    - name: glusterfs-host-dev
      mountPath: "/mnt/host-dev"
    - name: glusterfs-misc
      mountPath: "/var/lib/misc/glusterfsd"
    - name: glusterfs-block-sys-class
      mountPath: "/sys/class"
    - name: glusterfs-block-sys-module
      mountPath: "/sys/module"
    - name: glusterfs-cgroup
      mountPath: "/sys/fs/cgroup"
      readOnly: true
    - name: glusterfs-ssl
      mountPath: "/etc/ssl"
      readOnly: true
    - name: kernel-modules
      mountPath: "/lib/modules"
      readOnly: true
    securityContext:
      capabilities: {}
      privileged: true

client side metrics require glusterd

hi,

maybe I'm doing this wrong, but the metrics

  • volume_writeable
  • mount_successful

should be collected from the client; however, the exporter returns an error because it seems to require a running glusterd:
$ curl -v http://127.0.0.1:9189/metrics
* About to connect() to 127.0.0.1 port 9189 (#0)
*   Trying 127.0.0.1...
* Connected to 127.0.0.1 (127.0.0.1) port 9189 (#0)
> GET /metrics HTTP/1.1
> User-Agent: curl/7.29.0
> Host: 127.0.0.1:9189
> Accept: */*
> 
< HTTP/1.1 500 Internal Server Error
< Content-Type: text/plain; charset=utf-8
< X-Content-Type-Options: nosniff
< Date: Thu, 04 Oct 2018 12:21:09 GMT
< Content-Length: 152
< 
An error has occurred during metrics gathering:

collected metric gluster_up gauge:<value:1 >  was collected before with the same name and label values
* Connection #0 to host 127.0.0.1 left intact

promtool check-metrics issues

We have to fix these issues reported by the promtool checker.

# curl -s glusterfs1:9189/metrics  | promtool check-metrics
gluster_brick_data_read: counter metrics should have "_total" suffix
gluster_brick_data_written: counter metrics should have "_total" suffix
gluster_brick_duration: counter metrics should have "_total" suffix
gluster_brick_fop_hits: counter metrics should have "_total" suffix
gluster_brick_fop_latency_avg: counter metrics should have "_total" suffix
gluster_brick_fop_latency_max: counter metrics should have "_total" suffix
gluster_brick_fop_latency_min: counter metrics should have "_total" suffix
gluster_heal_info_files_count: counter metrics should have "_total" suffix
gluster_node_size_free_bytes: counter metrics should have "_total" suffix
gluster_node_size_total_bytes: counter metrics should have "_total" suffix
gluster_brick_count: non-histogram and non-summary metrics should not have "_count" suffix
gluster_heal_info_files_count: non-histogram and non-summary metrics should not have "_count" suffix
gluster_volumes_count: non-histogram and non-summary metrics should not have "_count" suffix
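
Most of these are descriptor-level renames. A hedged sketch of one of them (the variable name and label set are assumptions, and renaming is a breaking change for existing dashboards and alerts):

// Before: gluster_brick_data_read (counter, no suffix).
// After:  gluster_brick_data_read_total, keeping the counter type.
var brickDataRead = prometheus.NewDesc(
	prometheus.BuildFQName(namespace, "", "brick_data_read_total"),
	"Total amount of data read by a brick.",
	[]string{"volume", "brick"}, nil, // label names are assumptions
)

// Similarly, gauges like gluster_volumes_count would drop the "_count" suffix,
// e.g. become gluster_volumes.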

Installation failed "go get no longer supported"

using go version go1.18.1 linux/amd64

The installation instructions in readme.md say to run go get github.com/ofesseler/gluster_exporter

What I saw:

go: go.mod file not found in current directory or any parent directory.
'go get' is no longer supported outside a module.
To build and install a command, use 'go install' with a version,
like 'go install example.com/cmd@latest'
For more information, see https://golang.org/doc/go-get-install-deprecation
or run 'go help get' or 'go help install'.

So I tried go install github.com/ofesseler/gluster_exporter@latest

What I saw:

go: finding module for package github.com/prometheus/common/version
go: finding module for package github.com/prometheus/client_golang/prometheus/promhttp
go: finding module for package github.com/prometheus/common/log
go: finding module for package github.com/prometheus/client_golang/prometheus
go: found github.com/prometheus/client_golang/prometheus in github.com/prometheus/client_golang v1.12.1
go: found github.com/prometheus/client_golang/prometheus/promhttp in github.com/prometheus/client_golang v1.12.1
go: found github.com/prometheus/common/version in github.com/prometheus/common v0.34.0
go: finding module for package github.com/prometheus/common/log
[redacted]/go/pkg/mod/github.com/ofesseler/[email protected]/structs/xmlStructs.go:8:2: module github.com/prometheus/common@latest found (v0.34.0), but does not contain package github.com/prometheus/common/log
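
The failure comes from github.com/prometheus/common/log having been removed from recent releases of that module. One hedged way to unblock a module-based build is to swap the import for logrus (which the old wrapper was built on) wherever it is used; the package name and call site below are assumptions, not the project's actual code:

package structs // assumption: xmlStructs.go lives in a "structs" package

import (
	// was: "github.com/prometheus/common/log"
	log "github.com/sirupsen/logrus"
)

// logrus exposes the same Errorf/Infof/Warnf names the old wrapper provided,
// so most call sites can stay unchanged after the import swap.
func logParseError(err error) {
	log.Errorf("couldn't parse xml: %v", err)
}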

Getting build error

Hi,

I'm getting the following error when I try to build:

vagrant@es1:~/work/src/github.com/gluster_exporter$ make build
>> ensure vendoring
>> vetting code
# github.com/gluster_exporter
./main.go:329:59: node.InodesTotal undefined (type struct { Hostname string "xml:\"hostname\""; Path string "xml:\"path\""; PeerID string "xml:\"peerid\""; Status int "xml:\"status\""; Port int "xml:\"port\""; Ports struct { TCP int "xml:\"tcp\""; RDMA string "xml:\"rdma\"" } "xml:\"ports\""; Pid int "xml:\"pid\""; SizeTotal uint64 "xml:\"sizeTotal\""; SizeFree uint64 "xml:\"sizeFree\""; Device string "xml:\"device\""; BlockSize int "xml:\"blockSize\""; MntOptions string "xml:\"mntOptions\""; FsName string "xml:\"fsName\"" } has no field or method InodesTotal)
./main.go:333:56: node.InodesFree undefined (type struct { Hostname string "xml:\"hostname\""; Path string "xml:\"path\""; PeerID string "xml:\"peerid\""; Status int "xml:\"status\""; Port int "xml:\"port\""; Ports struct { TCP int "xml:\"tcp\""; RDMA string "xml:\"rdma\"" } "xml:\"ports\""; Pid int "xml:\"pid\""; SizeTotal uint64 "xml:\"sizeTotal\""; SizeFree uint64 "xml:\"sizeFree\""; Device string "xml:\"device\""; BlockSize int "xml:\"blockSize\""; MntOptions string "xml:\"mntOptions\""; FsName string "xml:\"fsName\"" } has no field or method InodesFree)
Makefile:33: recipe for target 'vet' failed
make: *** [vet] Error 2

Please help
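
The vet error suggests the checked-out structs/xmlStructs.go lacks the inode fields that main.go references. A sketch of the node struct with the two fields added; the layout is reconstructed from the error message and the XML tag names shown in the earlier issue, so treat the exact shape as an assumption:

Node []struct {
	Hostname   string `xml:"hostname"`
	Path       string `xml:"path"`
	PeerID     string `xml:"peerid"`
	Status     int    `xml:"status"`
	Port       int    `xml:"port"`
	Ports      struct {
		TCP  int    `xml:"tcp"`
		RDMA string `xml:"rdma"`
	} `xml:"ports"`
	Pid         int    `xml:"pid"`
	SizeTotal   uint64 `xml:"sizeTotal"`
	SizeFree    uint64 `xml:"sizeFree"`
	InodesTotal uint64 `xml:"inodesTotal"` // added field
	InodesFree  uint64 `xml:"inodesFree"`  // added field
	Device      string `xml:"device"`
	BlockSize   int    `xml:"blockSize"`
	MntOptions  string `xml:"mntOptions"`
	FsName      string `xml:"fsName"`
} `xml:"node"`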

Not getting _fop metrics

I built from the latest source.
Am I forgetting something?

./gluster_exporter --gluster.volumes data --web.listen-address=0.0.0.0:8010 --profile

Gluster exporter errors

I am getting the following error when trying to use gluster exporter

An error has occurred during metrics gathering:
collected metric gluster_up gauge:<value:1 > was collected before with the same name and label values

tried to execute [volume info] and got error: exit status 1" source="gluster_client.go:23
couldn't parse xml volume info: exit status 1" source="main.go:211

tried to execute [peer status] and got error: exit status 1" source="gluster_client.go:23
couldn't parse xml of peer status: exit status 1" source="main.go:248

tried to execute [volume status all detail] and got error: exit status 1" source="gluster_client.go:23
couldn't parse xml of peer status: exit status 1" source="main.go:305

Please help

Can't get all available metrics?

Hi,

From the README it looks like this exporter exposes a lot of metrics, so why can I see only 9 items? See the screenshot below:

[screenshot]

Multiple up checks are sent causing 500s

https://github.com/ofesseler/gluster_exporter/blob/master/main.go#L210

I'm seeing behaviour where the exporter returns a 500 from the metrics API with this error:

An error has occurred during metrics gathering:

collected metric gluster_up gauge:<value:1 >  was collected before with the same name and label values

My guess is that here

	volumeInfo, err := ExecVolumeInfo()

	if err != nil {
		log.Errorf("couldn't parse xml volume info: %v", err)
		ch <- prometheus.MustNewConstMetric(
			up, prometheus.GaugeValue, 0.0,
		)
	}

	// use OpErrno as indicator for up
	if volumeInfo.OpErrno != 0 {
		ch <- prometheus.MustNewConstMetric(
			up, prometheus.GaugeValue, 0.0,
		)
	} else {
		ch <- prometheus.MustNewConstMetric(
			up, prometheus.GaugeValue, 1.0,
		)
	}

ExecVolumeInfo is returning an error, and we are also sending an up status based on what is in volumeInfo.OpErrno. We should be short-circuiting there.
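
A minimal sketch of the short-circuit being suggested: emit gluster_up = 0 once on error and return, so the OpErrno branch below never sends a second sample for the same metric:

	volumeInfo, err := ExecVolumeInfo()
	if err != nil {
		log.Errorf("couldn't parse xml volume info: %v", err)
		ch <- prometheus.MustNewConstMetric(
			up, prometheus.GaugeValue, 0.0,
		)
		return // stop here; otherwise gluster_up is collected twice
	}

	// ... the existing OpErrno check then only runs on success.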

volume heal error

"tried to execute [volume heal volume info] and got error: exit status 234" source="gluster_client.go:23"

cliOutput error with glusterfs 3.7.11

INFO[0000] GlusterFS Metrics Exporter v0.2.7 source="main.go:531"
WARN[0001] no Volumes were given. source="main.go:322"
ERRO[0002] expected element type <cliOutput> but have <2f1c16b4-8477-493e-bf1b-3d0a1301fe39> source="xmlStructs.go:180"

glusterfs 3.7.11 built on Apr 18 2016 13:20:46

Failed to build docker images based on your Dockerfile

Steps to reproduce:

  1. git clone https://github.com/ofesseler/gluster_exporter
  2. docker build . -t beylistan/gluster-exporter

Step 6/19 : RUN apt-get update && apt-get install -y apt-utils apt-transport-https ca-certificates gnupg2
---> Running in 8cfff2a0bf71
Err:1 http://deb.debian.org/debian stretch InRelease
Temporary failure resolving 'deb.debian.org'
Err:2 http://security.debian.org/debian-security stretch/updates InRelease
Temporary failure resolving 'security.debian.org'
Err:3 http://deb.debian.org/debian stretch-updates InRelease
Temporary failure resolving 'deb.debian.org'
Reading package lists...
W: Failed to fetch http://deb.debian.org/debian/dists/stretch/InRelease Temporary failure resolving 'deb.debian.org'
W: Failed to fetch http://security.debian.org/debian-security/dists/stretch/updates/InRelease Temporary failure resolving 'security.debian.org'
W: Failed to fetch http://deb.debian.org/debian/dists/stretch-updates/InRelease Temporary failure resolving 'deb.debian.org'
W: Some index files failed to download. They have been ignored, or old ones used instead.
Reading package lists...
Building dependency tree...
Reading state information...
Package apt-utils is not available, but is referred to by another package.
This may mean that the package is missing, has been obsoleted, or
is only available from another source
However the following packages replace it:
apt

Package gnupg2 is not available, but is referred to by another package.
This may mean that the package is missing, has been obsoleted, or
is only available from another source
However the following packages replace it:
gpgv

E: Package 'apt-utils' has no installation candidate
E: Unable to locate package apt-transport-https
E: Unable to locate package ca-certificates
E: Package 'gnupg2' has no installation candidate
The command '/bin/sh -c apt-get update && apt-get install -y apt-utils apt-transport-https ca-certificates gnupg2' returned a non-zero code: 100

grafana dashboards

Do you have any grafana dashboards for this exporter? I could not find any on grafana.net

no Volumes were given. in v0.2.7

When I use gluster_exporter v0.2.7, something goes wrong:
WARN[0011] no Volumes were given. source="main.go:322"

I don't know how to fix it. Please help!

Incorrect volume name or error "no Volumes were given"

Hi,

Can anyone explain why all of my volume names are "devops-registry" in the metrics?

gluster_exporter version: v0.2.6
Glusterfs server version: 3.10.0

# HELP gluster_node_size_free_bytes Free bytes reported for each node on each instance. Labels are to distinguish origins
# TYPE gluster_node_size_free_bytes counter
gluster_node_size_free_bytes{hostname="10.10.0.100",path="/glusterfsvolumes/devops/devops-alertmanager/brick",volume="devops-registry"} 1.05489092608e+11
gluster_node_size_free_bytes{hostname="10.10.0.100",path="/glusterfsvolumes/devops/devops-es-data0/brick",volume="devops-registry"} 5.2823959552e+11
gluster_node_size_free_bytes{hostname="10.10.0.100",path="/glusterfsvolumes/devops/devops-es-data1/brick",volume="devops-registry"} 5.2823959552e+11
gluster_node_size_free_bytes{hostname="10.10.0.100",path="/glusterfsvolumes/devops/devops-es-data2/brick",volume="devops-registry"} 5.2823959552e+11
gluster_node_size_free_bytes{hostname="10.10.0.100",path="/glusterfsvolumes/devops/devops-grafana/brick",volume="devops-registry"} 1.0409398272e+10
gluster_node_size_free_bytes{hostname="10.10.0.100",path="/glusterfsvolumes/devops/devops-influxdb/brick",volume="devops-registry"} 9.925582848e+09
gluster_node_size_free_bytes{hostname="10.10.0.100",path="/glusterfsvolumes/devops/devops-prometheus/brick",volume="devops-registry"} 1.0427400192e+11
gluster_node_size_free_bytes{hostname="10.10.0.100",path="/glusterfsvolumes/devops/devops-registry/brick",volume="devops-registry"} 4.8898015232e+10
gluster_node_size_free_bytes{hostname="10.10.0.101",path="/glusterfsvolumes/devops/devops-alertmanager/brick",volume="devops-registry"} 1.05489092608e+11
gluster_node_size_free_bytes{hostname="10.10.0.101",path="/glusterfsvolumes/devops/devops-es-data0/brick",volume="devops-registry"} 5.2823959552e+11
gluster_node_size_free_bytes{hostname="10.10.0.101",path="/glusterfsvolumes/devops/devops-es-data1/brick",volume="devops-registry"} 5.2823959552e+11
gluster_node_size_free_bytes{hostname="10.10.0.101",path="/glusterfsvolumes/devops/devops-es-data2/brick",volume="devops-registry"} 5.2823959552e+11
gluster_node_size_free_bytes{hostname="10.10.0.101",path="/glusterfsvolumes/devops/devops-grafana/brick",volume="devops-registry"} 1.0409398272e+10
gluster_node_size_free_bytes{hostname="10.10.0.101",path="/glusterfsvolumes/devops/devops-influxdb/brick",volume="devops-registry"} 9.925582848e+09
gluster_node_size_free_bytes{hostname="10.10.0.101",path="/glusterfsvolumes/devops/devops-prometheus/brick",volume="devops-registry"} 1.04273997824e+11
gluster_node_size_free_bytes{hostname="10.10.0.101",path="/glusterfsvolumes/devops/devops-registry/brick",volume="devops-registry"} 4.8898019328e+10

Here is my glusterfs volume info:

Volume Name: devops-influxdb
Type: Replicate
Volume ID: 2803fc56-cdc6-469e-a57e-7982fc20023c
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 10.10.0.100:/glusterfsvolumes/devops/devops-influxdb/brick
Brick2: 10.10.0.101:/glusterfsvolumes/devops/devops-influxdb/brick
Options Reconfigured:
nfs.disable: on
transport.address-family: inet
 
Volume Name: devops-prometheus
Type: Replicate
Volume ID: 89c44318-e975-408d-9a6c-d15e44fddd0d
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 10.10.0.100:/glusterfsvolumes/devops/devops-prometheus/brick
Brick2: 10.10.0.101:/glusterfsvolumes/devops/devops-prometheus/brick
Options Reconfigured:
nfs.disable: on
transport.address-family: inet
 
Volume Name: devops-registry
Type: Replicate
Volume ID: 2bb07777-248d-46aa-863a-dad64a5207d0
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 10.10.0.100:/glusterfsvolumes/devops/devops-registry/brick
Brick2: 10.10.0.101:/glusterfsvolumes/devops/devops-registry/brick
Options Reconfigured:
nfs.disable: on
transport.address-family: inet

error logs from syslog:

Mar 27 15:25:03 prdsh01glus01 gluster_exporter[23074]: time="2017-03-27T15:25:03+08:00" level=warning msg="no Volumes were given." source="main.go:286"
Mar 27 15:25:08 prdsh01glus01 gluster_exporter[23074]: time="2017-03-27T15:25:08+08:00" level=warning msg="no Volumes were given." source="main.go:286"
Mar 27 15:25:32 prdsh01glus01 gluster_exporter[23074]: time="2017-03-27T15:25:32+08:00" level=warning msg="no Volumes were given." source="main.go:286"
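
One possible explanation, offered as an assumption rather than a confirmed diagnosis of v0.2.6, is that the node metrics are emitted with a volume name captured outside the per-volume loop, so every brick ends up labelled with the last volume parsed. A sketch of the nesting that would keep the label tied to the enclosing volume (descriptor and struct field names are assumptions):

for _, vol := range volumeStatus.VolStatus.Volumes.Volume {
	for _, node := range vol.Node {
		ch <- prometheus.MustNewConstMetric(
			nodeSizeFreeBytes, prometheus.GaugeValue, float64(node.SizeFree),
			node.Hostname, node.Path, vol.VolName, // volume label from the enclosing loop
		)
	}
}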

Not suitable with glusterfs version < 3.8

Here is a glusterd memory leak issue reported in 2015:
https://bugzilla.redhat.com/show_bug.cgi?id=1287517

The memory leak is caused by running "gluster volume status all" thousands of times, which is similar to what gluster_exporter does.
This issue has been fixed in glusterfs-3.8.

In my experience with glusterfs-3.6.9, the exporter causes a severe memory leak on the server
(Prometheus scrape period: 180s).
glusterd crashes from time to time due to running out of memory.
