
squid-exporter's Introduction


Note: I've been very busy over the past couple of months with my personal life and work. Thanks for filing issues and feature requests. I'll start going through them and provide updates very soon.

Squid Prometheus exporter

Exports squid metrics in Prometheus format

NOTE: As of release 1.0, metric names and some parameters have changed. Make sure you check the docs and update your deployments accordingly!

New

  • Using environment variables to configure the exporter
  • Adding custom labels to metrics
  • Enabling TLS for exporter via WebConfig
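
A minimal web config sketch for the TLS option, assuming the exporter follows the Prometheus exporter-toolkit web configuration format (the WebConfig naming suggests so); the file path is passed via SQUID_EXPORTER_WEB_CONFIG_PATH, and the certificate paths below are placeholders:

tls_server_config:
  cert_file: /path/to/server.crt
  key_file: /path/to/server.key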

Usage:

Simple usage:

squid-exporter -squid-hostname "localhost" -squid-port 3128

Configure Prometheus to scrape metrics from localhost:9301/metrics

- job_name: squid
  # squid-exporter is installed, grab stats about the local
  # squid instance.
  target_groups:
    - targets: ['localhost:9301']
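
Note that target_groups is an older Prometheus configuration key; on recent Prometheus versions the equivalent scrape job uses static_configs:

- job_name: squid
  static_configs:
    - targets: ['localhost:9301']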

To list all available parameters, run the command below. Command-line arguments always override defaults and environment-variable settings:

squid-exporter -help

The following environment variables can be used to override default parameters:

SQUID_EXPORTER_LISTEN
SQUID_EXPORTER_WEB_CONFIG_PATH
SQUID_EXPORTER_METRICS_PATH
SQUID_HOSTNAME
SQUID_PORT
SQUID_LOGIN
SQUID_PASSWORD
SQUID_EXTRACTSERVICETIMES
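
For example, the same setup as above expressed with environment variables instead of flags (a sketch, assuming the binary is on your PATH):

SQUID_HOSTNAME="localhost" SQUID_PORT="3128" SQUID_EXPORTER_LISTEN=":9301" squid-exporter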

Usage with docker:

Basic setup assuming Squid is running on the same machine:

docker run --net=host -d boynux/squid-exporter

Setup with Squid running on a different host

docker run -p 9301:9301 -d boynux/squid-exporter -squid-hostname "192.168.0.2" -squid-port 3128 -listen ":9301"

With environment variables

docker run -p 9301:9301 -d -e SQUID_PORT="3128" -e SQUID_HOSTNAME="192.168.0.2" -e SQUID_EXPORTER_LISTEN=":9301" boynux/squid-exporter

Build:

This project is written in Go, so all the usual methods for building (or cross compiling) a Go application would work.

If you are not very familiar with Go you can download the binary from releases.

Or build it for your OS:

go install github.com/boynux/squid-exporter@latest

then you can find the binary in $GOPATH/bin/squid-exporter (or $GOBIN, if set).
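
For example, to cross-compile from a source checkout (a generic Go build sketch; adjust GOOS/GOARCH for your target platform):

git clone https://github.com/boynux/squid-exporter
cd squid-exporter
GOOS=linux GOARCH=arm64 go build -o squid-exporter .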

Features:

  • Expose Squid counters
    • Client HTTP
    • Server HTTP
    • Server ALL
    • Server FTP
    • Server Other
    • ICP
    • CD
    • Swap
    • Page Faults
    • Others
  • Expose Squid service times
    • HTTP requests
    • Cache misses
    • Cache hits
    • Near hits
    • Not-Modified replies
    • DNS lookups
    • ICP queries
  • Expose squid Info
    • Squid service info (as label)
    • Connection information for squid
    • Cache information for squid
    • Median Service Times (seconds) 5 min
    • Resource usage for squid
    • Memory accounted for
    • File descriptor usage for squid
    • Internal Data Structures
  • Histograms
  • Other metrics
  • Squid Authentication (Basic Auth)
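
To check which of the metrics listed above your setup actually exposes, you can query the exporter endpoint directly, for example:

curl -s http://localhost:9301/metrics | grep '^squid_'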

FAQ:

  • Q: Metrics are not reported by the exporter

  • A: That usually means the exporter cannot reach the Squid server, or the cache manager permissions are not set correctly. To debug and mitigate:

    • First, make sure the exporter service can reach the Squid server's IP address (you can use telnet to test this; see the connectivity check after this list)
    • Make sure you allow the exporter to query the Squid cache manager. In squid.conf you will need something like this (172.20.0.0/16 is the exporter's network; you can also use a single IP if needed):
    #http_access allow manager localhost
    acl prometheus src 172.20.0.0/16
    http_access allow manager prometheus
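
    A quick connectivity check, assuming Squid listens on 192.168.0.2:3128 (replace with your own address and port):
    telnet 192.168.0.2 3128
    nc -vz 192.168.0.2 3128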
    
  • Q: Why is the process_open_fds metric not exported?

  • A: This usually means the exporter doesn't have permission to read the /proc/<squid_proc_id>/fd folder. You can either:

  1. [recommended] Set the CAP_DAC_READ_SEARCH capability for the squid-exporter process (or container), e.g. sudo setcap 'cap_dac_read_search+ep' ./bin/squid-exporter
  2. [not recommended] Run the exporter as root.

Contribution:

Pull requests and issues are very welcome.

If you found this program useful, please consider a donation.

Copyright:

MIT License


squid-exporter's Issues

docker image for arm64 platform

Hi,

I am running Linux staging_proxy 4.14.309-231.529.amzn2.aarch64 #1 SMP Tue Mar 14 23:45:16 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux

Please let me know whether a Docker image of squid-exporter for aarch64 will be published to Docker Hub.

docker run -p 9301:9301 -d boynux/squid-exporter -squid-hostname "staging_proxy" -squid-port 3128 -listen ":9301"

WARNING: The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
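
Until a multi-arch image is published, one workaround is to build the image locally for arm64 (a sketch, assuming Docker Buildx is available and the repository's Dockerfile builds for that platform):

docker buildx build --platform linux/arm64 -t squid-exporter:arm64 https://github.com/boynux/squid-exporter.git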

Repetitive errors on Time Percentiles

Hello, I've just installed version 1.9.1 and I get some errors:

servicec times - could not parse line: Service Time Percentiles 5 min 60 min:

I had no problem with version 1.8.3.

OS: debian 10
Squid version: 4.6-1+deb10u4
squid exporter version: 1.9.1

ERROR: cannot find package "https:/github.com/boynux/squid-exporter\x03\x03" in any of:

Describe the bug
Installing squid-exporter using go install
cannot find package "https:/github.com/boynux/squid-exporter\x03\x03" in any of:
/home/i567209/src/https:/github.com/boynux/squid-exporter (from $GOROOT)
/src/https:/github.com/boynux/squid-exporter (from $GOPATH)

To Reproduce
Happens every time when using go install

Expected behavior
installation complete

OS (please complete the following information):
SLES 12SP5

Additional context
Go version: go1.19.3 linux/amd64
go env
GO111MODULE="off"
GOARCH="amd64"
GOBIN=""
GOCACHE="/root/.cache/go-build"
GOENV="/root/.config/go/env"
GOEXE=""
GOEXPERIMENT=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOINSECURE=""
GOMODCACHE="/pkg/mod"
GONOPROXY=""
GONOSUMDB=""
GOOS="linux"
GOPATH="/"
GOPRIVATE=""
GOPROXY="https://proxy.golang.org,direct"
GOROOT="/home/i567209"
GOSUMDB="sum.golang.org"
GOTMPDIR=""
GOTOOLDIR="/home/i567209/pkg/tool/linux_amd64"
GOVCS=""
GOVERSION="go1.19.3"
GCCGO="gccgo"
GOAMD64="v1"
AR="ar"
CC="gcc"
CXX="g++"
CGO_ENABLED="1"
GOMOD=""
GOWORK=""
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -Wl,--no-gc-sections -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build3038272853=/tmp/go-build -gno-record-gcc-switches"

Website is down and lack of documentation

My employer wants to use your container as a sidecar in our Kubernetes cluster. We receive all sorts of metrics in our Prometheus cluster, but we are struggling to decipher the meaning of some of them, specifically the almost 100 squid_Cache_Hits_X and squid_Cache_Misses_X metrics.

One of the things we need to create is a hit-and-miss graph, but it is unclear which hits or misses we need to pick and what the number behind the metric means.

It would help a lot if we had documentation on what the number behind these metrics means; neither a Google search nor analyzing the source code explains it.

panic: runtime error: invalid memory address or nil pointer dereference

Describe the bug
Something bad happened during a memory access

To Reproduce
¯\_(ツ)_/¯

Expected behavior
Run forever without memory exception

OS (please complete the following information):

  • OS: Debian
  • Version 10

Additional context

prometheus-squid-exporter[75168]: panic: runtime error: invalid memory address or nil pointer dereference
prometheus-squid-exporter[75168]: [signal SIGSEGV: segmentation violation code=0x1 addr=0x48 pc=0x75cc20]
prometheus-squid-exporter[75168]: goroutine 2221 [running]:
prometheus-squid-exporter[75168]: github.com/boynux/squid-exporter/collector.(*CacheObjectClient).GetCounters(0xc000085800, 0x0, 0x85fc2a, 0x5, 0xc00
prometheus-squid-exporter[75168]:         github.com/boynux/squid-exporter/collector/client.go:65 +0xf0
prometheus-squid-exporter[75168]: github.com/boynux/squid-exporter/collector.(*Exporter).Collect(0xc0000858f0, 0xc0000902a0)
prometheus-squid-exporter[75168]:         github.com/boynux/squid-exporter/collector/metrics.go:62 +0x69
prometheus-squid-exporter[75168]: github.com/prometheus/client_golang/prometheus.(*Registry).Gather.func1()
prometheus-squid-exporter[75168]:         github.com/prometheus/client_golang/prometheus/registry.go:430 +0x193
prometheus-squid-exporter[75168]: created by github.com/prometheus/client_golang/prometheus.(*Registry).Gather
prometheus-squid-exporter[75168]:         github.com/prometheus/client_golang/prometheus/registry.go:522 +0xe23
systemd[1]: prometheus-squid-exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
systemd[1]: prometheus-squid-exporter.service: Failed with result 'exit-code'.

Make config simpler

Usually you'd have a single flag for host:port; the code is just joining them together again, so you could make it a little simpler by doing so.

Merge --listen-port and --listen-address parameters to one.

Suggestion from Prometheus community, more info: prometheus/docs#1154

Option for non-proxy authentication redux

I've got the same problem that came up in #29, namely that squid 3.5.20 (in my case) will not take the authentication password from squid_exporter.

The PR associated with the previous issue was closed without merging, so the problem is still around.

exporter does not recover after squid has restarted

If Squid has been restarted, the exporter stops working; it then has to be restarted in order for metrics to be collected again.

ERROR
The requested URL could not be retrieved
The following error was encountered while trying to retrieve the URL: http://172.17.43.48:9301/metrics

Connection to xxxxxxxx failed.

The system returned: (111) Connection refused

The remote host or network may be down. Please try the request again.

Your cache administrator is webmaster.

Binary in 1.4 release does not match the code

Describe the bug
The binary in the 1.4 release lacks the squid_up metric, which is present in the release tar.

To Reproduce
Run the binary and check for squid_up

Expected behavior
The metric should be available.

OS (please complete the following information):

  • OS: Ubuntu
  • Version 16.04

Get metrics from differents Squid Proxy servers

Hi!!!

This is more a question than a bug report. Is there any way to get metrics from different Squid proxy servers with the same exporter instance, or is it mandatory to run an exporter for each proxy server?

Thank you in advance!!!

Deploying squid exporter with replicated squid instances

First, thanks for working on this project.

This is not really a bug, but rather a question.

I am currently deploying squid on kubernetes, and therefore I was considering increasing the replicas of the deployment in order to scale it up horizontally.

By looking at the code (e.g., https://github.com/boynux/squid-exporter/blob/master/collector/client.go#L165), it seems like it's meant to be deployed "together" with the squid instance (e.g., with a sidecar container).

But how do you think it would make sense to configure it in such a use case, where I want to increase the replicas of a single squid deployment? Should metrics of each replica be gathered and then aggregated in order to have a single number for the entire deployment? I am aware of the fact that this is probably not supported at the moment; I am just wondering if such an approach could make sense at all.

Thanks.

Add build instructions to README

Would it be possible to have some sort of build instructions on creating the binary?

I need to use the exporter but I don't use Go. I installed Go on two different machines and attempted to compile, but just get loads of errors around vendoring and other noise, example:

[root@dev01 squid-exporter]$make
go test -v ./...
vendor/github.com/prometheus/common/expfmt/encode.go:23:2: use of internal package not allowed
make: *** [test] Error 1

squid-exporter on kubernetes fails detect squid

squid-exporter is able to export metrics from a VM where Squid is running; when I try the same in Kubernetes infra, it fails.

I use an Ubuntu Docker image, install Squid and the Prometheus squid exporter, and expose ports 3128 and 9103. In entrypoint.sh:

#!/bin/bash
/usr/sbin/squid -f /etc/squid/squid.conf -NYCd 1
sleep 10
nohup /usr/bin/prometheus-squid-exporter -squid-hostname "localhost" -squid-port 3128 -listen ":9301" &

When I curl from another pod in the same network, I get other metrics exported, but nothing Squid-related.

example:

TYPE process_virtual_memory_bytes gauge

process_virtual_memory_bytes 1.106550784e+09

HELP process_virtual_memory_max_bytes Maximum amount of virtual memory available in bytes.

TYPE process_virtual_memory_max_bytes gauge

process_virtual_memory_max_bytes 1.8446744073709552e+19

HELP promhttp_metric_handler_requests_in_flight Current number of scrapes being served.

TYPE promhttp_metric_handler_requests_in_flight gauge

promhttp_metric_handler_requests_in_flight 1

TYPE promhttp_metric_handler_requests_total counter

promhttp_metric_handler_requests_total{code="200"} 4
promhttp_metric_handler_requests_total{code="500"} 0
promhttp_metric_handler_requests_total{code="503"} 0

HELP squid_exporter_build_info A metric with a constant '1' value labeled by version, revision, branch, and goversion from which squid_exporter was built.

TYPE squid_exporter_build_info gauge

squid_exporter_build_info{branch="debian/sid",goversion="go1.17.3",revision="1.10.0+ds-1",version="1.10.0+ds"} 1

HELP squid_up Was the last query of squid successful?

TYPE squid_up gauge

squid_up{host="localhost"} 0

Here I am not running Squid as a service; it is a command. Maybe that is why squid-exporter does not detect the Squid process. I also tried giving the IP address (allowed by acl in squid.conf), but it didn't work.

I am not sure how a sidecar can work here; the pod IP keeps changing when the pod restarts or is redeployed.

OS (please complete the following information):

  • OS: Ubuntu
  • Version: 22.04

Docker Hub tags problem

I wanted to highlight an issue on Docker Hub about the tags.

On Docker Hub there is no v1.10.0 tag, even though there is a version in GitHub with this tag.
The digest of the "latest" tag does not correspond to the latest tag (v1.9.4) available on Docker Hub; I assume "latest" is actually v1.10.0.
It would be useful to be able to use version v1.10.0 without relying on the "latest" tag.

Binary missing in last release v1.10.4

Hello,

I'm facing an unexpected 404 error when trying to download the latest release the same way I was doing before (I'm using a script to automate my deployments).

Thank you, regards.

Describe the bug
The binary is missing in the latest release, v1.10.4.

To Reproduce
Try to download the release using the URL https://github.com/boynux/squid-exporter/releases/download/v1.10.4/squid-exporter
and get a 404 error instead of the squid-exporter binary.

Expected behavior
Download the squid-exporter binary like the other versions (for instance https://github.com/boynux/squid-exporter/releases/download/v1.9.4/squid-exporter).

OS (please complete the following information):

  • OS: Ubuntu 22.04
  • Version v1.10.4

Additional context
Not really a bug but an incomplete release.

Could not fetch metrics from squid instance unexpected EOF

Describe the bug
Getting the following error with Squid version 5.7 with require-proxy-header set.
squid_exporter fails to read Squid metrics and returns the following error:

systemctl status squid_exporter

Aug 01 14:50:03 xxxx.xx.xx.local squid-exporter[3883]: 2023/08/01 14:50:03 Could not fetch metrics from squid instance: unexpected EOF

When I look at the Squid log, I see the following error; maybe this is the cause.

error:transaction-end-before-headers NONE/0.0" 0 0 NONE_NONE:HIER_NONE

logs

curl localhost:19101 (where the squid exporter is listening) is missing the Squid metrics:

HELP promhttp_metric_handler_requests_total Total number of scrapes by HTTP status code.

TYPE promhttp_metric_handler_requests_total counter

promhttp_metric_handler_requests_total{code="200"} 13214
promhttp_metric_handler_requests_total{code="500"} 0
promhttp_metric_handler_requests_total{code="503"} 0

HELP squid_exporter_build_info A metric with a constant '1' value labeled by version, revision, branch, and goversion from which squid_exporter was built.

TYPE squid_exporter_build_info gauge

squid_exporter_build_info{branch="HEAD",goversion="go1.13.8",revision="c60ca5a56e34783af00b8d7f959bc18d0d6aefa6",version="1.8.3"} 1

HELP squid_up Was the last query of squid successful?

TYPE squid_up gauge

squid_up{host="localhost"} 0

To Reproduce
Steps to reproduce the behavior:
make a request to a destination via the Squid proxy using a load balancer,
then check the squid exporter log and notice the error.

Expected behavior
A clear and concise description of what you expected to happen.

OS (please complete the following information):

  • OS: Centos
  • Version 7


Support proxy protocol access

If Squid has the require-proxy-header feature enabled in the configuration, such as

  http_port 3128 require-proxy-header
  proxy_protocol_access allow localnet

then the exporter cannot get the metrics data (the request will be blocked).

The way to resolve this is to add the PROXY protocol header when requesting the cache manager, e.g. (the proxyproto package used below appears to be github.com/pires/go-proxyproto):

......
	conn, err := connect(hostname, port)

	if err != nil {
		return nil, err
	}

	// set proxy proto header (version 1)
	// from: localhost:80
	// to: localhost: <port>
	header := &proxyproto.Header{
		Version:           1,
		Command:           proxyproto.PROXY,
		TransportProtocol: proxyproto.TCPv4,
		SourceAddr: &net.TCPAddr{
			IP:   net.ParseIP("127.0.0.1"),
			Port: 80,
		},
		DestinationAddr: &net.TCPAddr{
			IP:   net.ParseIP("127.0.0.1"),
			Port: port,
		},
	}
	// After the connection was created write the proxy headers first
	_, err = header.WriteTo(conn)
	if err != nil {
		return nil, err
	}

......

I did this in my fork at https://github.com/maxwheel/squid-exporter, and I wonder if you would like to support it in the future. :)
Many thanks!

Add option to use non-proxy authorization

With this Squid version 3.5.27 configuration:

acl manager proto cache_object
http_access allow localhost manager
http_access deny manager
http_access allow localhost
cachemgr_passwd <password> all

When running squid-exporter from localhost, it is necessary to pass the cachemgr password using the Authorization header instead of Proxy-Authorization for a successful counter collection.

It would be great if we could optionally pass both Authorization credentials as well as Proxy-Authorization, with separate command-line arguments.
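
For illustration, the difference is only which header carries the credentials; a hypothetical sketch, with the value being the base64 encoding of user:password:

Proxy-Authorization: Basic dXNlcjpwYXNzd29yZA==
Authorization: Basic dXNlcjpwYXNzd29yZA==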

Outdated version in binary

Describe the bug
The last 4 releases show the exact same version (1.8) in the -version flag and in the squid_exporter_build_info metric, as the VERSION file hasn’t been updated before releases.

To Reproduce
squid_exporter_build_info metric has the version label set to 1.8 on newer releases; the -version flag shows the same.

Expected behavior
Both the metric and the flag should show the actual release (e.g. "1.8.3").

Additional context
Outdated VERSION file

per requested client fqdn/url metrics

Thanks a lot for contributing this great work to the community and maintaining it over time!

Describe the feature

I'm trying to get stats per requested FQDN (in the case of a CONNECT request) or Urls (in the case plain HTTP request) such as:

  • number of client requests
  • number of client kbytes received
  • number of client kbytes transferred
  • cache hits per url

Currently, I understand that the available metrics don't have labels with FQDNs or URLs.

I'm not yet so familiar with what Squid offers in terms of stats/metrics/reports that could be used here. I did the research in the documentation below to learn a bit more.

Is there prior work on this topic?

Expected behavior

A new flag passed to the exporter to turn on a feature which adds metric labels with the client-requested FQDN or URL.

To avoid a Prometheus cardinality explosion, the flag could select the top k FQDNs/URLs to surface as labels and group the long tail into an "other" category.

Additional context

Research in squid documentation about per FQDN/Url stats/report available

A Cache Digest is a summary of the contents of an Internet Object Caching Server. It contains, in a compact (i.e. compressed) format, an indication of whether or not particular URLs are in the cache.

Enabling Cache Digests
If you wish to use Cache Digests (available in Squid version 2) you need to add a configure option, so that the relevant code is compiled in:
./configure --enable-cache-digests ...

the keys which are looked up in Cache Digests are actually formed by performing the MD5 [RFC 1321] digest function on the concatenation of:

  1. a numeric code for the HTTP method used, and
  2. the URL requested.
Squid report content

This is an example from a default build of Squid-3.2. Remember the menu varies with available features.

index Cache Manager Interface public
menu Cache Manager Menu public
offline_toggle Toggle offline_mode setting hidden
shutdown Shut Down the Squid Process hidden
reconfigure Reconfigure Squid hidden
rotate Rotate Squid Logs hidden
pconn Persistent Connection Utilization Histograms public
mem Memory Utilization public
diskd DISKD Stats public
squidaio_counts Async IO Function Counters public
config Current Squid Configuration hidden
comm_epoll_incoming comm_incoming() stats public
ipcache IP Cache Stats and Contents public
fqdncache FQDN Cache Stats and Contents public
idns Internal DNS Statistics public
redirector URL Redirector Stats public
external_acl External ACL stats public
http_headers HTTP Header Statistics public
info General Runtime Information public
service_times Service Times (Percentiles) public
filedescriptors Process Filedescriptor Allocation public
objects All Cache Objects public
vm_objects In-Memory and In-Transit Objects public
io Server-side network read() size histograms public
counters Traffic and Resource Counters public
peer_select Peer Selection Algorithms public
digest_stats Cache Digest and ICP blob public
5min 5 Minute Average of Counters public
60min 60 Minute Average of Counters public
utilization Cache Utilization public
histograms Full Histogram Counts public
active_requests Client-side Active Requests public
username_cache Active Cached Usernames public
openfd_objects Objects with Swapout files open public
store_digest Store Digest public
store_log_tags Histogram of store.log tags public
storedir Store Directory Stats public
store_io Store IO Interface Stats public
store_check_cachable_stats storeCheckCachable() Stats public
refresh Refresh Algorithm Statistics public
forward Request Forwarding Statistics public
cbdata Callback Data Registry Contents public
events Event Queue public
client_list Cache Client List public
asndb AS Number Database public
carp CARP information public
userhash peer userhash information public
sourcehash peer sourcehash information public
server_list Peer Cache Statistics public
config Current Squid Configuration hidden
store_log_tags Histogram of store.log tags public

https://wiki.squid-cache.org/Features/CacheManager/Index

Cache Manager objects or reports

The following table details SMP support for each Cache Manager object or report. Unless noted otherwise, an aggregated statistics is either a sum, arithmetic mean, minimum, or maximum across all kids, as appropriate to represent the “whole Squid” view.

Name Component Aggregated? Comments
menu all yes  
info Number of clients accessing cache yes, poorly Coordinator sums up the number of clients reported by each kid, which is usually wrong because most active clients will use more than one worker, leading to exaggerated values. Note that even without SMP, this statistics is exaggerated because the count goes down when Squid cleans up the internal client table and not when the last client connection closes. SMP amplifies that effect.
  UP Time yes The maximum uptime across all kids is reported
  other yes  
server_list all no, but can be If you work on aggregating these stats, please keep in mind that kids may have a different set of peers. The to-Coordinator responses should include, for each peer, a peer name and not just its “index”
mem all no, but can be If you work on aggregating these stats, please keep in mind that kids may have a different set of memory pools. The to-Coordinator responses should include, for each pool, a pool name and not just its “index”. Full stats may exceed typical UDS message size limits (16KB). If overflows are likely, it may be a good idea to create response messages so that overflowing items are not included (in the current sort order). Another alternative is to split mgr:mem into mgr:mem (with various aggregated totals) and mgr:pools (with non-aggregated per-pool details).
counters sample_time yes The latest (maximum) sample time across all kids is reported
refresh all no, but can be  
idns queue no and should not be The kids should probably report their own queues, especially since DNS query IDs are kid-specific.
  other no, but can be If you work on aggregating these stats, please keep in mind that kids may have a different set of name servers. The to-Coordinator responses should include, for each name server, a server address and not just its “index”.
histograms all no, but can be If you work on aggregating these stats, please keep typical UDS message size limits (16KB) in mind.
5min sample_start_time yes The earliest (minimum) sample time across all kids is reported
  sample_end_time yes The latest (maximum) sample time across all kids is reported.
  median yes, approximately The arithmetic mean over kids medians is reported. This is not a true median. True median reporting is possible but would require adding code to exchange and aggregate raw histograms.
  other yes  
60min all   See 5min rows for component details.
utilization all no, but can be If you work on aggregating these stats, please reuse or mimic mgr:5min/60min aggregation code.
other all varies TBD. In general, statistics inside "by kidK {...}" blobs are not aggregated while all others are.

[Proposal] A Squid Client Info gauge

Hi,

I have a use case very similar to the one described here [0].

In my scenario, I need to use a client_ip from the Squid logs in concert with a Prometheus metrics query within the same Grafana panel, which is currently impossible, as described in the discussion [0] and with some more evidence of the issue here [1].

In order to eliminate, within the context of squid-exporter, the need for a mixed-datasource/dashboard-datasource query [2] entirely, I want to propose exporting a squid_client_info gauge quite similar in its properties to kube_pod_info [3] exported by kube-state-metrics [4].

This gauge must contain the HTTP source IP, but it may also be useful to add some other useful HTTP information such as user-agent and/or response code.
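
For illustration, such a gauge might look like this (hypothetical label names, not an existing metric):

squid_client_info{client_ip="10.1.2.3",user_agent="curl/7.88.1",response_code="200"} 1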

Your thoughts?


[0]: grafana/grafana#68516
[1]: grafana/grafana#63866 (comment)
[2]: https://grafana.com/docs/grafana/latest/datasources/#special-data-sources
[3]: https://github.com/kubernetes/kube-state-metrics/blob/main/docs/pod-metrics.md?plain=1#L6
[4]: https://github.com/kubernetes/kube-state-metrics

Version of Go used has critical vulnerability findings

Describe the bug

Go version 1.17.7 has a number of High, Medium, and Low security vulnerability findings against it. Many can be resolved by upgrading to version 1.17.13, but a handful require upgrading to version 1.19.7+ or 1.20.2+ to resolve.

To Reproduce

Scan the image with an image-based security scanner. We specifically used Twistlock, but I imagine other scanners will yield similar results.

Expected behavior

An image scan that finds no vulnerabilities.

OS (please complete the following information):
N/A

Additional context

See the attached file for the vulnerability report that our image vulnerability scanner generated.

vuln-report.txt

Statistics are gathered from the main process instead of a child

Describe the bug
bb147c8

I think there's not much point in getting such statistics (max/open files, memory/CPU) from the main process, because all the work is done by the child.

Moreover, statistics about open files and some of the mem/CPU stats could be taken from cache_object.
For example, with squidclient, add this to the config:

acl manager proto cache_object
http_access allow manager localhost
http_access deny manager

do:

squidclient  mgr:info

Securely Export

Describe the bug
Just would like to know if there is any plan to incorporate SSL/TLS into this exporter.

To Reproduce
None

Expected behavior
Allow setting up secure connection to Prometheus server

OS (please complete the following information):

  • OS: [e.g. Ubuntu]
  • Version [e.g. v0.4]


squid-exporter tagged release doesn't have some fixes

I've downloaded the newly released binary, but it does not seem to have the typo fix:

$ wget https://github.com/boynux/squid-exporter/releases/download/v0.2/squid-exporter
..... redacted
$ chmod 755 squid-exporter
$ ./squid-exporter -h
Usage of ./squid-exporter:
  -listern-address string
    	Address to bind exporter (default "127.0.0.1")
  -listern-port int
    	Port to bind exporter (default 9301)
  -metrics-path string
    	Metrics path to expose prometheus metrics (default "/metrics")
  -squid-hostname string
    	Squid hostname (default "localhost")
  -squid-port int
    	Squid port to read metrics (default 3128)

So I'm not sure why #2 isn't included in the release

Connection Refused on port 9301

Describe the bug

When setting up Squid Exporter as a sidecar to Squid on AWS Managed Kubernetes (EKS), the exporter started up fine and was reachable via a service. Nevertheless, the liveness probe on TCP or HTTP always failed with:

Liveness probe failed: dial tcp x.x.x.2:9301: connect: connection refused

To Reproduce

Terraform deployment:

resource "kubernetes_deployment" "squid-proxy" {
  metadata {
    name = var.app-name
    labels = {
      app        = var.app-name
    }
    namespace = kubernetes_namespace.staging-proxy.metadata[0].name
  }

  spec {
    replicas = 3
    strategy {
      rolling_update {
        max_unavailable = "1"
      }
    }

    selector {
      match_labels = {
        app = var.app-name
      }
    }

    template {
      metadata {
        labels = {
          app        = var.app-name
          pipelineid = var.pipeline_label
        }
        annotations = {
          pipelineid = var.pipeline_label
          allowlist  = local.allowlist_sha1
          squidconfig = local.squid_config_sha1
        }
      }
      spec {
        container {
          image = "xxx.amazonaws.com/internet-proxy:${var.image-tag}"
          name  = "squid"
          resources {
            limits = {
              cpu    = "1"
              memory = "1Gi"
            }
            requests = {
              cpu    = "250m"
              memory = "512Mi"
            }
          }
          port {
            container_port = 3128
          }
          volume_mount {
            mount_path = "/etc/squid/squid-allowlist"
            name       = "allowlist"
            read_only  = true
          }
          volume_mount {
            mount_path = "/etc/squid/squid.conf"
            sub_path    = "squid.conf"
            name       = "squid-config"
            read_only  = true
          }
          liveness_probe {
            tcp_socket {
              port = "3128"
            }
          }
          readiness_probe {
            exec {
              command = ["squidclient", "-h", "localhost", "cache_object://localhost/counters"]
            }
          }
        }
        container {
          image = "xxx.amazonaws.com/squid-exporter:latest"
          name  = "squid-exporter"
          resources {
            limits = {
              cpu    = "200m"
              memory = "1Gi"
            }
            requests = {
              cpu    = "100m"
              memory = "212Mi"
            }
          }
          port {
            container_port = 3129
            name = "metrics"
          }
          env {
            name = "SQUID_HOSTNAME"
            value = "127.0.0.1"
          }
          env{
            name = "SQUID_PORT"
            value = "3128"
          }
      #    env{
      #      name = "SQUID_EXPORTER_LISTEN"
       #     value = ":3129"
       #   }

          liveness_probe {
            tcp_socket {
              port = 3129
            }
            failure_threshold = 2
            period_seconds = 15
            initial_delay_seconds = 15
          }


        }
        volume {
          name = "allowlist"
          config_map {
            name = kubernetes_config_map.allow-list.metadata[0].name
          }
        }
        volume {
          name = "squid-config"
          config_map {
            name = kubernetes_config_map.squid-config.metadata[0].name
          }
        }
      }
    }
  }
}

Expected behavior
Liveness probe should work

OS (please complete the following information):

  • OS: Amazon Linux for EKS 1.21

Solution

          env {
            name  = "SQUID_EXPORTER_LISTEN"
            value = ":3129"
          }

Notes

Without specifying the exporter port, the log said: listening on "10.11.12.13:9301" (node IP changed for this post).
With the exporter port specified, the log said: listening on ":3129".

get the speed of proxy

I want to get the speed of the Squid proxy; are there any plans to add this metric to the squid exporter?
If not, how can I calculate it from the existing metrics?
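
One common approach is to derive throughput from a bytes counter with a PromQL rate(); a hypothetical sketch, assuming a client kbytes-out counter is exported (check the exact metric name on your /metrics endpoint):

rate(squid_client_http_kbytes_out[5m]) * 1024 * 8

which would approximate client-side throughput in bits per second.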

[FEATURE REQUEST] Allow defining the host in an environment variable or evaluate environment variables in the command

My use case is that I have 1..n instances of squid running with hostnames such as squid-1, squid-2, etc.

As I will have one squid-exporter per squid, and squid-exporter is aware of its instance number, I'm trying to add the instance number as a parameter to the Docker command:
command: -squid-hostname squid-$INSTANCE_NUMBER -squid-port 3128 -listen :9301

This results in the following log:

2019-03-06T11:00:47.434Z [squid.exporter-1]: 2019/03/06 11:00:47 Scraping metrics from squid-$INSTANCE_NUMBER:3128
2019-03-06T11:00:47.435Z [squid.exporter-1]: 2019/03/06 11:00:47 Listening on :9301
2019-03-06T11:01:18.711Z [squid.exporter-1]: 2019/03/06 11:01:18 Could not fetch metrics from squid instance:  dial tcp: lookup squid-$INSTANCE_NUMBER: no such host

So I think I could overcome this limitation by being able to define the squid hostname as an environment variable, or alternatively by having the variable evaluated before it is passed to the squid-exporter binary.
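
In fact, the README above documents a SQUID_HOSTNAME environment variable; a sketch of the same setup using it (Compose-style notation, assuming the orchestrator expands $INSTANCE_NUMBER when defining environment variables):

environment:
  SQUID_HOSTNAME: "squid-${INSTANCE_NUMBER}"
  SQUID_PORT: "3128"
  SQUID_EXPORTER_LISTEN: ":9301"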

Bump Go version to 1.13

I'm considering submitting a PR to bump the Go version used to compile squid-exporter to 1.13. Since Go modules are already being used in the project, and Go 1.13 started using them instead of vendored dependencies by default, I would like your opinion about deleting the vendor directory. In my opinion, it shouldn't be needed anymore. WDYT?

Add option to add prefix to metrics names...

The currently exported metric names are fairly generic, and in a large Prometheus setup this could cause clashes between different systems.

It might be nice to have an option --prefix=<string> that would add a prefix to the beginning of the metrics.

could not parse line

I have set up the exporter and am getting a lot of console messages every time I hit the metrics endpoint http://xxxxx:9301/metrics:

2018/07/06 11:04:45 could not parse line
2018/07/06 11:04:45 could not parse line
2018/07/06 11:04:45 could not parse line
2018/07/06 11:04:45 could not parse line
2018/07/06 11:04:45 could not parse line
2018/07/06 11:04:46 could not parse line
2018/07/06 11:04:47 could not parse line
2018/07/06 11:04:48 could not parse line
2018/07/06 11:04:48 could not parse line
2018/07/06 11:04:48 could not parse line
2018/07/06 11:04:49 could not parse line
2018/07/06 11:04:49 could not parse line
2018/07/06 11:04:49 could not parse line
2018/07/06 11:04:50 could not parse line

Any idea what could be causing this, or additional debugging I could enable to assist?

Add documentation about http_access allow manager

First, thanks for this project! Saved me much time.

I ran into an issue where squid_up was returning 1, but I got no other squid_* items in my exported metrics.

My issue was that Squid was running in a container, as was the squid exporter. Thus I needed to open up some more access in my squid.conf:

#http_access allow manager localhost
acl prometheus src 172.20.0.0/16
http_access allow manager prometheus

I thought this might be helpful in the readme as a requirement or a troubleshooting FAQ.

servicec times - could not parse line: Service Time Percentiles

When I fire up the exporter and try to scrape, I get the following in the logs:

2020/08/11 10:14:59 servicec times - could not parse line: Service Time Percentiles            5 min    60 min:

Output of squidclient mgr:info:

Squid Object Cache: Version 4.10
Build Info: Ubuntu linux
Service Name: squid
Start Time:	Mon, 27 Jul 2020 12:47:38 GMT
Current Time:	Tue, 11 Aug 2020 10:16:13 GMT
Connection information for squid:
	Number of clients accessing cache:	4
	Number of HTTP requests received:	3126
	Number of ICP messages received:	0
	Number of ICP messages sent:	0
	Number of queued ICP replies:	0
	Number of HTCP messages received:	0
	Number of HTCP messages sent:	0
	Request failure ratio:	 0.00
	Average HTTP requests per minute since start:	0.1
	Average ICP messages per minute since start:	0.0
	Select loop called: 3595989 times, 357.875 ms avg
Cache information for squid:
	Hits as % of all requests:	5min: 0.0%, 60min: 0.0%
	Hits as % of bytes sent:	5min: 78.3%, 60min: 29.3%
	Memory hits as % of hit requests:	5min: 0.0%, 60min: 0.0%
	Disk hits as % of hit requests:	5min: 0.0%, 60min: 0.0%
	Storage Swap size:	0 KB
	Storage Swap capacity:	 0.0% used,  0.0% free
	Storage Mem size:	216 KB
	Storage Mem capacity:	 0.1% used, 99.9% free
	Mean Object Size:	0.00 KB
	Requests given to unlinkd:	0
Median Service Times (seconds)  5 min    60 min:
	HTTP Requests (All):   0.00000  0.00000
	Cache Misses:          0.00000  0.00000
	Cache Hits:            0.00000  0.00000
	Near Hits:             0.00000  0.00000
	Not-Modified Replies:  0.00000  0.00000
	DNS Lookups:           0.00000  0.00394
	ICP Queries:           0.00000  0.00000
Resource usage for squid:
	UP Time:	1286914.932 seconds
	CPU Time:	102.564 seconds
	CPU Usage:	0.01%
	CPU Usage, 5 minute avg:	0.02%
	CPU Usage, 60 minute avg:	0.02%
	Maximum Resident Size: 118800 KB
	Page faults with physical i/o: 5
Memory accounted for:
	Total accounted:         2603 KB
	memPoolAlloc calls:   3129793
	memPoolFree calls:    3133735
File descriptor usage for squid:
	Maximum number of file descriptors:   1024
	Largest file desc currently in use:     26
	Number of file desc currently in use:   18
	Files queued for open:                   0
	Available number of file descriptors: 1006
	Reserved number of file descriptors:   100
	Store Disk files open:                   0
Internal Data Structures:
	    53 StoreEntries
	    53 StoreEntries with MemObjects
	     1 Hot Object Cache Items
	     0 on-disk objects

This Squid instance is used as a forward proxy that I want to monitor, so it has no backends, in case that might be the culprit.

Failed to scrape metrics due to scrape timeout

Describe the bug
I set up a scrape job for the squid exporter and get scrape timeouts occasionally.

How can I troubleshoot this? I see no logs in either the squid exporter log or the Squid log.

What could be the cause of this timeout, and what do you suggest setting the timeout to?

To Reproduce
Start an exporter to export squid metrics.

Expected behavior
Scrape successfully, with no timeout.

Actual: scraping is quite slow; response time can sometimes go up to 10 s or more.

OS (please complete the following information):

  • OS: gentoo Linux misaka1 5.9.12-gentoo #2 SMP Tue Dec 8 11:59:44 CST 2020 x86_64 AMD EPYC 7K62 48-Core Processor AuthenticAMD GNU/Linux
  • Version 1.10.3


squid_process_open_fds metrics not getting exposed.

Describe the bug
I provided the PID file, but the squid_process_open_fds metric is not exposed. The metrics below are exposed, all except squid_process_open_fds.

squid_process_cpu_seconds_total
squid_process_max_fds
squid_process_resident_memory_bytes
squid_process_start_time_seconds
squid_process_virtual_memory_bytes
squid_process_virtual_memory_max_bytes

To Reproduce
You can enable the pid file option and test it.

Expected behavior
Expose squid_process_open_fds metrics

OS (please complete the following information):

  • OS: SUSE SLES12 SP4
  • Squid Version : 1.9.4


Service Times are Counters instead of Gauges

Describe the bug
In the Prometheus definition, "A counter is a cumulative metric that represents a single monotonically increasing counter whose value can only increase or be reset to zero on restart".

Since service_time metrics are calculated on buckets that vary as time passes, the values of those metrics increase and decrease over time, so I think they should be treated as gauges.

The problem gets worse when using the exporter with the OpenTelemetry Prometheus receiver, as counters are translated into monotonic cumulative metrics, and every time there is a decrease in the value the startTimestamp is reset, so OpenTelemetry treats it as a new sequence.

To Reproduce
Executing the exporter and looking at the metrics from the endpoint, you can see the service_times are counters.

Scraping the exporter with the OpenTelemetry Collector and the Prometheus receiver, you get the result shown in the attached image whenever there is a decrease in the value of those counters.

Expected behavior
I would consider those metrics to be gauges.

OS (please complete the following information):

  • OS: all
  • Version 1.10.3


Could not fetch * metrics from squid instance: 403 error

Hello,
I am getting the following errors. Please help.

  1. Error messages
     Printed every 30 seconds after squid-exporter starts.
     Only two metrics show up in Prometheus:
     • squid_exporter_build_info
     • squid_up

2024/03/15 06:16:44 Could not fetch counter metrics from squid instance: error getting counters: Non success code 403 while fetching metrics
2024/03/15 06:16:44 Could not fetch service times metrics from squid instance: error getting service times: Non success code 403 while fetching metrics
2024/03/15 06:16:44 Could not fetch info metrics from squid instance: error getting info: Non success code 403 while fetching metrics

  2. Environment
     • Firewall: prometheus -> squid : port 9301
     • Config:
       acl prometheus src 'server IP'
       http_access allow manager prometheus
     • Start script (if I set it to the default or 127.0.0.1, I couldn't communicate with Prometheus, so I changed it to 0.0.0.0):
       squid-exporter -squid-hostname localhost -squid-port 8080 -listen 0.0.0.0:9301 &
  3. Squid version: Squid Cache: Version 6.1
  4. OS version: CentOS 7.9

Add Basic authentication

Are you considering adding Basic authentication?

For security reasons, access to the /metrics endpoint should require authentication before it can be accessed.
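
If the exporter indeed follows the Prometheus exporter-toolkit web configuration mentioned in the README above (an assumption based on the WebConfig option), basic auth on the /metrics endpoint could be configured in that file rather than in the exporter itself, for example:

basic_auth_users:
  prometheus: $2y$10$...   # bcrypt hash of the password, not the plain text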
