
sidekiq-prometheus-exporter's Introduction


Sidekiq Prometheus Exporter

– Hey! Sidekiq dashboard stats look like Prometheus metrics!?

– Indeed ... 🤔

Grafana dashboard example

Open the dashboard example file (Grafana 7), then open https://<your-grafana-url>/dashboard/import and paste the content of the file.


If you like the project and want to support me on my sleepless nights, you can

Support via PayPal ko-fi

Available metrics

(available starting with Sidekiq v4.1.0)

Standard

Name Type Description
sidekiq_processed_jobs_total counter The total number of processed jobs
sidekiq_failed_jobs_total counter The total number of failed jobs
sidekiq_workers gauge The number of workers across all the processes
sidekiq_processes gauge The number of processes
sidekiq_host_processes gauge The number of processes running on the host (labels: host, quiet)
sidekiq_busy_workers gauge The number of workers performing the job
sidekiq_enqueued_jobs gauge The number of enqueued jobs
sidekiq_scheduled_jobs gauge The number of jobs scheduled for a future execution
sidekiq_retry_jobs gauge The number of jobs scheduled for the next try
sidekiq_dead_jobs gauge The number of dead jobs
sidekiq_queue_latency_seconds gauge The number of seconds between the oldest job being pushed to the queue and the current time (labels: name)
sidekiq_queue_max_processing_time_seconds gauge The number of seconds between the oldest job of the queue being executed and the current time (labels: name)
sidekiq_queue_enqueued_jobs gauge The number of enqueued jobs in the queue (labels: name)
sidekiq_queue_workers gauge The number of workers serving the queue (labels: name)
sidekiq_queue_processes gauge The number of processes serving the queue (labels: name)
sidekiq_queue_busy_workers gauge The number of workers performing the job for the queue (labels: name)
All available contrib metrics

Sidekiq Scheduler
Name Type Description
sidekiq_scheduler_jobs gauge The number of recurring jobs
sidekiq_scheduler_enabled_jobs gauge The number of enabled recurring jobs
sidekiq_scheduler_time_since_last_run_minutes gauge The number of minutes since the last recurring job was executed (labels: name)

Sidekiq Cron
Name Type Description
sidekiq_cron_jobs gauge The number of cron jobs

Installation

Add this line to your application's Gemfile:

gem 'sidekiq-prometheus-exporter', '~> 0.1'

And then execute:

$ bundle

Or install it yourself as:

$ gem install sidekiq-prometheus-exporter -v '~> 0.1'

Rack application

For a fresh application that only exposes metrics, create a config.ru file with the following code inside

require 'sidekiq'
require 'sidekiq/prometheus/exporter'

Sidekiq.configure_client do |config|
  config.redis = {url: 'redis://<your-redis-host>:6379/0'}
end

run Sidekiq::Prometheus::Exporter.to_app

Use your favorite Rack server to start it up, for example

$ bundle exec rackup -p9292 -o0.0.0.0

and then curl http://0.0.0.0:9292/metrics
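
If the metrics endpoint is reachable from outside your private network, you may want to protect it. Below is a minimal sketch of the same config.ru with HTTP basic authentication in front of the exporter; the METRICS_PASSWORD variable is an illustration of your own choosing, not something the gem reads.

require 'rack'
require 'sidekiq'
require 'sidekiq/prometheus/exporter'

Sidekiq.configure_client do |config|
  config.redis = {url: 'redis://<your-redis-host>:6379/0'}
end

# METRICS_PASSWORD is a hypothetical variable you set for your own deployment
use Rack::Auth::Basic, 'metrics' do |_username, password|
  Rack::Utils.secure_compare(password, ENV.fetch('METRICS_PASSWORD', ''))
end

run Sidekiq::Prometheus::Exporter.to_app

With this in place, authenticated requests still return the metrics while unauthenticated ones get a 401.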

Rails application

If you have a Rails application, it's possible to mount the exporter as a Rack application in your routes.rb

Rails.application.routes.draw do
  # ... omitted ...

  # For more information please check here
  # https://api.rubyonrails.org/v5.1/classes/ActionDispatch/Routing/Mapper/Base.html#method-i-mount
  require 'sidekiq/prometheus/exporter'
  mount Sidekiq::Prometheus::Exporter => '/metrics'
end

Use the rails server binstub to start it up, for example

$ ./bin/rails s -p 9292 -b 0.0.0.0

and then curl http://0.0.0.0:9292/metrics
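
If the Rails application is reachable from the outside, you may also want to restrict access to the mounted endpoint. Here is a minimal sketch using a routing constraint; the localhost check is only an illustration, pick whatever rule fits your setup.

Rails.application.routes.draw do
  require 'sidekiq/prometheus/exporter'

  # Only answer requests coming from the local machine (e.g. a sidecar scraper)
  metrics_only_local = ->(request) { request.remote_ip == '127.0.0.1' }
  mount Sidekiq::Prometheus::Exporter => '/metrics', constraints: metrics_only_local
end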

Sidekiq Web (extreme)

If you are OK with metrics being exposed via the Sidekiq Web dashboard, because it lives inside your private network or only the Prometheus scraper will have access to the machine/port/etc., then add a few lines to your web config.ru

require 'sidekiq/web'
require 'sidekiq/prometheus/exporter'

Sidekiq::Web.register(Sidekiq::Prometheus::Exporter)

and then curl https://<your-sidekiq-web-uri>/metrics
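
For reference, a complete standalone web config.ru for this setup could look roughly like the sketch below. It assumes the session middleware that Sidekiq Web needs outside of Rails is set up the way the Sidekiq documentation suggests; SESSION_SECRET is a placeholder of your own choosing.

require 'sidekiq/web'
require 'sidekiq/prometheus/exporter'

Sidekiq.configure_client do |config|
  config.redis = {url: 'redis://<your-redis-host>:6379/0'}
end

Sidekiq::Web.register(Sidekiq::Prometheus::Exporter)

# Sidekiq Web needs a session middleware when it runs outside of Rails
use Rack::Session::Cookie, secret: ENV.fetch('SESSION_SECRET'), same_site: true, max_age: 86_400
run Sidekiq::Web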

Docker

If isolation is what you are after, you can run the already prepared official Rack application in a Docker container by using the public image (check out this README for more)

$ docker run -it --rm \
             -p 9292:9292 \
             -e REDIS_URL=redis://<your-redis-host>:6379/0 \
             strech/sidekiq-prometheus-exporter

and then curl http://0.0.0.0:9292/metrics

Helm

And finally, the cloud solution (who doesn't have one these days?). Easy to install, easy to use. A fully functioning Helm package based on the official Docker image, which comes with lots of configuration options

$ helm repo add strech https://strech.github.io/sidekiq-prometheus-exporter
"strech" has been added to your repositories

$ helm install strech/sidekiq-prometheus-exporter --name sidekiq-metrics

To curl your metrics, please follow the post-installation guide.

Tips & Tricks

If you want to see a banner at exporter startup time showing which exporters are enabled, add this call to your config.ru (after the exporter configure statement)

require 'sidekiq/prometheus/exporter'

puts Sidekiq::Prometheus::Exporter.banner

💢 If you don't see your banner, try writing it to STDERR instead of STDOUT.
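
For example, a small sketch of sending the banner to STDERR instead:

require 'sidekiq/prometheus/exporter'

# $stderr is unbuffered by default, so the banner shows up even when STDOUT is swallowed by the server
$stderr.puts Sidekiq::Prometheus::Exporter.banner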

Sidekiq Contribs

By default, we try to detect as many Sidekiq contribs as possible and add their metrics to the output. But you can change this behaviour by configuring the exporters setting

require 'sidekiq/prometheus/exporter'

# Keep the default auto-detect behaviour
Sidekiq::Prometheus::Exporter.configure do |config|
  config.exporters = :auto_detect
end

# Keep only standard (by default) and cron metrics
Sidekiq::Prometheus::Exporter.configure do |config|
  config.exporters = %i(cron)
end

💡 If you didn't find the contrib you would like to see, don't hesitate to open an issue and describe what you think we should export.

Contributing

Bug reports and pull requests to support earlier versions of Sidekiq are welcome on GitHub at https://github.com/Strech/sidekiq-prometheus-exporter/issues.

If you are missing your favourite Sidekiq contrib and want to contribute, please make sure that you are following naming conventions from Prometheus.

License

Please see LICENSE for licensing details.


sidekiq-prometheus-exporter's Issues

Retry by queue gauge?

Is there any way to have a time series of the number of retries per queue? Thanks in advance.
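
A rough sketch of how such a per-queue retry count could be computed with Sidekiq's own API (this is not a metric the exporter currently exposes):

require 'sidekiq/api'

# Walk the retry set and count entries by the queue stored in each job's payload
retries_per_queue = Sidekiq::RetrySet.new.each_with_object(Hash.new(0)) do |job, counts|
  counts[job.item['queue']] += 1
end

retries_per_queue # => e.g. {"default" => 3, "mailers" => 1}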

TypeError: can't convert nil into Integer

When adding the exporter as a standalone service, I get the following error when running curl localhost:9292/metrics

[2019-05-20 17:01:12] INFO  WEBrick::HTTPServer#start: pid=4744 port=9293
TypeError: can't convert nil into Integer
        (erb):11:in `format'
        (erb):11:in `to_s'
        /usr/share/ruby/erb.rb:876:in `eval'
        /usr/share/ruby/erb.rb:876:in `result'
        /usr/local/share/gems/gems/sidekiq-prometheus-exporter-0.1.11/lib/sidekiq/prometheus/exporter/standard.rb:26:in `to_s'
        /usr/local/share/gems/gems/sidekiq-prometheus-exporter-0.1.11/lib/sidekiq/prometheus/exporter/exporters.rb:38:in `block in to_s'
        /usr/local/share/gems/gems/sidekiq-prometheus-exporter-0.1.11/lib/sidekiq/prometheus/exporter/exporters.rb:38:in `map'
        /usr/local/share/gems/gems/sidekiq-prometheus-exporter-0.1.11/lib/sidekiq/prometheus/exporter/exporters.rb:38:in `to_s'
        /usr/local/share/gems/gems/sidekiq-prometheus-exporter-0.1.11/lib/sidekiq/prometheus/exporter.rb:48:in `call'
        /usr/local/share/gems/gems/rack-2.0.7/lib/rack/urlmap.rb:68:in `block in call'
        /usr/local/share/gems/gems/rack-2.0.7/lib/rack/urlmap.rb:53:in `each'
        /usr/local/share/gems/gems/rack-2.0.7/lib/rack/urlmap.rb:53:in `call'
        /usr/local/share/gems/gems/rack-2.0.7/lib/rack/tempfile_reaper.rb:15:in `call'
        /usr/local/share/gems/gems/rack-2.0.7/lib/rack/lint.rb:49:in `_call'
        /usr/local/share/gems/gems/rack-2.0.7/lib/rack/lint.rb:37:in `call'
        /usr/local/share/gems/gems/rack-2.0.7/lib/rack/show_exceptions.rb:23:in `call'
        /usr/local/share/gems/gems/rack-2.0.7/lib/rack/common_logger.rb:33:in `call'
        /usr/local/share/gems/gems/rack-2.0.7/lib/rack/chunked.rb:54:in `call'
        /usr/local/share/gems/gems/rack-2.0.7/lib/rack/content_length.rb:15:in `call'
        /usr/local/share/gems/gems/rack-2.0.7/lib/rack/handler/webrick.rb:86:in `service'
        /usr/share/ruby/webrick/httpserver.rb:140:in `service'
        /usr/share/ruby/webrick/httpserver.rb:96:in `run'
        /usr/share/ruby/webrick/server.rb:307:in `block in start_thread'
127.0.0.1 - - [20/May/2019:17:01:40 +0000] "GET /metrics HTTP/1.1" 500 63791 0.1312
 cat config.ru
require 'sidekiq'
require 'sidekiq/prometheus/exporter'

Sidekiq.configure_client do |config|
  config.redis = {url: 'redis://redis:6379'}
end

run Sidekiq::Prometheus::Exporter.to_app

To start it: bundle exec rackup -p9292 -o0.0.0.0

Any ideas?

I'm using Sidekiq Enterprise.

NoMethodError at /metrics undefined method `sum' for nil:NilClass

Hello guys,

I'm having a weird error on sidekiq exporter:

NoMethodError: undefined method `first' for nil:NilClass
	/usr/local/bundle/gems/sidekiq-6.2.1/lib/sidekiq/api.rb:86:in `fetch_stats!'
	/usr/local/bundle/gems/sidekiq-6.2.1/lib/sidekiq/api.rb:11:in `initialize'
	/usr/local/bundle/gems/sidekiq-prometheus-exporter-0.1.15/lib/sidekiq/prometheus/exporter/standard.rb:19:in `new'
	/usr/local/bundle/gems/sidekiq-prometheus-exporter-0.1.15/lib/sidekiq/prometheus/exporter/standard.rb:19:in `initialize'
	/usr/local/bundle/gems/sidekiq-prometheus-exporter-0.1.15/lib/sidekiq/prometheus/exporter/exporters.rb:38:in `new'
	/usr/local/bundle/gems/sidekiq-prometheus-exporter-0.1.15/lib/sidekiq/prometheus/exporter/exporters.rb:38:in `block in to_s'
	/usr/local/bundle/gems/sidekiq-prometheus-exporter-0.1.15/lib/sidekiq/prometheus/exporter/exporters.rb:38:in `map'
	/usr/local/bundle/gems/sidekiq-prometheus-exporter-0.1.15/lib/sidekiq/prometheus/exporter/exporters.rb:38:in `to_s'
	/usr/local/bundle/gems/sidekiq-prometheus-exporter-0.1.15/lib/sidekiq/prometheus/exporter.rb:50:in `call'
	/usr/local/bundle/gems/rack-2.0.9/lib/rack/urlmap.rb:68:in `block in call'
	/usr/local/bundle/gems/rack-2.0.9/lib/rack/urlmap.rb:53:in `each'
	/usr/local/bundle/gems/rack-2.0.9/lib/rack/urlmap.rb:53:in `call'
	/usr/local/bundle/gems/rack-2.0.9/lib/rack/tempfile_reaper.rb:15:in `call'
	/usr/local/bundle/gems/rack-2.0.9/lib/rack/lint.rb:49:in `_call'
	/usr/local/bundle/gems/rack-2.0.9/lib/rack/lint.rb:37:in `call'
	/usr/local/bundle/gems/rack-2.0.9/lib/rack/show_exceptions.rb:23:in `call'
	/usr/local/bundle/gems/rack-2.0.9/lib/rack/common_logger.rb:33:in `call'
	/usr/local/bundle/gems/rack-2.0.9/lib/rack/chunked.rb:54:in `call'
	/usr/local/bundle/gems/rack-2.0.9/lib/rack/content_length.rb:15:in `call'
	/usr/local/bundle/gems/rack-2.0.9/lib/rack/handler/webrick.rb:86:in `service'
	/usr/local/lib/ruby/2.7.0/webrick/httpserver.rb:140:in `service'
	/usr/local/lib/ruby/2.7.0/webrick/httpserver.rb:96:in `run'
	/usr/local/lib/ruby/2.7.0/webrick/server.rb:307:in `block in start_thread'

Any idea?

Per-queue `sidekiq_processed_jobs_total` metric

๐Ÿ‘‹๐Ÿป

In my experience using this project to monitor Sidekiq, sidekiq_processed_jobs_total is a very useful metric because it's a counter. This means it's easy to get meaningful rates at which jobs are being churned, but unfortunately it does not provide a queue label.

sidekiq_queue_enqueued_jobs does provide a queue label, but it's a gauge rather than a counter, which means it provides only a "snapshot" in time, so it's not very useful for knowing which queues have more activity, for example.

Would it be possible to have a queue label in sidekiq_processed_jobs_total, or is this not information that Sidekiq provides?

Thanks!

Docker image does not work with Google Memorystore Redis instances

The Docker image is using Sidekiq 5.2.8. That version of Sidekiq still tries to set a client name for the Redis connection. This feature has been disabled by default in Sidekiq 6.0.7 by sidekiq/sidekiq#4479.

The problem is that SaaS providers, e.g. Google Cloud, might block calls to the CLIENT command in Redis. Thus sidekiq-prometheus-exporter fails to connect to Google Memorystore Redis instances.

There are probably two options to fix this:

  1. Update the sidekiq gem version used by the Docker image. However, I don't know if there was a specific reason to pick that version.
  2. Update config.ru to handle some additional environment variable, e.g. REDIS_DISABLE_ID, and based on that add id: nil to the config hash for Sidekiq.

I'm happy to prepare a PR if needed. But I would need to know, what the preferred approach would be.
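
As an illustration of option 2, a hand-rolled config.ru can already disable the connection id today; whether the Docker image should expose this via an env variable such as the proposed REDIS_DISABLE_ID is exactly what this issue asks. A sketch:

require 'sidekiq'
require 'sidekiq/prometheus/exporter'

Sidekiq.configure_client do |config|
  # id: nil keeps Sidekiq from naming the connection via the Redis CLIENT command,
  # which providers like Google Memorystore may block
  config.redis = {url: 'redis://<your-redis-host>:6379/0', id: nil}
end

run Sidekiq::Prometheus::Exporter.to_app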

ARM support

Can you start building images for the arm arch?

What may cause the exporter to OOM?

We are seeing some spikes in RAM consumption with both 0.1.13 and 0.1.15 versions, wondering what could've caused them.

image

While running under our K8s setup policies (which require resource limits), these spikes cause sidekiq-prometheus-exporter pods to exit with OOM and then restart, with no significant logs to be observed. We've tried setting memory limits for the container from 500Mi to 4Gi; however, even with 4Gi this still happens, which seems odd since we didn't expect an exporter to consume so many resources.

I'm submitting this issue hoping to understand what types of events could trigger the exporter to consume more RAM than usual.

Integration with official Prometheus ruby client

Thanks for the great gem. It worked like a charm.

I was, however, wondering if there was any best practice for integrating this with the official ruby client so that the /metrics endpoint exposes both Rails and Sidekiq metrics?

Appreciate any pointers.
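
There is no built-in integration, but one possible arrangement (a sketch, assuming the prometheus-client gem and its Rack middleware) is to serve both registries from one Rack app under different paths and scrape them as separate targets:

require 'rack'
require 'prometheus/middleware/exporter'
require 'sidekiq'
require 'sidekiq/prometheus/exporter'

Sidekiq.configure_client do |config|
  config.redis = {url: 'redis://<your-redis-host>:6379/0'}
end

# /sidekiq/metrics -> metrics from this gem
map '/sidekiq' do
  run Sidekiq::Prometheus::Exporter.to_app
end

# /metrics -> whatever is registered with the official prometheus-client
map '/' do
  use Prometheus::Middleware::Exporter
  run ->(_env) { [404, {'Content-Type' => 'text/plain'}, ['Not Found']] }
end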

Using the exporter on Kubernetes with Redis on AWS ElastiCache gives the error ```Redis::ConnectionError: Connection lost (ECONNRESET)```

I want to use sidekiq-prometheus-exporter on Kubernetes in the monitoring namespace. The Redis instance whose details I pass via environment variables is in AWS ElastiCache, and the exporter gives the error below when I curl the pod with curl http://127.0.0.1:9292/metrics:

~ kubectl version Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.8", GitCommit:"9f2892aab98fe339f3bd70e3c470144299398ace", GitTreeState:"clean", BuildDate:"2020-08-13T16:12:48Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"darwin/amd64"} Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.8", GitCommit:"9f2892aab98fe339f3bd70e3c470144299398ace", GitTreeState:"clean", BuildDate:"2020-08-13T16:04:18Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}

Environment variables for redis which I am setting in values.yaml are:
REDIS_HOST: "some.endpoint.cache.amazonaws.com"
REDIS_PORT: 6379
REDIS_PASSWORD: "************"
REDIS_DB_NUMBER: "0"

127.0.0.1 - - [05/Mar/2021:08:38:36 +0000] "GET /metrics HTTP/1.1" 500 140876 0.0693
Redis::ConnectionError: Connection lost (ECONNRESET)
        /usr/local/bundle/gems/redis-4.2.5/lib/redis/client.rb:275:in `rescue in io'
        /usr/local/bundle/gems/redis-4.2.5/lib/redis/client.rb:267:in `io'
        /usr/local/bundle/gems/redis-4.2.5/lib/redis/client.rb:279:in `read'
        /usr/local/bundle/gems/redis-4.2.5/lib/redis/client.rb:131:in `block in call'
        /usr/local/bundle/gems/redis-4.2.5/lib/redis/client.rb:248:in `block (2 levels) in process'
        /usr/local/bundle/gems/redis-4.2.5/lib/redis/client.rb:389:in `ensure_connected'
        /usr/local/bundle/gems/redis-4.2.5/lib/redis/client.rb:238:in `block in process'
        /usr/local/bundle/gems/redis-4.2.5/lib/redis/client.rb:325:in `logging'
        /usr/local/bundle/gems/redis-4.2.5/lib/redis/client.rb:237:in `process'
        /usr/local/bundle/gems/redis-4.2.5/lib/redis/client.rb:131:in `call'
        /usr/local/bundle/gems/redis-4.2.5/lib/redis/client.rb:113:in `block in connect'
        /usr/local/bundle/gems/redis-4.2.5/lib/redis/client.rb:313:in `with_reconnect'
        /usr/local/bundle/gems/redis-4.2.5/lib/redis/client.rb:111:in `connect'
        /usr/local/bundle/gems/redis-4.2.5/lib/redis/client.rb:386:in `ensure_connected'
        /usr/local/bundle/gems/redis-4.2.5/lib/redis/client.rb:238:in `block in process'
        /usr/local/bundle/gems/redis-4.2.5/lib/redis/client.rb:325:in `logging'
        /usr/local/bundle/gems/redis-4.2.5/lib/redis/client.rb:237:in `process'
        /usr/local/bundle/gems/redis-4.2.5/lib/redis/client.rb:203:in `call_pipelined'
        /usr/local/bundle/gems/redis-4.2.5/lib/redis/client.rb:170:in `block in call_pipeline'
        /usr/local/bundle/gems/redis-4.2.5/lib/redis/client.rb:313:in `with_reconnect'
        /usr/local/bundle/gems/redis-4.2.5/lib/redis/client.rb:168:in `call_pipeline'
        /usr/local/bundle/gems/redis-4.2.5/lib/redis.rb:2445:in `block in pipelined'
        /usr/local/bundle/gems/redis-4.2.5/lib/redis.rb:69:in `block in synchronize'
        /usr/local/lib/ruby/2.7.0/monitor.rb:202:in `synchronize'
        /usr/local/lib/ruby/2.7.0/monitor.rb:202:in `mon_synchronize'
        /usr/local/bundle/gems/redis-4.2.5/lib/redis.rb:69:in `synchronize'
        /usr/local/bundle/gems/redis-4.2.5/lib/redis.rb:2441:in `pipelined'
        /usr/local/bundle/gems/sidekiq-5.2.8/lib/sidekiq/api.rb:68:in `block in fetch_stats!'
        /usr/local/bundle/gems/sidekiq-5.2.8/lib/sidekiq.rb:97:in `block in redis'
        /usr/local/bundle/gems/connection_pool-2.2.3/lib/connection_pool.rb:63:in `block (2 levels) in with'
        /usr/local/bundle/gems/connection_pool-2.2.3/lib/connection_pool.rb:62:in `handle_interrupt'
        /usr/local/bundle/gems/connection_pool-2.2.3/lib/connection_pool.rb:62:in `block in with'
        /usr/local/bundle/gems/connection_pool-2.2.3/lib/connection_pool.rb:59:in `handle_interrupt'
        /usr/local/bundle/gems/connection_pool-2.2.3/lib/connection_pool.rb:59:in `with'
        /usr/local/bundle/gems/sidekiq-5.2.8/lib/sidekiq.rb:94:in `redis'
        /usr/local/bundle/gems/sidekiq-5.2.8/lib/sidekiq/api.rb:67:in `fetch_stats!'
        /usr/local/bundle/gems/sidekiq-5.2.8/lib/sidekiq/api.rb:23:in `initialize'
        /usr/local/bundle/gems/sidekiq-prometheus-exporter-0.1.15/lib/sidekiq/prometheus/exporter/standard.rb:19:in `new'
        /usr/local/bundle/gems/sidekiq-prometheus-exporter-0.1.15/lib/sidekiq/prometheus/exporter/standard.rb:19:in `initialize'
        /usr/local/bundle/gems/sidekiq-prometheus-exporter-0.1.15/lib/sidekiq/prometheus/exporter/exporters.rb:38:in `new'
        /usr/local/bundle/gems/sidekiq-prometheus-exporter-0.1.15/lib/sidekiq/prometheus/exporter/exporters.rb:38:in `block in to_s'
        /usr/local/bundle/gems/sidekiq-prometheus-exporter-0.1.15/lib/sidekiq/prometheus/exporter/exporters.rb:38:in `map'
        /usr/local/bundle/gems/sidekiq-prometheus-exporter-0.1.15/lib/sidekiq/prometheus/exporter/exporters.rb:38:in `to_s'
        /usr/local/bundle/gems/sidekiq-prometheus-exporter-0.1.15/lib/sidekiq/prometheus/exporter.rb:50:in `call'
        /usr/local/bundle/gems/rack-2.0.9/lib/rack/urlmap.rb:68:in `block in call'
        /usr/local/bundle/gems/rack-2.0.9/lib/rack/urlmap.rb:53:in `each'
        /usr/local/bundle/gems/rack-2.0.9/lib/rack/urlmap.rb:53:in `call'
        /usr/local/bundle/gems/rack-2.0.9/lib/rack/tempfile_reaper.rb:15:in `call'
        /usr/local/bundle/gems/rack-2.0.9/lib/rack/lint.rb:49:in `_call'
        /usr/local/bundle/gems/rack-2.0.9/lib/rack/lint.rb:37:in `call'
        /usr/local/bundle/gems/rack-2.0.9/lib/rack/show_exceptions.rb:23:in `call'
        /usr/local/bundle/gems/rack-2.0.9/lib/rack/common_logger.rb:33:in `call'
        /usr/local/bundle/gems/rack-2.0.9/lib/rack/chunked.rb:54:in `call'
        /usr/local/bundle/gems/rack-2.0.9/lib/rack/content_length.rb:15:in `call'
        /usr/local/bundle/gems/rack-2.0.9/lib/rack/handler/webrick.rb:86:in `service'
        /usr/local/lib/ruby/2.7.0/webrick/httpserver.rb:140:in `service'
        /usr/local/lib/ruby/2.7.0/webrick/httpserver.rb:96:in `run'
        /usr/local/lib/ruby/2.7.0/webrick/server.rb:307:in `block in start_thread'

I have added a ServiceMonitor, as I use prometheus-operator, and it shows the target as down as well, with the error server returned HTTP status 500 Internal Server Error.

How can I resolve this issue?

"FrozenError" in Rack 2.2

Using this exporter on rack 2.2.2, I got the following stack trace on every request -

FrozenError (can't modify frozen Hash):
rack (2.2.2) lib/rack/etag.rb:35:in `call'
rack (2.2.2) lib/rack/conditional_get.rb:27:in `call'
rack (2.2.2) lib/rack/head.rb:12:in `call'
(rest of the stack removed for brevity)

Now, according to rack's changelog for 2.2.0,

Etag will continue sending ETag even if the response should not be cached.

The problem is that Rack::Etag modifies a response's headers, and this exporter freezes its headers. The no-cache header used to prevent these two pieces of code from conflicting, but that's no longer the case.

Individual applications can work around this by deleting their Rack::ETag middleware, but that's probably not ideal. I would instead propose that we either make the exporter return non-frozen headers, or set a Last-Modified header equal to the current time. I can open a PR, but let me know if there's a specific approach you prefer.
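
Until a fix lands, a minimal sketch of the per-application workaround mentioned above, i.e. dropping the conflicting middleware in a Rails app:

# config/application.rb, inside the Application class:
# Rack::ETag is the middleware that tries to mutate the exporter's frozen headers
config.middleware.delete ::Rack::ETag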

Slashes (`/`) in redis password break URI generation

If the redis password contains a forward slash (/), an invalid URI is generated:

/usr/local/lib/ruby/3.2.0/uri/rfc3986_parser.rb:66:in `split': bad URI(is not URI?): "redis://redis:passwordwitha/slash@redishostname:6379/0" (URI::InvalidURIError)
    from /usr/local/lib/ruby/3.2.0/uri/rfc3986_parser.rb:71:in `parse'
    from /usr/local/lib/ruby/3.2.0/uri/common.rb:193:in `parse'
    from /usr/local/bundle/gems/sidekiq-6.5.8/lib/sidekiq/redis_connection.rb:133:in `log_info'
    from /usr/local/bundle/gems/sidekiq-6.5.8/lib/sidekiq/redis_connection.rb:103:in `create'
    from /usr/local/bundle/gems/sidekiq-6.5.8/lib/sidekiq.rb:205:in `redis='
    from /app/config.ru:42:in `block (2 levels) in <main>'
    from /usr/local/bundle/gems/sidekiq-6.5.8/lib/sidekiq.rb:152:in `configure_client'
    from /app/config.ru:42:in `block in <main>'
    from /usr/local/bundle/gems/rack-2.2.7/lib/rack/builder.rb:116:in `eval'
    from /usr/local/bundle/gems/rack-2.2.7/lib/rack/builder.rb:116:in `new_from_string'
    from /usr/local/bundle/gems/rack-2.2.7/lib/rack/builder.rb:105:in `load_file'
    from /usr/local/bundle/gems/rack-2.2.7/lib/rack/builder.rb:66:in `parse_file'
    from /usr/local/bundle/gems/rack-2.2.7/lib/rack/server.rb:349:in `build_app_and_options_from_config'
    from /usr/local/bundle/gems/rack-2.2.7/lib/rack/server.rb:249:in `app'
    from /usr/local/bundle/gems/rack-2.2.7/lib/rack/server.rb:422:in `wrapped_app'
    from /usr/local/bundle/gems/rack-2.2.7/lib/rack/server.rb:312:in `block in start'
    from /usr/local/bundle/gems/rack-2.2.7/lib/rack/server.rb:379:in `handle_profiling'
    from /usr/local/bundle/gems/rack-2.2.7/lib/rack/server.rb:311:in `start'
    from /usr/local/bundle/gems/rack-2.2.7/lib/rack/server.rb:168:in `start'
    from /usr/local/bundle/gems/rack-2.2.7/bin/rackup:5:in `<top (required)>'
    from /usr/local/bundle/bin/rackup:25:in `load'
    from /usr/local/bundle/bin/rackup:25:in `<main>'
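
As a workaround until the URI generation is fixed, the password can be percent-encoded before it ends up in the URL; a small sketch (CGI.escape turns / into %2F, and the redis client should decode it again when parsing the URL):

require 'cgi'

password = CGI.escape('passwordwitha/slash') # => "passwordwitha%2Fslash"
url      = "redis://:#{password}@redishostname:6379/0"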

Couldn't find handler

Since the release yesterday, the Docker container is erroring and restarting:

/usr/local/bundle/gems/rack-2.2.6.4/lib/rack/handler.rb:45:in `pick': Couldn't find handler for: puma, thin, falcon, webrick. (LoadError)
	from /usr/local/bundle/gems/rack-2.2.6.4/lib/rack/handler.rb:60:in `default'
	from /usr/local/bundle/gems/rack-2.2.6.4/lib/rack/server.rb:334:in `server'
	from /usr/local/bundle/gems/rack-2.2.6.4/lib/rack/server.rb:327:in `start'
	from /usr/local/bundle/gems/rack-2.2.6.4/lib/rack/server.rb:168:in `start'
	from /usr/local/bundle/gems/rack-2.2.6.4/bin/rackup:5:in `<top (required)>'
	from /usr/local/bundle/bin/rackup:25:in `load'
	from /usr/local/bundle/bin/rackup:25:in `<main>'

I've updated the REDIS_URL to append /0 but that hasn't worked. I'm not sure if #57 is related to this?

My environment variables:
image

Thanks for any help you can provide.

Support usernames in Redis

Redis 6 and later have improved ACLs with more granular access: https://redis.io/docs/management/security/acl/
However, the exporter doesn't support the new schema.
It would be really cool to add username support:

  • probably an env variable REDIS_USERNAME
  • and a new url format: #{scheme}://#{username}:#{password}@#{host}:#{port}/#{db_number}

Operator to allow auto-monitoring of redis clusters defined in configs in K8s

Problem:
When you have multiple Redis clusters in your infrastructure, deploying the Prometheus exporter efficiently can become a challenge. Redis configurations have to be duplicated and maintained for the Helm chart or whatever deployment mechanism is used.

Solution:
Kubernetes operator to watch the Redis configurations used by applications.

If we have labels/annotations on a ConfigMap/Secret to allow/signal the operator to discover the Redis configurations, it could create a set of resources (Deployment, Service, ServiceMonitor, etc.) and wire them into the prometheus-operator. Another advantage of the operator approach is the ability to handle updates & cleanups as well.

Thoughts?

If this solution plus #26 #27 is getting too Kubernetes-specific and you would like to move it to a separate repo, please advise how you would like to take this forward.

configuring with Rails

I'm trying to get this to work with a Rails app, and it's not totally clear how I need to set things up to get the /metrics route. I've figured out how to get /sidekiq/metrics with those configuration options. Do you have any insight into rails configuration?

Expose as docker container

While running sidekiq workers in containerised environments, it would be helpful to have the exporter run within a container native to the rest of the application ecosystem.

Couldn't find handler for: puma, thin, falcon, webrick. (LoadError)

Hi,

I deployed the exporter using Helm. The deployment fails to load:

/usr/local/bundle/gems/rack-2.2.6.4/lib/rack/handler.rb:45:in `pick': Couldn't find handler for: puma, thin, falcon, webrick. (LoadError)

Helm values:

serviceMonitor:
  enabled: true
  interval: 15s

envFrom:
  type: secretRef
  name: sidekiq-metrics

Image: docker.io/strech/sidekiq-prometheus-exporter:0.2.0

It's connecting using TLS.

Do you have any idea what's happening? :)

redis-namespace endless warnings

Hi @Strech,

First of all, thanks for this great project, really appreciate your work here!
Second, we started to use the out-of-the-box Docker image you provided, but in our production env
we are seeing an endless amount of warnings that Redis#exists(key) will return an Integer in redis-rb 4.3.

As part of my investigation, it seems that the redis-namespace gem is the root cause of those warnings, and
it seems that it was already fixed. Is there any chance to update the version of redis-namespace to the latest one, 1.8.1?

Sources:

  1. Sidekiq issue describing the warning message.
  2. The fix redis-namespace pushed.
  3. The latest version of redis-namespace.
  4. redis-namespace version on docker file that I wish we can change.

Our logs with the warnings:
Screen Shot 2021-03-22 at 17 03 24

Host helm chart repository

Installing the chart from source in an automated way is not straightforward. It would be great to host the chart as an artefact somewhere like GitHub Pages. I am evaluating the solution and will share it shortly. Please share if you would like to host it in a different way.
