
sidekiq-limit_fetch's Introduction

Description

This project has been taken over by @deanpcmad. Original code by @brainopia.

Sidekiq strategy to support granular queue control – limiting, pausing, blocking, and querying.


Installation

Add this line to your application's Gemfile:

gem 'sidekiq-limit_fetch'

Then bundle install.

Limitations

Important note: at this moment, sidekiq-limit_fetch is incompatible with:

  • Sidekiq Pro's reliable_fetch
  • sidekiq-rate-limiter
  • any other plugin that overrides Sidekiq's fetch strategy.

Usage

If you are using this with Rails, you don't need to require it as it's done automatically.

To use this Gem in other Ruby projects, just add require 'sidekiq-limit_fetch'.
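For example, a minimal non-Rails setup might look like this (a sketch only; the worker class and queue name here are hypothetical):

# boot.rb - minimal non-Rails setup (hypothetical example)
require 'sidekiq'
require 'sidekiq-limit_fetch'

class ExampleWorker
  include Sidekiq::Worker
  sidekiq_options queue: :example

  def perform(id)
    puts "processing #{id}"
  end
end

You would then start the process with something like bundle exec sidekiq -r ./boot.rb -q example.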

Limits

Specify limits which you want to place on queues inside sidekiq.yml:

:limits:
  queue_name1: 5
  queue_name2: 10

Or set it dynamically in your code:

Sidekiq::Queue['queue_name1'].limit = 5
Sidekiq::Queue['queue_name2'].limit = 10

In these examples, jobs from queue_name1 will be run by at most 5 workers at a time, and queue_name2 will have no more than 10 busy workers simultaneously.

The ability to set limits dynamically allows you to resize worker distribution among queues at any time.
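For instance, a sketch of rebalancing at runtime (the queue names, limits, and schedule here are hypothetical):

# Hypothetical example: shift capacity toward reports during off-peak hours
if Time.now.hour.between?(0, 6)
  Sidekiq::Queue['reports'].limit  = 20
  Sidekiq::Queue['webhooks'].limit = 2
else
  Sidekiq::Queue['reports'].limit  = 5
  Sidekiq::Queue['webhooks'].limit = 10
end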

Limits per process

If you use multiple sidekiq processes then you can specify limits per process:

:process_limits:
  queue_name: 2

Or set it in your code:

Sidekiq::Queue['queue_name'].process_limit = 2
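Global and per-process limits can be combined; whichever bound is reached first applies. A sketch with hypothetical numbers:

# Hypothetical example: at most 6 busy workers globally,
# and at most 2 per Sidekiq process.
Sidekiq::Queue['queue_name'].limit         = 6
Sidekiq::Queue['queue_name'].process_limit = 2
# With 2 processes: at most 2 + 2 = 4 busy workers (the process limit binds).
# With 4 processes: at most 6 busy workers (the global limit binds).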

Busy workers by queue

You can see how many workers are currently handling a queue:

Sidekiq::Queue['name'].busy # number of busy workers
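One way to use this is to wait for in-flight work on a queue to drain, e.g. before a deploy (a sketch; the queue name and one-second poll interval are arbitrary):

# Poll until no workers are busy on the queue (hypothetical queue name)
sleep 1 while Sidekiq::Queue['reports'].busy > 0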

Pauses

You can also pause your queues temporarily. Their limits are preserved when they are unpaused.

Sidekiq::Queue['name'].pause # prevents workers from running tasks from this queue
Sidekiq::Queue['name'].paused? # => true
Sidekiq::Queue['name'].unpause # allows workers to use the queue
Sidekiq::Queue['name'].pause_for_ms(1000) # will pause for a second
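A common pattern is to pause a queue around a maintenance task and guarantee it gets resumed (a sketch; the queue name and run_maintenance! are hypothetical placeholders):

queue = Sidekiq::Queue['reports'] # hypothetical queue name
queue.pause
begin
  run_maintenance! # hypothetical maintenance step
ensure
  queue.unpause # the queue's limit is preserved across the pause
end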

Blocking queue mode

If you use strict queue ordering (which is used when you don't specify queue weights), then you can give queues blocking status. While a task from a blocking queue is executing, no new tasks from lower-priority queues will be run. For example:

:queues:
  - a
  - b
  - c
:blocking:
  - b

In this case, while a task from queue b is running, no new tasks from queue c will be started.

You can also enable and disable blocking mode for queues on the fly:

Sidekiq::Queue['name'].block
Sidekiq::Queue['name'].blocking? # => true
Sidekiq::Queue['name'].unblock
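For example, you could make a queue blocking only for the duration of a critical batch (a sketch; the queue name is hypothetical):

queue = Sidekiq::Queue['critical_batch'] # hypothetical queue name
queue.block
begin
  # ... while the batch runs, lower-priority queues get no new tasks ...
ensure
  queue.unblock
end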

Advanced blocking queues

You can also block on an array of queues. While any of them is running, only higher-priority queues and queues from the same blocking group can run. It is easier to understand with an example:

:queues:
  - a
  - b
  - c
  - d
:blocking:
  - [b, c]

In this case, tasks from d will be blocked while a task from queue b or c is executing.

You can dynamically set exceptions for queue blocking:

Sidekiq::Queue['queue1'].block_except 'queue2'

Dynamic queues

You can also support dynamic queues – queues that are not listed in sidekiq.yml but that have tasks pushed to them (usually with Sidekiq::Client.push).

To use this mode, add the following line to sidekiq.yml:

:dynamic: true

or

  :dynamic:
    :exclude:
      - excluded_queue

to exclude excluded_queue from dynamic queue handling.

Dynamic queues will be run at the lowest priority.
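With dynamic mode enabled, a job pushed to a queue that is absent from sidekiq.yml will still be picked up. For example (the worker class and per-tenant queue name are hypothetical):

# Hypothetical example: push to a queue that exists only at runtime
Sidekiq::Client.push(
  'class' => TenantWorker,
  'args'  => [42],
  'queue' => 'tenant_42'
)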

Maintenance

If you use flushdb, restart the sidekiq process to re-populate the dynamic configuration.

sidekiq-limit_fetch's People

Contributors

alexey-yanchenko, alpaca-tc, bekicot, bfcoder, bitdeli-chef, bobbymcwho, cgunther, dany1468, davidbiehl, deanpcmad, dlanileonardo, evgeniradev, gazay, jcsrb, jer0m, jlestavel, josepjaume, joshmosh, lukechesser, matt-domsch-sp, nepalez, nisyuu, novozhenets, peikk0, petergoldstein, rubyconvict, spajus, stamm, v-kolesnikov, voke


sidekiq-limit_fetch's Issues

Cannot start sidekiq with :limits: key in sidekiq.yml

After upgrading to the latest version of sidekiq-limit_fetch, I get an error when I attempt to set up a limit for a queue. To make this feature work again, I have to remove the :limits: key from sidekiq.yml and set up the limit dynamically.

2014-04-07T20:41:21Z 9325 TID-ovng9cd7k INFO: Starting processing, hit Ctrl-C to stop
2014-04-07T20:41:21Z 9325 TID-ovnhoslvg WARN: {:context=>"Manager#processor_died died"}
2014-04-07T20:41:21Z 9325 TID-ovnhoslvg WARN: undefined method `async' for nil:NilClass
2014-04-07T20:41:21Z 9325 TID-ovnhoslvg WARN: /Users/mourad/.rvm/gems/ruby-2.0.0-p451@ifg/gems/sidekiq-2.17.6/lib/sidekiq/manager.rb:188:in `dispatch'
/Users/mourad/.rvm/gems/ruby-2.0.0-p451@ifg/gems/sidekiq-2.17.6/lib/sidekiq/manager.rb:103:in `block in processor_died'
/Users/mourad/.rvm/gems/ruby-2.0.0-p451@ifg/gems/sidekiq-2.17.6/lib/sidekiq/util.rb:15:in `watchdog'
/Users/mourad/.rvm/gems/ruby-2.0.0-p451@ifg/gems/sidekiq-2.17.6/lib/sidekiq/manager.rb:94:in `processor_died'
/Users/mourad/.rvm/gems/ruby-2.0.0-p451@ifg/gems/celluloid-0.15.2/lib/celluloid/actor.rb:357:in `handle_exit_event'
/Users/mourad/.rvm/gems/ruby-2.0.0-p451@ifg/gems/celluloid-0.15.2/lib/celluloid/actor.rb:340:in `block in handle_system_event'
/Users/mourad/.rvm/gems/ruby-2.0.0-p451@ifg/gems/celluloid-0.15.2/lib/celluloid/actor.rb:416:in `block in task'
/Users/mourad/.rvm/gems/ruby-2.0.0-p451@ifg/gems/celluloid-0.15.2/lib/celluloid/tasks.rb:55:in `block in initialize'
/Users/mourad/.rvm/gems/ruby-2.0.0-p451@ifg/gems/celluloid-0.15.2/lib/celluloid/tasks/task_fiber.rb:13:in `block in create'
2014-04-07T20:41:21Z 9325 T

Our sidekiq configuration looks like:


---
:queues:
  - default
  - mailer
  - batch

:limits:
  - batch 1 

We use sidekiq in combination with the following gems

  • sidekiq (~> 2.17.0)
  • sidekiq-failures (~> 0.3)
  • sidekiq-limit_fetch (~> 2.2.3)
  • sidetiq (~> 0.5.0)

Sidekiq doesn't seem to listen to limits

Hello!

I've been trying to use the limits but for some reason Sidekiq doesn't listen to the limits imposed.

Here's the Sidekiq config file, which says that there should be at most two parse jobs running; however, I currently have 9 parse jobs running in the web interface, and can't see a reason why.

The log shows the limits as well:

2013-04-03T06:58:52Z 9012 TID-tx6os DEBUG: {:queues=>["deletegenotype", "deletegenotype", "deletegenotype", "deletegenotype", "deletegenotype", "deletegenotype", "deletegenotype", "deletegenotype", "deletegenotype", "deletegenotype", "mailnewgenotype", "mailnewgenotype", "mailnewgenotype", "mailnewgenotype", "mailnewgenotype", "mailnewgenotype", "mailnewgenotype", "mailnewgenotype", "mailnewgenotype", "mailnewgenotype", "zipgenotyping", "zipgenotyping", "zipgenotyping", "zipgenotyping", "zipgenotyping", "zipgenotyping", "zipgenotyping", "zipgenotyping", "zipgenotyping", "zipgenotyping", "zipfulldata", "zipfulldata", "zipfulldata", "zipfulldata", "zipfulldata", "zipfulldata", "zipfulldata", "zipfulldata", "zipfulldata", "zipfulldata", "preparse", "preparse", "preparse", "parse", "parse", "parse", "fitbit", "frequency", "genomegov", "mendeley_details", "mendeley", "pgp", "plos_details", "plos", "snpedia", "fixphenotypes"], :concurrency=>10, :require=>".", :environment=>"production", :timeout=>8, :profile=>false, :verbose=>true, :pidfile=>"/tmp/sidekiq.pid", :logfile=>"./log/sidekiq.log", :global=>true, :limits=>{"recommendvariations"=>1, "recommendphenotypes"=>1, "preparse"=>2, "parse"=>2, "zipgenotyping"=>1, "zipfulldata"=>1, "fitbit"=>3, "frequency"=>10, "genomegov"=>1, "mailnewgenotype"=>1, "mendeley_details"=>1, "mendeley"=>1, "pgp"=>1, "plos_details"=>1, "plos"=>1, "snpedia"=>1, "fixphenotypes"=>1}, :strict=>false, :config_file=>"config/sidekiq.yml", :tag=>"snpr"}

The commands from the gem don't work either:

Sidekiq::Queue['parse'].busy

results in

NoMethodError: undefined method `[]' for Sidekiq::Queue:Class

I'm running Rails 3.2.13 on Ruby 1.9.2-p290.

Thanks!

Edit: I'm starting the workers using

bundle exec sidekiq -e production -C config/sidekiq.yml

Queue stops processing

sidekiq (3.3.1) & sidekiq-limit_fetch (2.3.0)

I have the following queues:

:queues:
  - [default]
  - [recurring]
  - [carrierwave]
  - [accounting]
  - [itunes]
:limits:
    accounting: 1
    itunes: 1

The itunes queue runs every 24hrs and processes around 200 individual jobs.

Accounting is constantly processing jobs, but it stops processing after the itunes queue runs; jobs in accounting are then enqueued but never processed. Restarting sidekiq gets everything going again.

Have you seen this behaviour before? Have I configured something wrong?

Queue with limit 1 slow to poll

I have a queue with a limit of 1 thread that is needed for in-line processing.

Sidekiq::Queue['inline'].limit = 1

The execution of each of the jobs takes less than a second, but due to the polling frequency the queue is very slow to empty out. It will take a minute or so to clear a queue of 50 jobs.

The wait time in-between each execution is killing me.
Is there any way to adjust this time?

Question about how limits work

I'm trying to understand how this gem interacts with Sidekiq's own settings. Here's an example config and what I think it would mean.

Assume I have two machines, each running a Sidekiq process.

# sidekiq.yml
# Native Sidekiq setting.
# Each of my 2 processes will have 4 worker threads, for a total of 8.
:concurrency: 4
# These limits are global, across all processes/machines
:limits:
  # Only 1 worker at a time on this queue, period, no matter how many
  # processes/machines we have
  :queue_1: 1
# Limits per process
:process_limits:
  #2 workers per process on this queue; with 2 processes, 4 workers
  :queue_2: 2
  # Theoretically allows 6 x 2 = 12 workers, but concurrency setting above
  # means only 8 will be available (minus those working other queues)
  :queue_3: 6

Is this correct?

limit not being followed

I've tried to cleanly reproduce this bug on the demo limit_fetch and cannot, so I'm hoping someone can help me here:

Loading development environment (Rails 4.2.3)
irb(main):001:0>  Sidekiq::Queue['mws'].limit
=> 1
irb(main):002:0>  Sidekiq::Queue['mws'].busy
=> 0
irb(main):003:0>  Sidekiq::Queue['mws'].explain
=> "Current sidekiq process: ac408564-7643-415f-a93f-89e934ff9028\n\n  All processes:\nf3795e71-0d3c-4517-abd7-02c73c6a928f\n\n  Stale processes:\nf3795e71-0d3c-4517-abd7-02c73c6a928f\n\n  Locked queue processes:\n\n\n  Busy queue processes:\n\n\n  Limit:\n1\n\n  Process limit:\nnil\n\n  Blocking:\n\n"

Sidekiq

But sidekiq is currently running two jobs from the mws queue at the same time:

2015-07-31T17:52:23.080Z 92753 TID-ovaymmcic INFO: Running in ruby 2.0.0p247 (2013-06-27 revision 41674) [x86_64-darwin13.0.0]
2015-07-31T17:52:23.080Z 92753 TID-ovaymmcic INFO: See LICENSE and the LGPL-3.0 for licensing details.
2015-07-31T17:52:23.081Z 92753 TID-ovaymmcic INFO: Upgrade to Sidekiq Pro for more features and support: http://sidekiq.org/pro
2015-07-31T17:52:23.081Z 92753 TID-ovaymmcic INFO: Booting Sidekiq 3.4.0 with redis options {:url=>nil}
2015-07-31T17:52:23.081Z 92753 TID-ovaymmcic INFO: Starting processing, hit Ctrl-C to stop
2015-07-31T17:52:23.236Z 92753 TID-ovb02evzc DEBUG: {:queues=>["default", "default", "default", "default", "default", "shipments", "shipments", "shipments", "shipments", "shipments", "shipments", "shipments", "shipments", "shipments", "product_lookup", "product_lookup", "product_lookup", "product_lookup", "product_lookup", "product_lookup", "product_lookup", "mws", "mws", "mws", "mws", "mws", "mws"], :labels=>[], :concurrency=>25, :require=>".", :environment=>nil, :timeout=>30, :poll_interval_average=>nil, :average_scheduled_poll_interval=>15, :error_handlers=>[#<Sidekiq::ExceptionHandler::Logger:0x007f9eb3bbccf0>, #<Proc:0x007f9ec09c37e8@/Users/slatem/.rbenv/versions/2.0.0-p247/lib/ruby/gems/2.0.0/gems/rollbar-1.3.0/lib/rollbar/sidekiq.rb:29>], :lifecycle_events=>{:startup=>[], :quiet=>[], :shutdown=>[]}, :dead_max_jobs=>10000, :dead_timeout_in_seconds=>15552000, :verbose=>true, :limits=>{"mws"=>1}, :test=>{:pidfile=>"tmp/pids/sidekiq.pid"}, :development=>{:pidfile=>"tmp/pids/sidekiq.pid"}, :staging=>{:pidfile=>"/var/app/containerfiles/pids/sidekiq.pid"}, :production=>{:pidfile=>"/var/app/containerfiles/pids/sidekiq.pid"}, :strict=>false, :config_file=>"config/sidekiq.yml", :fetch=>Sidekiq::RateLimiter::Fetch, :tag=>"boxfox"}
2015-07-31T17:52:23.816Z 92753 TID-ovazcqlv0 BulkMwsJob JID-72202a77fdaf4388885156af:1 INFO: start
2015-07-31T17:52:23.818Z 92753 TID-ovaz2wgrs BulkMwsJob JID-556fe53b8d9e0dbf0d97199e:1 INFO: start

Sidekiq.yml

:verbose: true
:concurrency: 25
# Set timeout to 8 on Heroku, longer if you manage your own systems.
:timeout: 30
:queues:
  - [default, 5]
  - [shipments, 9]
  - [product_lookup, 7]
  - [mws, 6]
:limits:
  mws: 1
:test:  
  :pidfile: tmp/pids/sidekiq.pid
:development:
  :pidfile: tmp/pids/sidekiq.pid
:staging:
  :pidfile: /var/app/containerfiles/pids/sidekiq.pid
:production:
  :pidfile: /var/app/containerfiles/pids/sidekiq.pid

Using the following add ons in Rails 4.2.3

sidekiq (3.4.0)
sidekiq-limit_fetch (2.4.2)
  sidekiq (>= 2.6.5, < 4.0)
sidekiq-rate-limiter (0.1.0)
  sidekiq (>= 2.0, < 4.0)
sidekiq-status (0.5.4)
  sidekiq (>= 2.7)
sidekiq-superworker (1.2.0)
  sidekiq (>= 2.1.0)

Any help is appreciated.

Redis Connection Error

Hi there,

One thing we're seeing is that Sidekiq (seemingly) fails to connect to Redis. Redis is on a remote server (with the entire machine to itself; it definitely has plenty of resources), but here's the interesting part: disabling limit_fetch makes the errors go away (and it happily processes ~25 jobs at once). When we enable limit_fetch (and the queue is only processing 3 jobs at a time), the errors come streaming in.

It reports that it was unable to connect to Redis for a portion of time (I've found the section in Sidekiq that does this), but I can't see that as being the real issue (since it goes away when we take limit_fetch out of the equation).

Could there be an issue where the blocking portion of limit_fetch is somehow making Sidekiq think that Redis took too long to respond?

If I can provide any other details, please let me know.

Thanks!
Tye

Here is an example error that just occurred:

2014-03-03T18:37:51Z 16416 TID-ovhv5x16c ERROR: Error fetching message: Connection timed out
2014-03-03T18:37:51Z 16416 TID-ovhv5x16c ERROR: /Library/Ruby/Gems/2.0.0/gems/redis-3.0.7/lib/redis/client.rb:222:in `rescue in io'
2014-03-03T18:37:51Z 16416 TID-ovhv5x16c ERROR: /Library/Ruby/Gems/2.0.0/gems/redis-3.0.7/lib/redis/client.rb:220:in `io'
2014-03-03T18:37:51Z 16416 TID-ovhv5x16c ERROR: /Library/Ruby/Gems/2.0.0/gems/redis-3.0.7/lib/redis/client.rb:228:in `read'
2014-03-03T18:37:51Z 16416 TID-ovhv5x16c ERROR: /Library/Ruby/Gems/2.0.0/gems/redis-3.0.7/lib/redis/client.rb:96:in `block in call'
2014-03-03T18:37:51Z 16416 TID-ovhv5x16c ERROR: /Library/Ruby/Gems/2.0.0/gems/redis-3.0.7/lib/redis/client.rb:201:in `block (2 levels) in process'
2014-03-03T18:37:51Z 16416 TID-ovhv5x16c ERROR: /Library/Ruby/Gems/2.0.0/gems/redis-3.0.7/lib/redis/client.rb:309:in `ensure_connected'
2014-03-03T18:37:51Z 16416 TID-ovhv5x16c ERROR: /Library/Ruby/Gems/2.0.0/gems/redis-3.0.7/lib/redis/client.rb:191:in `block in process'
2014-03-03T18:37:51Z 16416 TID-ovhv5x16c ERROR: /Library/Ruby/Gems/2.0.0/gems/redis-3.0.7/lib/redis/client.rb:270:in `logging'
2014-03-03T18:37:51Z 16416 TID-ovhv5x16c ERROR: /Library/Ruby/Gems/2.0.0/gems/redis-3.0.7/lib/redis/client.rb:190:in `process'
2014-03-03T18:37:51Z 16416 TID-ovhv5x16c ERROR: /Library/Ruby/Gems/2.0.0/gems/redis-3.0.7/lib/redis/client.rb:96:in `call'
2014-03-03T18:37:51Z 16416 TID-ovhv5x16c ERROR: /Library/Ruby/Gems/2.0.0/gems/redis-3.0.7/lib/redis/client.rb:179:in `block in call_with_timeout'
2014-03-03T18:37:51Z 16416 TID-ovhv5x16c ERROR: /Library/Ruby/Gems/2.0.0/gems/redis-3.0.7/lib/redis/client.rb:244:in `with_socket_timeout'
2014-03-03T18:37:51Z 16416 TID-ovhv5x16c ERROR: /Library/Ruby/Gems/2.0.0/gems/redis-3.0.7/lib/redis/client.rb:178:in `call_with_timeout'
2014-03-03T18:37:51Z 16416 TID-ovhv5x16c ERROR: /Library/Ruby/Gems/2.0.0/gems/redis-3.0.7/lib/redis.rb:1038:in `block in _bpop'
2014-03-03T18:37:51Z 16416 TID-ovhv5x16c ERROR: /Library/Ruby/Gems/2.0.0/gems/redis-3.0.7/lib/redis.rb:37:in `block in synchronize'
2014-03-03T18:37:51Z 16416 TID-ovhv5x16c ERROR: /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/monitor.rb:211:in `mon_synchronize'
2014-03-03T18:37:51Z 16416 TID-ovhv5x16c ERROR: /Library/Ruby/Gems/2.0.0/gems/redis-3.0.7/lib/redis.rb:37:in `synchronize'
2014-03-03T18:37:51Z 16416 TID-ovhv5x16c ERROR: /Library/Ruby/Gems/2.0.0/gems/redis-3.0.7/lib/redis.rb:1035:in `_bpop'
2014-03-03T18:37:51Z 16416 TID-ovhv5x16c ERROR: /Library/Ruby/Gems/2.0.0/gems/redis-3.0.7/lib/redis.rb:1080:in `brpop'
2014-03-03T18:37:51Z 16416 TID-ovhv5x16c ERROR: /Library/Ruby/Gems/2.0.0/gems/sidekiq-limit_fetch-2.1.3/lib/sidekiq/limit_fetch.rb:42:in `block in redis_brpop'
2014-03-03T18:37:51Z 16416 TID-ovhv5x16c ERROR: /Library/Ruby/Gems/2.0.0/gems/sidekiq-limit_fetch-2.1.3/lib/sidekiq/limit_fetch/redis.rb:10:in `block (2 levels) in nonblocking_redis'
2014-03-03T18:37:51Z 16416 TID-ovhv5x16c ERROR: /Library/Ruby/Gems/2.0.0/gems/celluloid-0.15.2/lib/celluloid/calls.rb:25:in `call'
2014-03-03T18:37:51Z 16416 TID-ovhv5x16c ERROR: /Library/Ruby/Gems/2.0.0/gems/celluloid-0.15.2/lib/celluloid/calls.rb:25:in `public_send'
2014-03-03T18:37:51Z 16416 TID-ovhv5x16c ERROR: /Library/Ruby/Gems/2.0.0/gems/celluloid-0.15.2/lib/celluloid/calls.rb:25:in `dispatch'
2014-03-03T18:37:51Z 16416 TID-ovhv5x16c ERROR: /Library/Ruby/Gems/2.0.0/gems/celluloid-0.15.2/lib/celluloid/calls.rb:67:in `dispatch'
2014-03-03T18:37:51Z 16416 TID-ovhv5x16c ERROR: /Library/Ruby/Gems/2.0.0/gems/celluloid-0.15.2/lib/celluloid/future.rb:14:in `block in new'
2014-03-03T18:37:51Z 16416 TID-ovhv5x16c ERROR: /Library/Ruby/Gems/2.0.0/gems/celluloid-0.15.2/lib/celluloid/thread_handle.rb:13:in `block in initialize'
2014-03-03T18:37:51Z 16416 TID-ovhv5x16c ERROR: /Library/Ruby/Gems/2.0.0/gems/celluloid-0.15.2/lib/celluloid/internal_pool.rb:100:in `call'
2014-03-03T18:37:51Z 16416 TID-ovhv5x16c ERROR: /Library/Ruby/Gems/2.0.0/gems/celluloid-0.15.2/lib/celluloid/internal_pool.rb:100:in `block in create'
2014-03-03T18:37:51Z 16416 TID-ovhv5x16c ERROR: (celluloid):0:in `remote procedure call'
2014-03-03T18:37:51Z 16416 TID-ovhv5x16c ERROR: /Library/Ruby/Gems/2.0.0/gems/celluloid-0.15.2/lib/celluloid/future.rb:104:in `value'
2014-03-03T18:37:51Z 16416 TID-ovhv5x16c ERROR: /Library/Ruby/Gems/2.0.0/gems/celluloid-0.15.2/lib/celluloid/future.rb:68:in `value'
2014-03-03T18:37:51Z 16416 TID-ovhv5x16c ERROR: /Library/Ruby/Gems/2.0.0/gems/sidekiq-limit_fetch-2.1.3/lib/sidekiq/limit_fetch/redis.rb:10:in `block in nonblocking_redis'
2014-03-03T18:37:51Z 16416 TID-ovhv5x16c ERROR: /Library/Ruby/Gems/2.0.0/gems/sidekiq-limit_fetch-2.1.3/lib/sidekiq/limit_fetch/redis.rb:17:in `block in redis'
2014-03-03T18:37:51Z 16416 TID-ovhv5x16c ERROR: /Library/Ruby/Gems/2.0.0/gems/connection_pool-1.2.0/lib/connection_pool.rb:55:in `with'
2014-03-03T18:37:51Z 16416 TID-ovhv5x16c ERROR: /Library/Ruby/Gems/2.0.0/gems/sidekiq-2.17.4/lib/sidekiq.rb:67:in `redis'
2014-03-03T18:37:51Z 16416 TID-ovhv5x16c ERROR: /Library/Ruby/Gems/2.0.0/gems/sidekiq-limit_fetch-2.1.3/lib/sidekiq/limit_fetch/redis.rb:17:in `redis'
2014-03-03T18:37:51Z 16416 TID-ovhv5x16c ERROR: /Library/Ruby/Gems/2.0.0/gems/sidekiq-limit_fetch-2.1.3/lib/sidekiq/limit_fetch/redis.rb:8:in `nonblocking_redis'
2014-03-03T18:37:51Z 16416 TID-ovhv5x16c ERROR: /Library/Ruby/Gems/2.0.0/gems/sidekiq-limit_fetch-2.1.3/lib/sidekiq/limit_fetch.rb:45:in `redis_brpop'
2014-03-03T18:37:51Z 16416 TID-ovhv5x16c ERROR: /Library/Ruby/Gems/2.0.0/gems/sidekiq-limit_fetch-2.1.3/lib/sidekiq/limit_fetch.rb:35:in `fetch_message'
2014-03-03T18:37:51Z 16416 TID-ovhv5x16c ERROR: /Library/Ruby/Gems/2.0.0/gems/sidekiq-limit_fetch-2.1.3/lib/sidekiq/limit_fetch.rb:28:in `retrieve_work'
2014-03-03T18:37:51Z 16416 TID-ovhv5x16c ERROR: /Library/Ruby/Gems/2.0.0/gems/sidekiq-2.17.4/lib/sidekiq/fetch.rb:34:in `block in fetch'
2014-03-03T18:37:51Z 16416 TID-ovhv5x16c ERROR: /Library/Ruby/Gems/2.0.0/gems/sidekiq-2.17.4/lib/sidekiq/util.rb:15:in `watchdog'
2014-03-03T18:37:51Z 16416 TID-ovhv5x16c ERROR: /Library/Ruby/Gems/2.0.0/gems/sidekiq-2.17.4/lib/sidekiq/fetch.rb:30:in `fetch'
2014-03-03T18:37:51Z 16416 TID-ovhv5x16c ERROR: /Library/Ruby/Gems/2.0.0/gems/celluloid-0.15.2/lib/celluloid/calls.rb:25:in `public_send'
2014-03-03T18:37:51Z 16416 TID-ovhv5x16c ERROR: /Library/Ruby/Gems/2.0.0/gems/celluloid-0.15.2/lib/celluloid/calls.rb:25:in `dispatch'
2014-03-03T18:37:51Z 16416 TID-ovhv5x16c ERROR: /Library/Ruby/Gems/2.0.0/gems/celluloid-0.15.2/lib/celluloid/calls.rb:122:in `dispatch'
2014-03-03T18:37:51Z 16416 TID-ovhv5x16c ERROR: /Library/Ruby/Gems/2.0.0/gems/celluloid-0.15.2/lib/celluloid/actor.rb:322:in `block in handle_message'
2014-03-03T18:37:51Z 16416 TID-ovhv5x16c ERROR: /Library/Ruby/Gems/2.0.0/gems/celluloid-0.15.2/lib/celluloid/actor.rb:416:in `block in task'
2014-03-03T18:37:51Z 16416 TID-ovhv5x16c ERROR: /Library/Ruby/Gems/2.0.0/gems/celluloid-0.15.2/lib/celluloid/tasks.rb:55:in `block in initialize'
2014-03-03T18:37:51Z 16416 TID-ovhv5x16c ERROR: /Library/Ruby/Gems/2.0.0/gems/celluloid-0.15.2/lib/celluloid/tasks/task_fiber.rb:13:in `block in create'
2014-03-03T18:37:51Z 16416 TID-ovhv5x16c INFO: Redis is online, 0.13068318367004395 sec downtime

Description => dependencies

It would be great if you added a note about the dependency on Redis 2.6 to the description.

'EVAL and EVALSHA are used to evaluate scripts using the Lua interpreter built into Redis starting from version 2.6.0.'

I spent a lot of time trying to understand what the problem was when upgrading from version 1.4 to 1.7.

Limit for multiple Sidekiq instances

Hi. First I want to say thank you, because your work is really helping me do my job.

Now I want to use multiple Sidekiq processes to use more cores of my CPU. I want to know what happens if I set a queue 'test' with limit 1 and start 2 sidekiq instances that use that same queue.

I think each instance will have its own limit, so 2 'test' workers can execute at the same time, right?

If I start that 'test' queue in only one Sidekiq instance, will it work as expected? One instance processes the queue and the other will not, right? Is the limit going to be respected in this case?

Limit being ignored

sidekiq 3.3.2
sidekiq-limit_fetch 2.3.0

I put Sidekiq::Queue['some_queue'].limit = 1 in config/initializers/sidekiq.rb. Then I start a rails console (sidekiq is not started) and do this in the console:

2.1.2 :001 > Sidekiq::Queue.new('future_trades').limit
 => 1

Then I start sidekiq, and after sidekiq has successfully started I get

2.1.2 :001 > Sidekiq::Queue.new('future_trades').limit
 => nil

And limit is being totally ignored

Limits not being picked up from sidekiq.yml file?

Here's my yaml file:

:verbose: false
:concurrency: 4
:queues: 
  - soprano
  - sphinx4
:limits:
  soprano: 1
  sphinx4: 1

I have purposely set the limit to 1 worker just to test. When I queue jobs for either soprano or sphinx4, they use all available workers, up to 4. What am I missing here?

Possible performance issue calling `keys`

We were seeing some connection timeouts and problems with redis. We tracked it down to some slow queries, and the keys call with processor:* was the slowest:

https://github.com/brainopia/sidekiq-limit_fetch/blob/f1e83a3384f42867b42ff8a8667fe47dc9b6124d/lib/sidekiq/limit_fetch/global/monitor.rb#L35

We have 3,040,206 keys in redis, so this command was taking 0.6s which is apparently an eternity in redis that will cause other connections to fail during that time.

I'm not familiar with what that method does, but wanted to report it so we could brainstorm refactoring it. Thanks!

CPU Issue

sidekiq-limit_fetch 3.0.1 (with sidekiq 4) is throwing my CPU into overdrive - to 105%.

sidekiq 4 without sidekiq-limit_fetch doesn't do this, and sidekiq 3 with sidekiq-limit_fetch 2.4.2 works fine as well. It's only 4 + limit_fetch.

I'm running El Capitan. Let me know if there is anything I can test for you to help with this. I love this gem! Thanks

Not processing jobs enqueued from another job

My problem is that some jobs are enqueued and never processed. I am able to reproduce this problem with the following setup:

:concurrency: 3
:queues:
  - a_queue
  - b_queue
  - c_queue
:limits:
    a_queue: 1
    b_queue: 1
    c_queue: 1

and jobs:

class AJob
  include Sidekiq::Worker

  sidekiq_options :queue => :a_queue, :retry => false

  def perform
    puts 'PERFORMING JOB A'
    sleep(rand(10))
    BJob.perform_async
    puts 'FINISHED JOB A'
  end
end
class BJob
  include Sidekiq::Worker

  sidekiq_options :queue => :b_queue, :retry => false

  def perform
    puts 'PERFORMING JOB B'
    sleep(rand(10))
    CJob.perform_async
    puts 'FINISHED JOB B'
  end
end
class CJob
  include Sidekiq::Worker

  sidekiq_options :queue => :c_queue, :retry => false

  def perform
    puts 'PERFORMING JOB C'
    sleep(rand(10))
    puts 'FINISHED JOB C'
  end
end

I run:
AJob.perform_async

and sidekiq log:

2015-02-10T14:14:54.277Z 25577 TID-owz3fuf4w AJob JID-deab0799a7c2563a6e2c7f93 INFO: start
PERFORMING JOB A
FINISHED JOB A
2015-02-10T14:14:56.282Z 25577 TID-owz3fuf4w AJob JID-deab0799a7c2563a6e2c7f93 INFO: done: 2.005 sec

BJob is enqueued but never processed

Satiate my curiosity

https://github.com/brainopia/sidekiq-limit_fetch/blob/master/lib/sidekiq/limit_fetch.rb#L46-L47

If ANY of this Sidekiq instance's currently busy queues do NOT match the list of queues we're checking, then it does "non_blocking_redis", which also doesn't make sense, since non_blocking_redis waits for the value to return anyway (it uses a Celluloid future, but calls value on the future, thus blocking until it returns).

Assuming I'm reading that line of code correctly, under what circumstance does one ever use "if any one item from this set does not match X" logic?

task processing is slow with many limit=1 queues

In my case I have 80 queues, each with a limit of 1. Tasks come in at ~1 per second, but only one worker is started every ~5 seconds, and the number of busy threads is always 1.

the log is as below:

2014-12-26T11:12:52.924Z 15311 TID-376 Mario::Worker::AddSenseWorker JID-adbfe1d933f6ba9a588f9eb0 INFO: done: 5.079 sec
2014-12-26T11:12:52.948Z 15311 TID-37e Mario::Worker::AddSenseWorker JID-e72ff569c62564681b24be40 INFO: start
2014-12-26T11:12:57.982Z 15311 TID-37e Mario::Worker::AddSenseWorker JID-e72ff569c62564681b24be40 INFO: done: 5.034 sec
2014-12-26T11:12:57.997Z 15311 TID-37m Mario::Worker::AddSenseWorker JID-1bb7985cdbdf229d4f2ec4b2 INFO: start
2014-12-26T11:13:03.104Z 15311 TID-37m Mario::Worker::AddSenseWorker JID-1bb7985cdbdf229d4f2ec4b2 INFO: done: 5.107 sec
2014-12-26T11:13:03.116Z 15311 TID-37u Mario::Worker::AddSenseWorker JID-8843904864bd2e76506c2e27 INFO: start
2014-12-26T11:13:07.893Z 15311 TID-37u Mario::Worker::AddSenseWorker JID-8843904864bd2e76506c2e27 INFO: done: 4.777 sec
2014-12-26T11:13:07.906Z 15311 TID-382 Mario::Worker::AddSenseWorker JID-b58fe9f4762d6790cea04fa4 INFO: start
2014-12-26T11:13:12.662Z 15311 TID-382 Mario::Worker::AddSenseWorker JID-b58fe9f4762d6790cea04fa4 INFO: done: 4.756 sec
2014-12-26T11:13:12.677Z 15311 TID-38a Mario::Worker::AddSenseWorker JID-3bc2c635f96d8e59bd93b585 INFO: start
2014-12-26T11:13:17.276Z 15311 TID-38a Mario::Worker::AddSenseWorker JID-3bc2c635f96d8e59bd93b585 INFO: done: 4.599 sec
2014-12-26T11:13:17.306Z 15311 TID-38i Mario::Worker::AddSenseWorker JID-cecae61fe7c37566f8b5033b INFO: start
2014-12-26T11:13:22.633Z 15311 TID-38i Mario::Worker::AddSenseWorker JID-cecae61fe7c37566f8b5033b INFO: done: 5.327 sec
2014-12-26T11:13:22.647Z 15311 TID-38q Mario::Worker::AddSenseWorker JID-6d16c592723e238181e732a4 INFO: start
2014-12-26T11:13:27.320Z 15311 TID-38q Mario::Worker::AddSenseWorker JID-6d16c592723e238181e732a4 INFO: done: 4.673 sec
2014-12-26T11:13:27.339Z 15311 TID-38y Mario::Worker::AddSenseWorker JID-cc878386f1035c7df58a634c INFO: start
2014-12-26T11:13:32.175Z 15311 TID-38y Mario::Worker::AddSenseWorker JID-cc878386f1035c7df58a634c INFO: done: 4.836 sec
2014-12-26T11:13:32.194Z 15311 TID-396 Mario::Worker::AddSenseWorker JID-4a5c6fa03ae4f8088c888ff7 INFO: start
2014-12-26T11:13:37.585Z 15311 TID-396 Mario::Worker::AddSenseWorker JID-4a5c6fa03ae4f8088c888ff7 INFO: done: 5.391 sec
2014-12-26T11:13:37.598Z 15311 TID-39e Mario::Worker::AddSenseWorker JID-75ff5639f7cb101140f458c6 INFO: start

I will dig into the source code of this plugin and report back soon.

Limits are being set but not followed

I have no idea why the limits aren't being followed, but no matter how I specify them (in sidekiq.yml or in the initializer using Ruby), they aren't actually applied. Each of the queues shows the right limit when I call Sidekiq::Queue['name'].limit, but the limit is totally ignored. I'm running the latest versions of sidekiq-limit_fetch and sidekiq. I've tried everything that I can think of to debug. Any ideas about how I could track down the issue?

Incompatibility with sidekiq pro's `reliable_fetch`

Hey guys,

First of all, amazing work you did with this gem. It's helping us avoid deadlocks caused by massive concurrency in particular jobs in the videogame we're building. We've already processed 70M jobs in the last 4 months and it has worked perfectly.

We purchased a license for Sidekiq Pro though, and the reliable_fetch functionality seems to be incompatible: the gem stopped showing the queues' active workers and limiting their concurrency.

So, well, just a heads up in case anyone has the same issue. I kind of think this should be built into sidekiq, though. Has @mperham shown any interest?

Jobs created with push not processing?

If I create a job with this:

StripeDataWorker.perform_async(10, "w", "BalanceTransaction")

It processes correctly. If I create that same job with this:

Sidekiq::Client.push('class' => StripeDataWorker, 'args' => [10, "w", "BalanceTransaction"], 'queue' => 'w')

It never gets processed.

I posted this issue over in the Sidekiq repo and @mperham mentioned this:

perform_async and push use the same basic Sidekiq::Client API so that is very, very odd. Perhaps someone is doing some evil monkeypatching?

The only thing I use that would be modifying how Sidekiq works is sidekiq-limit_fetch.

So is it possible this is causing the issue?

Advanced blocked queues not working as expected.

Either my understanding of advanced blocked queues is not correct, or I've found a bug.

config/sidekiq.yml

:limits:
  :a: 2
  :b: 2
  :c: 1
:queues:
  - a
  - b
  - c
:blocking:
  - [a, b]

If I queue task b, then c, c won't start until b is finished.

If however, I queue a, then c, c starts before a is finished.

Any ideas what could be happening here?

PS. Awesome gem 👍

Set dynamic queue priority

Currently, dynamic queues have the lowest priority. Is it possible to set a specific priority for all dynamic queues? My use case needs the dynamic queues to have the highest priority.

License missing from gemspec

RubyGems.org doesn't report a license for your gem. This is because it is not specified in the gemspec of your last release.

via e.g.

spec.license = 'MIT'
# or
spec.licenses = ['MIT', 'GPL-2']

Including a license in your gemspec is an easy way for rubygems.org and other tools to check how your gem is licensed. As you can imagine, scanning your repository for a LICENSE file or parsing the README, and then attempting to identify the license or licenses is much more difficult and more error prone. So, even for projects that already specify a license, including a license in your gemspec is a good practice. See, for example, how rubygems.org uses the gemspec to display the rails gem license.

There is even a License Finder gem to help companies/individuals ensure all gems they use meet their licensing needs. This tool depends on license information being available in the gemspec. This is an important enough issue that even Bundler now generates gems with a default 'MIT' license.

I hope you'll consider specifying a license in your gemspec. If not, please just close the issue with a nice message. In either case, I'll follow up. Thanks for your time!

Appendix:

If you need help choosing a license (sorry, I haven't checked your readme or looked for a license file), GitHub has created a license picker tool. Code without a license specified defaults to 'All rights reserved'-- denying others all rights to use of the code.
Here's a list of the license names I've found and their frequencies

p.s. In case you're wondering how I found you and why I made this issue, it's because I'm collecting stats on gems (I was originally looking for download data) and decided to collect license metadata,too, and make issues for gemspecs not specifying a license as a public service :). See the previous link or my blog post about this project for more information.

evalsha command is locking our Redis

We are noticing elevated latency on one of our Redis servers.

We narrowed it down to this evalsha command that seems to be made by this gem.

"evalsha" "7b91ed9f4cba40689cea7172d1fd3e08b2efd8c9" "0" "sidekiq:" "12476987-d704-408c-9a1a-b2c215b0545a" "foo" "bar" "baz" ....

This command seems to take about 4 seconds. The same code on production works fine so I don't know what to think.

Any idea on how to debug this?

Setting limit in configure_server block does not work in newer versions

Hi. First I have to say I am using your gem with success for more than a year. Thanks for everything.

So, I updated my sidekiq and sidekiq-limit_fetch versions:

gem 'sidekiq',  '3.2.6'
gem 'sidekiq-limit_fetch', '2.2.6'

and the limit was not getting set inside configure_server block with:

Sidekiq::Queue['allocation'].limit = 1

But it works if I set the limit in sidekiq.yml file.

Sidekiq::Queue['name'].busy != number of active worker

I have a strange situation. I'm not 100% sure this gem is the reason, but it looks like we started to see this effect after we began using it.

[96] pry(main)> Sidekiq::Queue['shop_worker'].limit
=> 8
[97] pry(main)> Sidekiq::Queue['shop_worker'].busy
=> 8
[98] pry(main)> Sidekiq::Client.registered_workers.size
=> 4
[99] pry(main)> Sidekiq.redis do |conn| conn.smembers('workers') end.size
=> 4
[100] pry(main)> Sidekiq::Queue.new("shop_worker").size
=> 444

If I understand right, for the shop_worker queue I have a limit of 8 workers, and Sidekiq::Queue['shop_worker'].busy shows that I have 8 busy workers. But if I check the number of workers from the web interface or from the console, there are only 4 active workers. And the queue is not empty.

I can not (yet...) provide a way to reproduce this problem, but maybe you can tell me:

  1. Does this look like a bug, or do I just not understand how Sidekiq and your gem work?
  2. Any suggestions on where to start debugging this problem?

my config file

:verbose: false
:pidfile: ./tmp/pids/sidekiq.pid
:concurrency: 10
:queues:
  - [locker_worker, 10]
  - [shop_worker, 1]
:limits:
  shop_worker: 8
  locker_worker: 2

we use

    sidekiq (2.13.1)
      celluloid (>= 0.14.1)
      connection_pool (>= 1.0.0)
      json
      redis (>= 3.0)
      redis-namespace
    sidekiq-limit_fetch (2.1.3)
      sidekiq (>= 2.6.5, < 3)
    sidekiq-unique-jobs (2.7.1)
      mock_redis
      sidekiq (~> 2.6)

Problem: stack level too deep

Hello.
I have been fighting with this issue for a long time.
When using your gem, I get the exception "stack level too deep" after some time, or after some jobs are done (circa one hour / 50 jobs). Without your gem it is absolutely fine.
The line of code where this exception is raised is random. When I restart sidekiq, the next time it is on another line (but the same file).
That file is just a wrapper around the https://github.com/intridea/oauth2 gem. Sometimes it is in the initialize method, then in some case statement, then in an if... Pretty random.

My setup is
System: Debian 7.2
Redis 2.6.14
Ruby 2.1.3
Rails 4.1.6
sidekiq (3.2.6)
sidekiq-limit_fetch (2.2.6)

Any ideas, or any other information I should provide?

Sidekiq 2.14.0 + limit_fetch do not fetch jobs anymore.

I just noticed that my sidekiq daemons were not fetching jobs anymore.
Removing sidekiq-limit_fetch solved the problem. Maybe due to recent changes in Sidekiq?

Sorry, but I'm just reporting it; I haven't had time to investigate further yet.

Sidekiq 3.2 ?

Your gemspec specifies < 3.2.

Do you plan to update that soon?

Thanks.

Per Process Limits Too Limited?

Hey there,

I really appreciate everything you've done with this gem, it's been a big help for our project.

Originally, we had set up the limits per queue (by specifying them with ":limits:" in the sidekiq yaml file) and everything was working great. What we realized we really needed were per-process limits.

For example, we have a worker ("HardWorker") that each machine should only run one of at a time. So for that worker, within its queue ("HardWork"), the limit should be 1. If we have a second machine subscribed to the same queue ("HardWork") though, we had hoped that would mean 2 "HardWorker" jobs from "HardWork" would be worked on at any given time.

We tried this by simply changing the ":limits:" section in the sidekiq yaml to ":process_limits:". Everything is still limiting, but even though the example queue has ~100 jobs in it, only one machine or the other is working on a job at any given time.

Example Sidekiq YAML:

:queues:
  - HardWork
:process_limits:
  HardWork: 1

Example Worker:

class HardWorker
  include Sidekiq::Worker
  sidekiq_options :queue => "HardWork"

  def perform
    # Do hard things
  end
end

Have we missed a vital step in the configuration for this mode? Could it be possible that the "old" overall queue limits that we had before are still being enforced?

(I saw this issue which is where it was originally implemented: #8 )

Thanks in advance for your help!
Tye

Sidekiq suddenly quitting and not processing any enqueued jobs

Hello,
Using:

sidekiq-failures (0.4.4)
sidekiq-limit_fetch (2.4.2)
sidekiq (>= 2.6.5, < 4.0)

I am using Limit Fetch and it's great, but I have two recurring issues:

  1. Suddenly all queues are stopped and no new jobs are processed - rather a biggy.
  2. Limit does not always respect the concurrency setting.

Any ideas?

Thanks,

j

freeze queue after cleaning working workers

Case one:
If I limit a queue to 3, and I clean the workers list while three workers are working, then subsequent jobs enqueued into that queue are not picked up.

Case two:
Shut down sidekiq while some workers are working; restart sidekiq and the queue freezes.

version:

  • rails-4.0.2
  • sidekiq-2.17.0
  • sidekiq-limit_fetch-2.1.2

Redis Timeouts

Upon app restart, we consistently see a lot of redis connection timeouts (that generally never resolve until after a few restarts).

2013-11-19T08:13:38.937709+00:00 app[web.9]: E, [2013-11-19T08:13:38.937051 #592] ERROR -- : Timed out connecting to Redis on production-[snip].cache.amazonaws.com:6379 (Redis::CannotConnectError)
2013-11-19T08:13:38.937709+00:00 app[web.9]: /app/vendor/bundle/ruby/2.0.0/gems/redis-3.0.5/lib/redis/client.rb:276:in `rescue in establish_connection'
2013-11-19T08:13:38.937709+00:00 app[web.9]: /app/vendor/bundle/ruby/2.0.0/gems/redis-3.0.5/lib/redis/client.rb:273:in `establish_connection'
2013-11-19T08:13:38.937709+00:00 app[web.9]: /app/vendor/bundle/ruby/2.0.0/gems/redis-3.0.5/lib/redis/client.rb:69:in `connect'
2013-11-19T08:13:38.937709+00:00 app[web.9]: /app/vendor/bundle/ruby/2.0.0/gems/redis-3.0.5/lib/redis/client.rb:292:in `ensure_connected'
2013-11-19T08:13:38.937709+00:00 app[web.9]: /app/vendor/bundle/ruby/2.0.0/gems/redis-3.0.5/lib/redis/client.rb:179:in `block in process'
2013-11-19T08:13:38.937709+00:00 app[web.9]: /app/vendor/bundle/ruby/2.0.0/gems/redis-3.0.5/lib/redis/client.rb:258:in `logging'
2013-11-19T08:13:38.937709+00:00 app[web.9]: /app/vendor/bundle/ruby/2.0.0/gems/redis-3.0.5/lib/redis/client.rb:178:in `process'
2013-11-19T08:13:38.937709+00:00 app[web.9]: /app/vendor/bundle/ruby/2.0.0/gems/redis-3.0.5/lib/redis/client.rb:84:in `call'
2013-11-19T08:13:38.937709+00:00 app[web.9]: /app/vendor/bundle/ruby/2.0.0/gems/redis-3.0.5/lib/redis.rb:677:in `block in set'
2013-11-19T08:13:38.938134+00:00 app[web.9]: /app/vendor/bundle/ruby/2.0.0/gems/redis-3.0.5/lib/redis.rb:36:in `block in synchronize'
2013-11-19T08:13:38.938134+00:00 app[web.9]: /app/vendor/ruby-2.0.0/lib/ruby/2.0.0/monitor.rb:211:in `mon_synchronize'
2013-11-19T08:13:38.938134+00:00 app[web.9]: /app/vendor/bundle/ruby/2.0.0/gems/redis-3.0.5/lib/redis.rb:36:in `synchronize'
2013-11-19T08:13:38.938134+00:00 app[web.9]: /app/vendor/bundle/ruby/2.0.0/gems/redis-3.0.5/lib/redis.rb:673:in `set'
2013-11-19T08:13:38.938134+00:00 app[web.9]: /app/vendor/bundle/ruby/2.0.0/gems/sidekiq-limit_fetch-2.1.2/lib/sidekiq/limit_fetch/global/semaphore.rb:21:in `block in limit='
2013-11-19T08:13:38.938134+00:00 app[web.9]: /app/vendor/bundle/ruby/2.0.0/gems/sidekiq-limit_fetch-2.1.2/lib/sidekiq/limit_fetch/redis.rb:17:in `block in redis'
2013-11-19T08:13:38.938134+00:00 app[web.9]: /app/vendor/bundle/ruby/2.0.0/gems/connection_pool-1.1.0/lib/connection_pool.rb:49:in `with'
2013-11-19T08:13:38.938134+00:00 app[web.9]: /app/vendor/bundle/ruby/2.0.0/gems/sidekiq-2.15.2/lib/sidekiq.rb:67:in `redis'
2013-11-19T08:13:38.938134+00:00 app[web.9]: /app/vendor/bundle/ruby/2.0.0/gems/sidekiq-limit_fetch-2.1.2/lib/sidekiq/limit_fetch/redis.rb:17:in `redis'
2013-11-19T08:13:38.938134+00:00 app[web.9]: /app/vendor/bundle/ruby/2.0.0/gems/sidekiq-limit_fetch-2.1.2/lib/sidekiq/limit_fetch/global/semaphore.rb:21:in `limit='
2013-11-19T08:13:38.938402+00:00 app[web.9]: ./config/unicorn.rb:40:in `block in reload'
2013-11-19T08:13:38.938402+00:00 app[web.9]: /app/vendor/bundle/ruby/2.0.0/gems/unicorn-4.6.3/lib/unicorn/http_server.rb:591:in `call'
2013-11-19T08:13:38.938402+00:00 app[web.9]: /app/vendor/bundle/ruby/2.0.0/gems/unicorn-4.6.3/lib/unicorn/http_server.rb:591:in `init_worker_process'
2013-11-19T08:13:38.938402+00:00 app[web.9]: /app/vendor/bundle/ruby/2.0.0/gems/unicorn-4.6.3/lib/unicorn/http_server.rb:615:in `worker_loop'
2013-11-19T08:13:38.938402+00:00 app[web.9]: /app/vendor/bundle/ruby/2.0.0/gems/newrelic_rpm-3.6.8.168/lib/new_relic/agent/instrumentation/unicorn_instrumentation.rb:22:in `call'
2013-11-19T08:13:38.938402+00:00 app[web.9]: /app/vendor/bundle/ruby/2.0.0/gems/newrelic_rpm-3.6.8.168/lib/new_relic/agent/instrumentation/unicorn_instrumentation.rb:22:in `block (4 levels) in <top (required)>'
2013-11-19T08:13:38.938402+00:00 app[web.9]: /app/vendor/bundle/ruby/2.0.0/gems/unicorn-4.6.3/lib/unicorn/http_server.rb:500:in `spawn_missing_workers'
2013-11-19T08:13:38.938402+00:00 app[web.9]: /app/vendor/bundle/ruby/2.0.0/gems/unicorn-4.6.3/lib/unicorn/http_server.rb:511:in `maintain_worker_count'
2013-11-19T08:13:38.938402+00:00 app[web.9]: /app/vendor/bundle/ruby/2.0.0/gems/unicorn-4.6.3/lib/unicorn/http_server.rb:277:in `join'
2013-11-19T08:13:38.938402+00:00 app[web.9]: /app/vendor/bundle/ruby/2.0.0/gems/unicorn-4.6.3/bin/unicorn_rails:209:in `<top (required)>'
2013-11-19T08:13:38.939029+00:00 app[web.9]: /app/vendor/bundle/ruby/2.0.0/bin/unicorn_rails:23:in `load'
2013-11-19T08:13:38.939029+00:00 app[web.9]: /app/vendor/bundle/ruby/2.0.0/bin/unicorn_rails:23:in `<main>'
2013-11-19T08:13:38.961606+00:00 app[web.9]: E, [2013-11-19T08:13:38.961312 #2] ERROR -- : reaped #<Process::Status: pid 592 exit 1> worker=3
2013-11-19T08:13:39.382160+00:00 app[web.9]: ** [NewRelic][11/19/13 08:13:39 +0000 04538f0a-ceca-4dc5-a7c6-05aaba01efe5 (634)] INFO : Reporting to: https://rpm.newrelic.com/accounts/[snip]

Problem with forking servers and initializers

Sidekiq 2.9.0 upwards connects to Redis lazily, to avoid problems with forking server such as unicorn. Previously, these connected to Redis before the server is forked, and then needed reconnecting in the forked process, by running Sidekiq.configure_client in the after_fork block.

If you configure a queue in an initializer (like so: Sidekiq::Queue['image_fetch'].limit = 4), it looks like sidekiq-limit_fetch interacts with Sidekiq, triggering Redis to connect before the fork. This then causes problems in the forked worker, throwing Redis::InheritedError (Tried to use a connection from a child process without reconnecting. You need to reconnect to Redis after forking.).

If you move the limit setting into the after_fork block in the unicorn configuration, it behaves as expected.

I've not tried configuring the limits in sidekiq.yml to see if this has the same effect.

I'm not sure of the best way to resolve this, but possibly the configuration should be lazily evaluated similar to Sidekiq.
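For reference, the workaround described above would look roughly like this in a unicorn config (a sketch; the queue name and limit are taken from this report, the rest is an assumption):

# config/unicorn.rb - sketch of the workaround described above
after_fork do |server, worker|
  # Setting the limit here (rather than in a Rails initializer) means
  # sidekiq-limit_fetch only touches Redis after the fork, avoiding
  # Redis::InheritedError in the forked worker.
  Sidekiq::Queue['image_fetch'].limit = 4
end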

Locks may be left in redis over restart?

Not sure exactly how this happens, but I noticed that a few of my queues weren't being worked on, even though they weren't blocked and no workers were working on them; this persisted through multiple restarts of the sidekiq process.
Maybe previous crashes or shutdowns of the sidekiq process didn't clean up the limit_fetch locks properly?

Going into redis and manually deleting the limit_fetch:busy:QUEUE_NAME keys got things working again
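For anyone hitting the same thing, the manual cleanup described above can be scripted (a sketch based on this report; it assumes the limit_fetch:busy:* key naming mentioned here, and should only be run while no workers are actually busy):

# Sketch of the manual cleanup described above (assumes the
# limit_fetch:busy:* key naming from this report).
Sidekiq.redis do |conn|
  conn.keys('limit_fetch:busy:*').each { |key| conn.del(key) }
end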

Please bump the gem version

We saw the following dependency issue in our project:

Bundler could not find compatible versions for gem "sidekiq":
  In Gemfile:
    sidekiq-limit_fetch (~> 2.2.3) ruby depends on
      sidekiq (< 3.1, >= 2.6.5) ruby

    sidekiq-pro (~> 1.6.0) ruby depends on
      sidekiq (3.1.3)

The latest commit on master shows that the gem depends on sidekiq < 3.2
https://github.com/brainopia/sidekiq-limit_fetch/blob/master/sidekiq-limit_fetch.gemspec#L18

We are dealing with this by pointing to the SHA, but it'd be nice to be able to point to a gem version number instead. Bumping the gem version number can help here.

Blocking for groups of queues?

This is really a feature request: what do you think about blocking on groups of queues?

eg.

:queues:
  - a
  - b
  - c
  - d
:blocking:
  - [b, c]

In this case, queues b and c could both execute at the same time (and a too), but d would not execute until both b and c were empty

Compatibility with sidekiq-dynamic-queues

Hello,

I tried to use sidekiq-limit_fetch with sidekiq-dynamic-queues and it didn't work: the limit is not applied.

Do you know if it would be possible to use them together?

My use case is a webgame where some tasks must be completely isolated from each other. For instance, each fight needs to have its own queue (like fights.42) so there is no concurrency problem between orders issued by the players.

For now, I use only sidekiq-limit_fetch: my Sidekiq server explicitly processes the queues fights.1 to fights.3, and each fight-related job is randomly sent to one of these queues. It does the job (ah ah) but it's not optimal.

However thanks for this great plugin!

error starting sidekiq limit fetch on with rails 4 on heroku

It's throwing a redis error. I don't know enough to have much of an idea of what to do.

2013-05-27T21:56:29.875605+00:00 app[cloud_worker.1]: 2013-05-27T21:56:29Z 2 TID-ow3aonyws ERROR: Error fetching message: ERR unknown command 'evalsha'
2013-05-27T21:56:29.875884+00:00 app[cloud_worker.1]: 2013-05-27T21:56:29Z 2 TID-ow3aonyws ERROR: /app/vendor/bundle/ruby/2.0.0/gems/redis-3.0.4/lib/redis/client.rb:85:in `call'

Doesn't start workers on heroku

It's tricky to debug, but after adding the plugin it completely stops workers on heroku. No exception is thrown.
After removing the plugin everything works fine.

How to clear busy?

I have a couple of queues that are stuck with processes that aren't there:

irb(main):041:0> Sidekiq::Queue['importer'].busy
=> 7

Is there any way to clear/reset the queue?

sidekiq dependancy

It looks like the sidekiq-limit_fetch gem is requiring a newer version of sidekiq than exists…

# Latest sidekiq version
2.17.5

#sidekiq-limit_fetch
gem.add_dependency 'sidekiq', '>= 2.6.5', '< 3'

Configuration not working via sidekiq options just after redis db was flushed

my config:

    workers:
      verbose: true
      concurrency: 40
      :queues:
        - heartbeat
        - default
        - seldom
      :process_limits:
        heartbeat: 1
        default: 38
        seldom: 1

how to reproduce (assuming sidekiq is using db nr 10):

redis-cli
select 10
flushdb

put somewhere in the initializer (after sidekiq setup)

  Sidekiq.options[:process_limits] = Settings.workers.try(:[], :":process_limits") if Settings.workers.include?(:":process_limits")
  puts Sidekiq.options
  puts "limit fetch: default -> #{Sidekiq::Queue['default'].process_limit}"

or omit Settings setup and just put

  Sidekiq.options[:process_limits] = {:heartbeat=>1, :default=>38, :seldom=>1}
  puts Sidekiq.options
  puts "limit fetch: default -> #{Sidekiq::Queue['default'].process_limit}"

outcome

{:queues=>["heartbeat", "default", "seldom"], :labels=>[], :concurrency=>40, 
[OMITTED] :fetch=>Sidekiq::LimitFetch, :verbose=>true, :process_limits=>{:heartbeat=>1, :default=>38, :seldom=>1}}
limit fetch: default ->

Setting it dynamically is fine, as is restarting the process, so maybe there should be some warning (at least in README.md). Or can I get info on how to do this properly in this case?

Sidekiq 3.0?

This gem does not currently support Sidekiq 3.0 - any hope this will change soon?

Thanks!

Internal timeout?

I'm using the sidekiq-limit_fetch gem to make sure that certain types of workers are executed one by one; in other words, I don't want two workers from the same queue to execute at the same time (mainly for resource consumption).

The gem is supposedly perfect for achieving our goal, but it seems to be subject to some kind of timeout (my guess is about 1 hour) such that eventually queued jobs expire. This happens since the latest version (2.2.6); with previous versions, queued jobs simply kicked in, even if the first didn't finish.

Can you enlighten me on the way it works?

If there is such timeout, is there a way to configure it? Otherwise, does it make sense to add this feature?

Thanks in advance!

Limits not limiting

It seems queues just aren't being limited. Here's my sidekiq.yml file:

:concurrency: 81
:queues:
  - [fast, 10]
  - [default, 3]
  - [slow, 1]
  - [a, 5]
  - [b, 5]
  - [c, 5]
  - [d, 5]
:limits:
  slow: 45
  fast: 10
  default: 20
  a: 2
  b: 2
  c: 2
  d: 2

And this is how I'm running sidekiq on Heroku (from my Procfile):
worker: env DB_POOL=100 bundle exec sidekiq -e $RACK_ENV -C config/sidekiq.yml

Looking in Sidekiq's list of Busy workers, queue b has 8 workers...even though it's limited to 2.

I have 3 2X workers in Heroku.

Any ideas?

Sidekiq 2.17.6
sidekiq-limit_fetch 2.1.3

Monitor conflicting with Sidekiq 3.0

Hi,
sidekiq-limit_fetch's monitor is apparently conflicting with Sidekiq 3.0's monitor. They're both using the processes key.

Is there any specific reason for the monitor to continue existing in sidekiq-limit_fetch?

https://github.com/brainopia/sidekiq-limit_fetch/blob/567d4018306c866c266748af1d43c5b726048412/lib/sidekiq/limit_fetch/global/monitor.rb#L7

https://github.com/mperham/sidekiq/blob/580f6fcbec2361709f11394ad56e3ddaaa57d462/lib/sidekiq/launcher.rb#L75

Request: custom queue limit identifier

Hi there,

I'm making heavy use of sidekiq-limit_fetch in a cluster of Sidekiq worker servers, and a requirement has cropped up:

I have some CPU-intensive jobs, which I've relegated to their own queue, let's call it :encoding, and I have placed a limit of 10 on that queue. Unless I'm mistaken, I believe this means that globally only 10 of these jobs can be running at any given time.

My problem is that I have multiple Sidekiq servers, and so I'd like that to be a per-server limit, not global. This will allow me to scale the :encoding queue by adding more servers without ever overloading a single server.

I was picturing some sort of unique key I could use to identify each server (possibly the hostname) that would be used when checking how many workers are busy on a queue.

I'd be happy to fork and implement this myself, but wanted to check first to ensure this isn't already possible. If not, any advice on where in the code I should look to get started would be most welcome.
