
Resque

Introduction

Resque (pronounced like "rescue") is a Redis-backed library for creating background jobs, placing those jobs on multiple queues, and processing them later.

For the backstory, philosophy, and history of Resque's beginnings, please see the blog post (2009).

Background jobs can be any Ruby class or module that responds to perform. Your existing classes can easily be converted to background jobs or you can create new classes specifically to do work. Or, you can do both.

Resque is heavily inspired by DelayedJob (which rocks) and comprises three parts:

  1. A Ruby library for creating, querying, and processing jobs
  2. A Rake task for starting a worker which processes jobs
  3. A Sinatra app for monitoring queues, jobs, and workers.

Resque workers can be given multiple queues (a "queue list"), distributed between multiple machines, run anywhere with network access to the Redis server, support priorities, are resilient to memory bloat / "leaks," tell you what they're doing, and expect failure.

Resque queues are persistent; support constant time, atomic push and pop (thanks to Redis); provide visibility into their contents; and store jobs as simple JSON packages.

The Resque frontend tells you what workers are doing, what workers are not doing, what queues you're using, what's in those queues, provides general usage stats, and helps you track failures.

Resque now supports Ruby 2.3.0 and above. We will also only be supporting Redis 3.0 and above going forward.

Note on the future of Resque

Would you like to be involved in Resque? Do you have thoughts about what Resque should be and do going forward? There's currently an open discussion here on just that topic, so please feel free to join in. We'd love to hear your thoughts and/or have people volunteer to be a part of the project!

Example

Resque jobs are Ruby classes (or modules) which respond to the perform method. Here's an example:

class Archive
  @queue = :file_serve

  def self.perform(repo_id, branch = 'master')
    repo = Repository.find(repo_id)
    repo.create_archive(branch)
  end
end

The @queue class instance variable determines which queue Archive jobs will be placed in. Queues are arbitrary and created on the fly - you can name them whatever you want and have as many as you want.
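
The @queue value is just the class's default: the Resque.enqueue call shown below uses it, while Resque.enqueue_to lets you name a queue explicitly for a single job. A minimal sketch (the :file_serve_backup queue name is only an illustration):

Resque.enqueue_to(:file_serve_backup, Archive, 44, 'masterbrew')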

To place an Archive job on the file_serve queue, we might add this to our application's pre-existing Repository class:

class Repository
  def async_create_archive(branch)
    Resque.enqueue(Archive, self.id, branch)
  end
end

Now when we call repo.async_create_archive('masterbrew') in our application, a job will be created and placed on the file_serve queue.
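
If you want to confirm the job made it onto the queue, Resque exposes a few simple inspection helpers. A quick sketch:

Resque.queues             # => list of known queue names, e.g. ["file_serve"]
Resque.size(:file_serve)  # => number of jobs currently waiting on the queue
Resque.peek(:file_serve)  # => {"class"=>"Archive", "args"=>[44, "masterbrew"]}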

Later, a worker will run something like this code to process the job:

klass, args = Resque.reserve(:file_serve)
klass.perform(*args) if klass.respond_to? :perform

Which translates to:

Archive.perform(44, 'masterbrew')

Let's start a worker to run file_serve jobs:

$ cd app_root
$ QUEUE=file_serve rake resque:work

This starts one Resque worker and tells it to work off the file_serve queue. As soon as it's ready it'll try to run the Resque.reserve code snippet above and process jobs until it can't find any more, at which point it will sleep for a small period and repeatedly poll the queue for more jobs.

Installation

Add the gem to your Gemfile:

gem 'resque'

Next, install it with Bundler:

$ bundle

Rake

In your Rakefile, or some other file in lib/tasks (ex: lib/tasks/resque.rake), load the resque rake tasks:

require 'resque'
require 'resque/tasks'
require 'your/app' # Include this line if you want your workers to have access to your application

Rails

To make resque-specific changes, you can override the resque:setup rake task in lib/tasks (ex: lib/tasks/resque.rake). GitHub's setup task looks like this:

task "resque:setup" => :environment do
  Grit::Git.git_timeout = 10.minutes
end

We don't want the git_timeout as high as 10 minutes in our web app, but in the Resque workers it's fine.

Running Workers

Resque workers are rake tasks that run forever. They basically do this:

start
loop do
  if job = reserve
    job.process
  else
    sleep 5 # Polling frequency = 5
  end
end
shutdown

Starting a worker is simple:

$ QUEUE=* rake resque:work

Or, you can start multiple workers:

$ COUNT=2 QUEUE=* rake resque:workers

This will spawn two Resque workers, each in its own process. Hitting ctrl-c should be sufficient to stop them all.

Priorities and Queue Lists

Resque doesn't support numeric priorities but instead uses the order of queues you give it. We call this list of queues the "queue list."

Let's say we add a warm_cache queue in addition to our file_serve queue. We'd now start a worker like so:

$ QUEUES=file_serve,warm_cache rake resque:work

When the worker looks for new jobs, it will first check file_serve. If it finds a job, it'll process it then check file_serve again. It will keep checking file_serve until no more jobs are available. At that point, it will check warm_cache. If it finds a job it'll process it then check file_serve (repeating the whole process).
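
In rough pseudocode, the reservation step described above looks like the sketch below (an illustration of the behavior, not the actual Worker#reserve implementation):

# Walk the queue list in priority order and pop the first job found.
def reserve(queues)
  queues.each do |queue|
    job = Resque::Job.reserve(queue)  # pop one job off this queue, if any
    return job if job                 # after processing, the walk starts over from the first queue
  end
  nil                                 # nothing found anywhere; sleep, then poll again
end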

In this way you can prioritize certain queues. At GitHub we start our workers with something like this:

$ QUEUES=critical,archive,high,low rake resque:work

Notice the archive queue - it is specialized and in our future architecture will only be run from a single machine.

At that point we'll start workers on our generalized background machines with this command:

$ QUEUES=critical,high,low rake resque:work

And workers on our specialized archive machine with this command:

$ QUEUE=archive rake resque:work

Running All Queues

If you want your workers to work off of every queue, including new queues created on the fly, you can use a splat:

$ QUEUE=* rake resque:work

Queues will be processed in alphabetical order.

Or, prioritize some queues above *:

$ QUEUE=critical,* rake resque:work

Running All Queues Except for Some

If you want your workers to work off of all queues except for some, you can use negation:

$ QUEUE=*,!low rake resque:work

Negated globs also work. The following will instruct workers to work off of all queues except those beginning with file_:

$ QUEUE=*,!file_* rake resque:work

Note that the order in which negated queues are specified does not matter, so QUEUE=*,!file_* and QUEUE=!file_*,* will have the same effect.

Process IDs (PIDs)

There are scenarios where it's helpful to record the PID of a resque worker process. Use the PIDFILE option for easy access to the PID:

$ PIDFILE=./resque.pid QUEUE=file_serve rake resque:work

Running in the background

There are scenarios where it's helpful for the resque worker to run itself in the background (usually in combination with PIDFILE). Use the BACKGROUND option so that rake will return as soon as the worker is started.

$ PIDFILE=./resque.pid BACKGROUND=yes QUEUE=file_serve rake resque:work

Polling frequency

You can pass an INTERVAL option which is a float representing the polling frequency. The default is 5 seconds, but for a semi-active app you may want to use a smaller value.

$ INTERVAL=0.1 QUEUE=file_serve rake resque:work

When INTERVAL is set to 0, the worker will run until the queue is empty and then shut down, instead of waiting for new jobs.

The Front End

Resque comes with a Sinatra-based front end for seeing what's up with your queue.

Standalone

If you've installed Resque as a gem, running the front end standalone is easy:

$ resque-web

It's a thin layer around rackup so it's configurable as well:

$ resque-web -p 8282

If you have a Resque config file you want evaluated, just pass it to the script as the final argument:

$ resque-web -p 8282 rails_root/config/initializers/resque.rb

You can also set the namespace directly using resque-web:

$ resque-web -p 8282 -N myapp

or set the Redis connection string if you need to do something like select a different database:

$ resque-web -p 8282 -r localhost:6379:2

Passenger

Using Passenger? Resque ships with a config.ru you can use. See Phusion's guide:

Apache: https://www.phusionpassenger.com/library/deploy/apache/deploy/ruby/
Nginx: https://www.phusionpassenger.com/library/deploy/nginx/deploy/ruby/

Rack::URLMap

If you want to load Resque on a subpath, possibly alongside other apps, it's easy to do with Rack's URLMap:

require 'resque/server'

run Rack::URLMap.new \
  "/"       => Your::App.new,
  "/resque" => Resque::Server.new

Check examples/demo/config.ru for a functional example (including HTTP basic auth).

Rails

You can also mount Resque on a subpath in your existing Rails app by adding require 'resque/server' to the top of your routes file (or to an initializer), then adding this to routes.rb:

mount Resque::Server.new, :at => "/resque"
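
Put together, a minimal routes file might look like this sketch (whether the require lives here or in an initializer is up to you):

# config/routes.rb - minimal sketch
require 'resque/server'

Rails.application.routes.draw do
  mount Resque::Server.new, :at => "/resque"
end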

Jobs

What should you run in the background? Anything that takes any time at all: slow INSERT statements, disk manipulation, data processing, etc.

At GitHub we use Resque to process the following types of jobs:

  • Warming caches
  • Counting disk usage
  • Building tarballs
  • Building Rubygems
  • Firing off web hooks
  • Creating events in the db and pre-caching them
  • Building graphs
  • Deleting users
  • Updating our search index

As of this writing, we have about 35 different types of background jobs.

Keep in mind that you don't need a web app to use Resque - we just mention "foreground" and "background" because they make conceptual sense. You could easily be spidering sites and sticking data which needs to be crunched later into a queue.

Persistence

Jobs are persisted to queues as JSON objects. Let's take our Archive example from above. We'll run the following code to create a job:

repo = Repository.find(44)
repo.async_create_archive('masterbrew')

The following JSON will be stored in the file_serve queue:

{
    "class": "Archive",
    "args": [ 44, "masterbrew" ]
}

Because of this, your jobs must only accept arguments that can be JSON encoded.

So instead of doing this:

Resque.enqueue(Archive, self, branch)

do this:

Resque.enqueue(Archive, self.id, branch)

This is why our above example (and all the examples in examples/) uses object IDs instead of passing around the objects.

While this is less convenient than just sticking a marshaled object in the database, it gives you a slight advantage: your jobs will be run against the most recent version of an object because they need to pull from the DB or cache.

If your jobs were run against marshaled objects, they could potentially be operating on a stale record with out-of-date information.

send_later / async

Want something like DelayedJob's send_later or the ability to use instance methods instead of just class methods for jobs? See the examples/ directory for goodies.

We plan to provide first class async support in a future release.

Failure

If a job raises an exception, it is logged and handed off to the Resque::Failure module. Failures are logged either locally in Redis or using some different backend. To see exceptions while developing, see details below under Logging.

For example, Resque ships with Airbrake support. To configure it, put the following into an initialization file or into your rake task:

# send errors which occur in background jobs to redis and airbrake
require 'resque/failure/multiple'
require 'resque/failure/redis'
require 'resque/failure/airbrake'

Resque::Failure::Multiple.classes = [Resque::Failure::Redis, Resque::Failure::Airbrake]
Resque::Failure.backend = Resque::Failure::Multiple

Keep this in mind when writing your jobs: you may want to raise exceptions you would not normally raise in order to assist debugging.
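
If the bundled backends don't fit, you can also write your own. The sketch below assumes the Resque::Failure::Base interface, in which a backend implements save and has access to the failed job's exception, payload, queue, and worker:

require 'resque/failure/base'

# A minimal custom failure backend that only logs failures (sketch).
class LogFailure < Resque::Failure::Base
  def save
    Resque.logger.error "Job #{payload.inspect} failed on #{queue}: #{exception}"
  end
end

Resque::Failure.backend = LogFailure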

Rails example

If you are using ActiveJob, here's how your job definition will look:

class ArchiveJob < ApplicationJob
  queue_as :file_serve

  def perform(repo_id, branch = 'master')
    repo = Repository.find(repo_id)
    repo.create_archive(branch)
  end
end

class Repository
  def async_create_archive(branch)
    ArchiveJob.perform_later(self.id, branch)
  end
end

It is important to run ArchiveJob.perform_later(self.id, branch) rather than Resque.enqueue(Archive, self.id, branch). Otherwise Resque will process the job without actually doing anything. Even if you put an obviously buggy line like 0/0 in the perform method, the job will still succeed.

Configuration

Redis

You may want to change the Redis host and port Resque connects to, or set various other options at startup.

Resque has a redis setter which can be given a string or a Redis object. This means if you're already using Redis in your app, Resque can re-use the existing connection.

String: Resque.redis = 'localhost:6379'

Redis: Resque.redis = $redis
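
For example, to reuse an explicitly configured client (a sketch assuming the redis gem; host, port, and database number are placeholders):

require 'redis'

$redis = Redis.new(:host => 'localhost', :port => 6379, :db => 2)
Resque.redis = $redis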

For our rails app we have a config/initializers/resque.rb file where we load config/resque.yml by hand and set the Redis information appropriately.

Here's our config/resque.yml:

development: localhost:6379
test: localhost:6379
staging: redis1.se.github.com:6379
fi: localhost:6379
production: <%= ENV['REDIS_URL'] %>

And our initializer:

rails_root = ENV['RAILS_ROOT'] || File.dirname(__FILE__) + '/../..'
rails_env = ENV['RAILS_ENV'] || 'development'
config_file = rails_root + '/config/resque.yml'

resque_config = YAML::load(ERB.new(IO.read(config_file)).result)
Resque.redis = resque_config[rails_env]

Easy peasy! Why not just use RAILS_ROOT and RAILS_ENV? Because this way we can tell our Sinatra app about the config file:

$ RAILS_ENV=production resque-web rails_root/config/initializers/resque.rb

Now everyone is on the same page.

Also, you can disable job queueing by setting the inline attribute. For example, if you want to run all jobs in the same process for Cucumber, try:

Resque.inline = ENV['RAILS_ENV'] == "cucumber"

Logging

Workers support basic logging to STDOUT.

You can control the logging threshold using Resque.logger.level:

# config/initializers/resque.rb
Resque.logger.level = Logger::DEBUG

If you want Resque to log to a file, in Rails do:

# config/initializers/resque.rb
Resque.logger = Logger.new(Rails.root.join('log', "#{Rails.env}_resque.log"))

Namespaces

If you're running multiple, separate instances of Resque you may want to namespace the keyspaces so they do not overlap. This is not unlike the approach taken by many memcached clients.

This feature is provided by the redis-namespace library, which Resque uses by default to separate the keys it manages from other keys in your Redis server.

Simply use the Resque.redis.namespace accessor:

Resque.redis.namespace = "resque:GitHub"

We recommend sticking this in your initializer somewhere after Redis is configured.
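
For example, a sketch of such an initializer (the app name is only a placeholder):

# config/initializers/resque.rb - sketch
Resque.redis = ENV['REDIS_URL'] || 'localhost:6379'
Resque.redis.namespace = "resque:MyApp"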

Storing Statistics

Resque allows you to store counts of processed and failed jobs.

By default these are stored in Redis using the keys stats:processed and stats:failed.

Some apps may want a different stats store, or even a null store:

# config/initializers/resque.rb
class NullDataStore
  def stat(stat)
    0
  end

  def increment_stat(stat, by)
  end

  def decrement_stat(stat, by)
  end

  def clear_stat(stat)
  end
end

Resque.stat_data_store = NullDataStore.new

Plugins and Hooks

For a list of available plugins see https://github.com/resque/resque/wiki/plugins.

If you'd like to write your own plugin, or want to customize Resque using hooks (such as Resque.after_fork), see docs/HOOKS.md.
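
As a taste, here's a sketch of an after_fork hook that re-establishes database connections in the forked child (the ActiveRecord call is an assumption about your app; see docs/HOOKS.md for the full list of hooks):

# config/initializers/resque.rb - sketch of a worker hook
Resque.after_fork do |job|
  # Runs in the child process after forking, before the job is performed.
  ActiveRecord::Base.establish_connection
end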

Additional Information

Resque vs DelayedJob

How does Resque compare to DelayedJob, and why would you choose one over the other?

  • Resque supports multiple queues
  • DelayedJob supports finer grained priorities
  • Resque workers are resilient to memory leaks / bloat
  • DelayedJob workers are extremely simple and easy to modify
  • Resque requires Redis
  • DelayedJob requires ActiveRecord
  • Resque can only place JSONable Ruby objects on a queue as arguments
  • DelayedJob can place any Ruby object on its queue as arguments
  • Resque includes a Sinatra app for monitoring what's going on
  • DelayedJob can be queried from within your Rails app if you want to add an interface

If you're doing Rails development, you already have a database and ActiveRecord. DelayedJob is super easy to set up and works great. GitHub used it for many months to process almost 200 million jobs.

Choose Resque if:

  • You need multiple queues
  • You don't care about / dislike numeric priorities
  • You don't need to persist every Ruby object ever
  • You have potentially huge queues
  • You want to see what's going on
  • You expect a lot of failure / chaos
  • You can set up Redis
  • You're not running short on RAM

Choose DelayedJob if:

  • You like numeric priorities
  • You're not doing a gigantic amount of jobs each day
  • Your queue stays small and nimble
  • There is not a lot of failure / chaos
  • You want to easily throw anything on the queue
  • You don't want to set up Redis

In no way is Resque a "better" DelayedJob, so make sure you pick the tool that's best for your app.

Forking

On certain platforms, when a Resque worker reserves a job it immediately forks a child process. The child processes the job then exits. When the child has exited successfully, the worker reserves another job and repeats the process.

Why?

Because Resque assumes chaos.

Resque assumes your background workers will lock up, run too long, or have unwanted memory growth.

If Resque workers processed jobs themselves, it'd be hard to whip them into shape. Let's say one is using too much memory: you send it a signal that says "shutdown after you finish processing the current job," and it does so. It then starts up again - loading your entire application environment. This adds useless CPU cycles and causes a delay in queue processing.

Plus, what if it's using too much memory and has stopped responding to signals?

Thanks to Resque's parent / child architecture, jobs that use too much memory release that memory upon completion. No unwanted growth.

And what if a job is running too long? You'd need to kill -9 it then start the worker again. With Resque's parent / child architecture you can tell the parent to forcefully kill the child then immediately start processing more jobs. No startup delay or wasted cycles.

The parent / child architecture helps us keep tabs on what workers are doing, too. By eliminating the need to kill -9 workers we can have parents remove themselves from the global listing of workers. If we just ruthlessly killed workers, we'd need a separate watchdog process to add and remove them from the global listing - which becomes complicated.

Workers instead handle their own state.

at_exit Callbacks

Resque uses Kernel#exit! for exiting its workers' child processes. So any at_exit callback defined in your application won't be executed when the job is finished and the child process exits.

You can alter this behavior by setting the RUN_AT_EXIT_HOOKS environment variable.
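
For instance, an at_exit handler like the sketch below would normally be skipped in the child (because of exit!), but runs when RUN_AT_EXIT_HOOKS is set; the $metrics object is purely hypothetical:

at_exit do
  # Flush anything buffered in the process before it goes away.
  $metrics.flush if defined?($metrics)
end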

Parents and Children

Here's a parent / child pair doing some work:

$ ps -e -o pid,command | grep [r]esque
92099 resque: Forked 92102 at 1253142769
92102 resque: Processing file_serve since 1253142769

You can clearly see that process 92099 forked 92102, which has been working since 1253142769.

(By advertising the time they began processing you can easily use monit or god to kill stale workers.)

When a parent process is idle, it lets you know what queues it is waiting for work on:

$ ps -e -o pid,command | grep [r]esque
92099 resque: Waiting for file_serve,warm_cache

Signals

Resque workers respond to a few different signals:

  • QUIT - Wait for child to finish processing then exit
  • TERM / INT - Immediately kill child then exit
  • USR1 - Immediately kill child but don't exit
  • USR2 - Don't start to process any new jobs
  • CONT - Start to process new jobs again after a USR2

If you want to gracefully shut down a Resque worker, use QUIT.

If you want to kill a stale or stuck child, use USR1. Processing will continue as normal unless the child was not found. In that case Resque assumes the parent process is in a bad state and shuts down.

If you want to kill a stale or stuck child and shut down the worker, use TERM.

If you want to stop processing jobs, but want to leave the worker running (for example, to temporarily alleviate load), use USR2 to stop processing, then CONT to start it again. It's also possible to pause all workers.
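
Signals can also be sent from Ruby. A sketch that pauses and later resumes a worker whose PID was written out via PIDFILE (the file path is an assumption):

worker_pid = File.read('resque.pid').to_i

Process.kill('USR2', worker_pid)  # stop picking up new jobs
# ... later ...
Process.kill('CONT', worker_pid)  # resume processing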

Heroku

When shutting down processes, Heroku sends every process a TERM signal at the same time. By default this causes an immediate shutdown of any running job, leading to frequent Resque::TermException errors. For short-running jobs, a simple solution is to give the job a small amount of time to finish before killing it.

Resque doesn't handle this out of the box (on both cedar-14 and heroku-16); you need to install the resque-heroku-signals addon, which adds the required signal handling to make the behavior described above work. Related issue: #1559

To accomplish this set the following environment variables:

  • RESQUE_PRE_SHUTDOWN_TIMEOUT - The time between the parent receiving a shutdown signal (TERM by default) and it sending that signal on to the child process. Designed to give the child process time to complete before being forced to die.

  • TERM_CHILD - Must be set for RESQUE_PRE_SHUTDOWN_TIMEOUT to be used. After the timeout, if the child is still running it will raise a Resque::TermException and exit.

  • RESQUE_TERM_TIMEOUT - By default you have a few seconds to handle Resque::TermException in your job. RESQUE_TERM_TIMEOUT and RESQUE_PRE_SHUTDOWN_TIMEOUT must be lower than the Heroku dyno timeout.

Pausing all workers

Workers will not process pending jobs if the Redis key pause-all-workers is set with the string value "true".

Resque.redis.set('pause-all-workers', 'true')

Nothing happens to jobs that are already being processed by workers.

Unpause by removing the Redis key pause-all-workers.

Resque.redis.del('pause-all-workers')

Monitoring

god

If you're using god to monitor Resque, we have provided example configs in examples/god/. One is for starting / stopping workers, the other is for killing workers that have been running too long.

monit

If you're using monit, examples/monit/resque.monit is provided free of charge. This is not used by GitHub in production, so please send patches for any tweaks or improvements you can make to it.

Mysql::Error: MySQL server has gone away

If your workers remain idle for too long they may lose their MySQL connection. Depending on your version of Rails, we recommend the following:

Rails 3.x

In your perform method, add the following line:

class MyTask
  def self.perform
    ActiveRecord::Base.verify_active_connections!
    # rest of your code
  end
end

The Rails doc says the following about verify_active_connections!:

Verify active connections and remove and disconnect connections associated with stale threads.

Rails 4.x

In your perform method, instead of verify_active_connections!, use:

class MyTask
  def self.perform
    ActiveRecord::Base.clear_active_connections!
    # rest of your code
  end
end

From the Rails docs on clear_active_connections!:

Returns any connections in use by the current thread back to the pool, and also returns connections to the pool cached by threads that are no longer alive.

Development

Want to hack on Resque?

First clone the repo and run the tests:

git clone git://github.com/resque/resque.git
cd resque
rake test

If the tests do not pass, make sure you have Redis installed correctly (though we make an effort to tell you if we feel this is the case). The tests attempt to start an isolated instance of Redis to run against.

Also make sure you've installed all the dependencies correctly. For example, try loading the redis-namespace gem after you've installed it:

$ irb
>> require 'rubygems'
=> true
>> require 'redis/namespace'
=> true

If you get an error requiring any of the dependencies, you may have failed to install them or be seeing load path issues.

Demo

Resque ships with a demo Sinatra app for creating jobs that are later processed in the background.

Try it out by looking at the README, found at examples/demo/README.markdown.

Contributing

Read CONTRIBUTING.md first.

Once you've made your great commits:

  1. Fork Resque
  2. Create a topic branch - git checkout -b my_branch
  3. Push to your branch - git push origin my_branch
  4. Create a Pull Request from your branch

Questions

Please add them to the FAQ or open an issue on this repo.

Meta

This project uses Semantic Versioning

Author

Chris Wanstrath :: [email protected] :: @defunkt

Issues

undefined method `redis=' for Resque:Module in worker

I'm setting Redis settings in an initializer, as shown in the documentation. The web UI and the Rails application can connect to it just fine.

However, the worker throws the following error:

undefined method `redis=' for Resque:Module

worker failure causes job to be lost...

The job is popped directly from the queue, so if the worker fails, the job is totally lost. If the job fails, then it is put on the error queue, which is awesome, but the worker has no failsafe.

I'm not sure what the best approach is, but a thought is:

  • pop/push jobs onto a processing queue, and have each item tied to a worker and the original queue
  • every now and then, a worker checks the processing queue to make sure that each processing job's worker still exists (and hasn't been killed off by god for being too stale)
  • if so, great, if not, put it back onto its original queue

It looks like the worker is already doing some admin work anyway; is this too much for all workers to do?

Why redis?

I have a question: why did you choose Redis? Wouldn't MongoDB + MongoMapper be better?

Added job scheduling, looking for feedback.

I forked and hacked together some light weight job scheduling on to resque.

http://github.com/bvandenbos/resque/tree/resque-scheduler

My motivation was that we've got a ton of backend jobs on a bunch of different machines to do all kinds of things (flush caches, populate leaderboards, index documents, etc). Currently, they are all crons which invoke ruby script/runner. Now that we're switching to resque (which rocks my face off btw), I thought it'd be awesome to also be able to put jobs in the queue based on a schedule. The win for us is a centralized list of jobs and when they run, plus the ability to stuff one in the queue from resque-web.

The process that does the actual queuing of scheduled jobs is just another rake task (resque:scheduler) similar to resque:work. It schedules work using rufus-scheduler.

Here's a shot of the addition to resque-web
http://skitch.com/bennyhana/nm35j/resque-scheduler

Anyways, I've got it running in a branch and I thought you might give some feedback.

thanks!

ben

create_once behavior

My send_once branch adds a feature that allows you to add a job to a queue once and only once. Repeated calls to create_once will result in just one enqueued job - this is handy when a long-running job might be triggered by several system events, but you only want it to run once.

http://github.com/bpo/resque/tree/send_once

async helper

I want the async helper to be first class. Something like this:

class Repository < ActiveRecord::Base
  include Resque::AsyncHelper
end

This adds an async instance method and a perform class method.

The async method looks like this:

def async(method, *args)
  Resque.enqueue(self.class, id, method, *args)
end

The perform method looks like this:

def self.perform(id, method, *args)
  find(id).send(method, *args)
end

Of course, you can define your own self.perform and have it still
work beautifully. We might override ours in GitHub to look like this:

def self.perform(id, method, *args)
  return unless repo = cached_by_id(id)
  repo.send(method, *args)
end

Then: @repo.async(:update_disk_usage) issues a job equivalent to:

Resque.enqueue(Repository, 44, :update_disk_usage)

Booyah.

Add ability to postpone a job

The run_at feature of DJ can be a time saver, and is really needed for some processes / projects.
If I have a training management webapp, I'd like to process all results right after a course is done. I can create the course some days before, and don't have to take care of firing the process_results method manually right after the class.

Thanks !

Doc fix for Worker#work

The following commit fixes a typo in the docs for Worker#work ("integered" should be "integer") and also cleans up the documentation on that method.

mrduncan/resque@4b4d75bb2526aa6fd0c5040d1b9e2a92a5caa515

Error when using json gem instead of yajl

Here's the error message:

Error message: undefined method `[]' for #

And part of the backtrace:

"/usr/local/lib/ruby19/gems/1.9.1/gems/activesupport-2.3.5/lib/active_support/json/encoders/hash.rb:45:in `as_json'"
"/usr/local/lib/ruby19/gems/1.9.1/gems/activesupport-2.3.5/lib/active_support/json/encoders/hash.rb:34:in `to_json'"
"/usr/local/lib/ruby19/gems/1.9.1/gems/json-1.2.0/lib/json/common.rb:183:in `generate'"
"/usr/local/lib/ruby19/gems/1.9.1/gems/resque-1.0.0/lib/resque/helpers.rb:15:in `encode'"
"/usr/local/lib/ruby19/gems/1.9.1/gems/resque-1.0.0/lib/resque.rb:59:in `push'"
"/usr/local/lib/ruby19/gems/1.9.1/gems/resque-1.0.0/lib/resque/job.rb:47:in `create'"
"/usr/local/lib/ruby19/gems/1.9.1/gems/resque-1.0.0/lib/resque.rb:141:in `enqueue'"

From some internet searching I found that to be some kind of issue introduced by ActiveSupport after 2.3.4. Modifying two lines in helpers.rb (from JSON(object) to object.to_json) resulted in my script running without errors, but broke 15 of resque's tests.

Workers control + status

First, let me thank you for great work :)

I find the design of resque a bit problematic, mainly the inability of workers to report any status to the server, and the inability of the server to control workers in any way. When running on JRuby, there's no (sane) way to make a worker skip a stale job.

It's a common requirement that I need to see the progress of my jobs, and in the best case, this would be extensible. I tried to pass a context object to the #perform method, so workers can update whatever information they need there, and it will get stored in the same way as "args", and then displayed in the web app.

I failed. Mostly because of how jobs are stored: lists, no IDs, no way to access a particular job. If resque came with some basic reporting on running workers, like progress (0-1) and the last line of output, that would solve most issues.

unsupported signal SIGQUIT (Resque::Worker on Windows)

Hello,

I'm experimenting with creating Resque workers that run on Windows (my use case is Excel spreadsheets generation using Windows specific libraries, either with MRI or IronRuby, where the Redis queue runs on Ubuntu on another machine).

So far I cannot run rake resque:work on Windows; I get an unsupported signal SIGQUIT error - full stack trace at http://gist.github.com/285731.

If someone else already got this working, I'll welcome the patch; otherwise I'm going to try to make this work.

Is it something you're willing to incorporate (Windows support) if it works?

Namespacing

Now that I'm dev-ing multiple apps using Resque on the same machine, their queues and hence the workers are overlapping. I was thinking it would be pretty trivial to add an extra namespace that you can define per app. This would prevent workers from picking up jobs from other apps and would give the benefit of displaying "namespace:queue" etc. in resque-web.

My question is really about API. Do you think its better to have a Resque.namespace accessor or would it make more sense to have the Resque.redis accessor accept a namespace, e.g.

Resque.redis = 'localhost:6777:namespace'

Capistrano recipe to deploy/restart workers?

First of all, resque is great, nice work!

One thing I wasn't entirely clear on from the readme was how to deploy and restart workers, especially if they are on separate machines.

It looks like in your config/resque.yml file you can set the ip of the box running redis in production. But if someone can provide some clues as to the next steps that would be awesome, thanks!

Fail if missing perform method

Having come from delayed_job, I ignorantly created an instance method named "perform" rather than a class method. Oddly, this didn't cause a failure in resque, but it certainly didn't work properly. It'd be nice if this were caught as a failure.

Destroying jobs?

Hey folks,

This seems like the sort of thing that is either so simple that it must be implemented already and I don't know how to do it, or it can't be implemented for some obvious reason that is beyond me. But is it possible to delete jobs once you put them on the queue? I've got queues with 100,000 jobs, and the only way to get rid of them is to create workers that "process" them by doing nothing.

I'd be willing to get my fingers wet with redis if that would help...but only if it's because my keyboard is crying tears of joy.

Wolfpack

Instead of the parent / child forking model, move to a master / worker model similar to Unicorn's.

On an ideal setup (REE + Linux) you'll have 2N Resque processes running at any time: N parents and N children.

We can cut this number down to N+1 by moving from a parent / child architecture to a master / workers architecture.

Then we don't need god to monitor stale workers. Instead the master can.

Example Rake tasks failing with uninitialized constant Resque::Version (patch incl.)

When running the rake resque:work task for the example demo app, it fails with the following error:

uninitialized constant Resque::Version
/.../rake.rb:2503:in `const_missing'
/.../resque/examples/demo/../../lib/resque/worker.rb:423:in `procline'

This is probably due to commit http://github.com/defunkt/resque/commit/5cdf011241142296eb87f37d6ce2055f48aebdf9

There is patch which puts the require into resque.rb at http://github.com/karmi/resque/tree/fix_version_error

Thanks!,

--karmi

Potential race condition

Hi, I'm not a ruby programmer but as far as I understand the following in worker.rb has a race condition:

# Attempts to grab a job off one of the provided queues. Returns
# nil if no job can be found.
def reserve
  queues.each do |queue|
    log! "Checking #{queue}"
    if job = Resque::Job.reserve(queue)
      log! "Found job on #{queue}"
      return job
    end
  end
  nil
end

I'm a python programmer, so say you have to following code:

for queue in queues:
    item = get_item_from_queue(queue)
    if item:
         return item

What if one of the queues was very busy and always yielded an item?
E.g. get_item_from_queue behaved like this:

def get_item_from_queue(queue):
     return "some message"

Even with just two queues, this is a potential problem: the other queues would never have a chance to get their items processed.

Or is my ruby-fu too weak?

Unused instance variable

The @watched_queues instance variable in resque.rb is read but never written to.

The following commit removes it entirely:
mrduncan/resque@298125b9b1aa29d6214e64d44e7803cbacc17193

Doesn't work with Nginx Passenger

Is the web interface supposed to work with passenger-2.2.9 and nginx/0.7.64?
Because I can't make the web interface work.
What I get is a 404 Not Found, while with the same setup other Sinatra apps work.
Does it have something to do with require 'sinatra/base' instead of just 'sinatra'?

I am using the default config.ru file

#!/usr/bin/env ruby
require 'logger'

$LOAD_PATH.unshift ::File.expand_path(::File.dirname(__FILE__) + '/lib')
require 'resque/server'

# Set the RESQUECONFIG env variable if you've a `resque.rb` or similar
# config file you want loaded on boot.
if ENV['RESQUECONFIG'] && ::File.exists?(::File.expand_path(ENV['RESQUECONFIG']))
  load ::File.expand_path(ENV['RESQUECONFIG'])
end

use Rack::ShowExceptions
run Resque::Server.new

And if you want to easily reproduce my environment just follow this quick tutorial.
http://blog.peepcode.com/tutorials/2010/nginx-passenger-script

resque/stats/keys view doesn't use URL helper

Here's a patch.

diff --git a/lib/resque/server/views/stats.erb b/lib/resque/server/views/stats.erb
index fc194cb..228888f 100644
--- a/lib/resque/server/views/stats.erb
+++ b/lib/resque/server/views/stats.erb
@@ -49,7 +49,7 @@
   <% for key in resque.keys.sort %>
     <tr>
       <th>
-        <a href="/stats/keys/<%= key %>"><%= key %></a>
+        <a href="<%= url "/stats/keys/#{key}" %>"><%= key %></a>
       </th>
       <td><%= redis_get_type key %></td>
       <td><%= redis_get_size key %></td>

Passing hash stringifies keys

Because resque calls to_json on the object passed to the perform method, hash keys are stringified. Is there a reason you use JSON instead of something else (e.g. YAML) that could preserve the ruby object better?

Resque-web should refresh workers

There should be a button or something to refresh the workers list. If I run 2 workers, I'll see 2 workers in resque-web. Now shut down the workers: I still see 2 workers in resque-web, whereas none exist.

Improve performance of pruning dead workers

The Worker#prune_dead_workers runs the ps command (in the worker_pids method) once for every worker in Redis to get a list of running pids on the current machine. The following patch should improve the performance of Worker#prune_dead_workers by calling ps only once.

mrduncan/resque@b9bf9871c369bcc605d4e82a49f380f6c2567c3d

IronRuby Resque::Worker progress

Hi,

I noticed two behaviours for IronRuby workers (that I didn't get with MRI workers on Windows btw).
1 - the worker seems to work fine, but disappears from the workers list in resque-web
or 2 - the worker raises uninitialized constant Errno::EAGAIN

I'm just tracking these down here in case someone tries to follow the same path, and will complete the issue with my findings.

Workers don't handle USR1/TERM signals properly when redis hangs

We've been seeing workers hang and stop responding to -USR1 signals every once in a while. Here's a relevant snippet from the logs with backtrace:

** [03:55:27 2010-01-12] 24383: Checking high
** [03:55:27 2010-01-12] 24383: Checking low
** [03:55:27 2010-01-12] 24383: Found job on low
** [03:55:27 2010-01-12] 24383: got: (Job{low} | Graphite | ["<redacted>", "<redacted>"])
** [03:55:27 2010-01-12] 24383: resque: Forked 29087 at 1263297327
** [03:55:27 2010-01-12] 24383: Checking critical
** [03:55:27 2010-01-12] 24383: Checking high
** [03:55:27 2010-01-12] 24383: Checking low
** [03:55:27 2010-01-12] 24383: Found job on low
** [03:55:27 2010-01-12] 24383: got: (Job{low} | GitHub::Jobs::UpdateNetworkGraph | ["<redacted>", "<redacted>"])
rake aborted!
SIGHUP
/data/github/current/vendor/gems/redis_rb/lib/redis.rb:211:in `write'
/data/github/current/vendor/gems/redis_rb/lib/redis.rb:211:in `process_command'
/data/github/current/vendor/gems/redis_rb/lib/redis.rb:205:in `raw_call_command'
/data/github/current/vendor/gems/redis_rb/lib/redis.rb:220:in `synchronize'
/data/github/current/vendor/gems/redis_rb/lib/redis.rb:220:in `maybe_lock'
/data/github/current/vendor/gems/redis_rb/lib/redis.rb:205:in `raw_call_command'
/data/github/current/vendor/gems/redis_rb/lib/redis.rb:171:in `call_command'
/data/github/current/vendor/gems/redis_rb/lib/redis.rb:255:in `incr'
/data/github/current/vendor/gems/redis_rb/lib/redis/namespace.rb:99:in `send'
/data/github/current/vendor/gems/redis_rb/lib/redis/namespace.rb:99:in `method_missing'
/data/github/current/vendor/plugins/resque/lib/resque/stat.rb:27:in `incr'
/data/github/current/vendor/plugins/resque/lib/resque/stat.rb:32:in `<<'
/data/github/current/vendor/plugins/resque/lib/resque/worker.rb:320:in `processed!'
/data/github/current/vendor/plugins/resque/lib/resque/worker.rb:309:in `done_working'
/data/github/current/vendor/plugins/resque/lib/resque/worker.rb:286:in `unregister_worker'
/data/github/current/vendor/plugins/resque/lib/resque/worker.rb:139:in `work'
/data/github/current/vendor/plugins/resque/tasks/../lib/resque/tasks.rb:24
/usr/lib/ruby/1.8/rake.rb:546:in `call'
<snip rake stuff>
** [03:55:27 2010-01-12] 24383: resque: Forked 29098 at 1263297327
** [04:01:34 2010-01-12] 24383: Checking critical
** [04:11:58 2010-01-12] 24383: Exiting...
** [04:12:20 2010-01-12] 24383: Exiting...
** [04:12:20 2010-01-12] 24383: Exiting...
** [04:12:20 2010-01-12] 24383: Exiting...
** [04:12:21 2010-01-12] 24383: Exiting...
** [04:12:21 2010-01-12] 24383: Exiting...
** [04:12:24 2010-01-12] 24383: Exiting...

Killing the process with SIGTERM, SIGINT, or SIGUSR1 has no effect. Killing with SIGHUP causes the backtrace to be written and the worker to exit since there's no handler registered for that signal.

The root cause seems to be a hang writing data to redis immediately after a job completes. The incr call never returns and so the @shutdown variable set when the SIGUSR1 comes in is not checked.

The redis client supports timeouts but only when the system_timer gem is installed under ruby 1.8.x. That would probably fix us but we should consider doing something in resque, like exiting in the signal handler instead of setting the @shutdown variable.

Resque workers should try to reconnect to redis instead of crashing on no connection to Redis

Hey All,

Thanks a lot for this excellent product. I'm trying to use it for my small apps backend.

Resque workers currently crash once the connection breaks between the workers and Redis.

This is very problematic in the sense that in a distributed environment one needs to restart all the workers in all the deployments.

Wouldn't it be better to have workers attempt reconnection to Redis automatically?

cheers
dg

COUNT=n not working for spawning multiple workers

Hello all,
This is my first post to the abode of Git :).

I'm trying to use resque for backend job processing in my Rails app (v 2.0.2). I've installed resque from git as a Rails plugin. But even if I set the COUNT variable, only one worker is spawned.

COUNT=x QUEUE=request_queue rake environment resque:work

This I'm verifying from the resque-web interface.

Can somebody help me out on this?

Another query: how can I use the god config that comes with Resque? There is not much information available on this.

Thanks all in advance
dg

Fork support detection not working for IronRuby

Hi,

I managed to start a worker with IronRuby (1.0-RC1), which is already great :)

Now when the job must be processed, Resque::Worker tries to detect if Kernel::fork is supported or not. In the case of IronRuby, it looks like Kernel.fork doesn't exist at all, so it raises an error:

undefined method `fork' for Kernel:Module
./gems/resque/lib/resque/worker.rb:192:in `fork'
./gems/resque/lib/resque/worker.rb:116:in `work'

We could either detect RUBY_PLATFORM == 'ironruby' or test Kernel.methods.include?('fork') to ensure this doesn't blow up (I'm going to test RUBY_PLATFORM in my case, see if everything else runs or not).

Thoughts ?

default queue

is there any reason not to provide a default queue for jobs when none is specified?

it could even be configurable:
Resque.default_queue = :default

Resque config file not getting loaded by resque-web

It looks like resque-web is no longer loading its configuration.

$ cat "puts 'LOADED' > test.rb
$ resque-web test.rb -F 

Doesn't output "LOADED".

I think the culprit is the Vegas change. In resque-web, the :resque_config lambda that is passed to Vegas::Runner is not being executed. Looking at the Vegas code, I think what was intended was to pass the lambda as a block to the Vegas::Runner initializer (rather than a parameter) so it can do something with the options, but I'm not sure.

Can workers set their queue or is it fixed?

Does each class decide its queue up front with the @queue class instance variable, or can it change it in the perform method or some other way?

I was hoping to keep all the logic in one class, but occasionally set one to a high priority.

Rack's File Class is incompatible when using Ruby 1.9

Here's the error:

With Ruby 1.9.1:

irb(main):001:0> require 'rack'
=> true
irb(main):002:0> File.dirname("a")
=> "."
irb(main):003:0> Rack::Builder.new { File.dirname("a") }
NoMethodError: undefined method `dirname' for Rack::File:Class
    from (irb):3:in `block in irb_binding'
    from /usr/lib/ruby19/gems/1.9.1/gems/rack-1.0.1/lib/rack/builder.rb:29:in   `instance_eval'
    from /usr/lib/ruby19/gems/1.9.1/gems/rack-1.0.1/lib/rack/builder.rb:29:in   `initialize'
    from (irb):3:in `new'
    from (irb):3
    from /usr/bin/irb:12:in `<main>'
irb(main):004:0> Rack::Builder.new { ::File.dirname("a") }
=> #<Rack::Builder:0xb9541edc @ins=[]>
irb(main):005:0>

With Ruby 1.8.7, it works fine:

irb(main):001:0> require 'rack'
=> true
irb(main):002:0> Rack::Builder.new { File.dirname("a") }
=> #<Rack::Builder:0x1016ac208 @ins=[]>
irb(main):003:0> Rack::Builder.new { ::File.dirname("a") }
=> #<Rack::Builder:0x1016a2820 @ins=[]>
irb(main):004:0>

And, here's the patch: http://github.com/arthurgeek/resque/commit/afee4695e68283132d5104d6cf522dfa5e27164e

Serious issue: workers crash with each disk write by redis

Hey nevans,

Happy new year! I'm trying to use Resque for my backend and it has hit a serious problem.

I deployed Redis and a resque-based Rails app on an Ubuntu 8.04 VM (2GB RAM). I am using 16 workers at this moment. But in the last two or three days I've seen workers crashing multiple times without a trace.

Have you faced any similar scenario at your end? And in the meantime, can you suggest how to run god for the workers? I'm providing the redis log in the post for your reference.

Thanks a lot in advance for your help so far.

You can have a look into the redis log:

06 Jan 07:02:28 . 14 clients connected (0 slaves), 19414 bytes in use, 0 shared objects
06 Jan 07:02:33 . DB 0: 29 keys (0 volatile) in 32 slots HT.
06 Jan 07:02:33 . 14 clients connected (0 slaves), 19414 bytes in use, 0 shared objects
06 Jan 07:02:38 . DB 0: 29 keys (0 volatile) in 32 slots HT.
06 Jan 07:02:38 . 14 clients connected (0 slaves), 19414 bytes in use, 0 shared objects
06 Jan 07:02:43 . DB 0: 29 keys (0 volatile) in 32 slots HT.
06 Jan 07:02:43 . 14 clients connected (0 slaves), 19414 bytes in use, 0 shared objects
06 Jan 07:02:48 . DB 0: 29 keys (0 volatile) in 32 slots HT.
06 Jan 07:02:48 . 14 clients connected (0 slaves), 19414 bytes in use, 0 shared objects
06 Jan 07:02:53 . DB 0: 29 keys (0 volatile) in 32 slots HT.
06 Jan 07:02:53 . 14 clients connected (0 slaves), 19414 bytes in use, 0 shared objects
06 Jan 07:02:58 . DB 0: 29 keys (0 volatile) in 32 slots HT.
06 Jan 07:02:58 . 14 clients connected (0 slaves), 19414 bytes in use, 0 shared objects
06 Jan 07:03:03 . DB 0: 29 keys (0 volatile) in 32 slots HT.
06 Jan 07:03:03 . 14 clients connected (0 slaves), 19414 bytes in use, 0 shared objects
06 Jan 07:03:06 . Accepted 127.0.0.1:51600
06 Jan 07:03:06 . Accepted 127.0.0.1:51601
06 Jan 07:03:06 - 1 changes in 3600 seconds. Saving...
06 Jan 07:03:06 - DB saved on disk
06 Jan 07:03:07 - Background saving started by pid 7120
06 Jan 07:03:07 . Client closed connection
06 Jan 07:03:07 . Client closed connection
06 Jan 07:03:07 . Client closed connection
06 Jan 07:03:07 . Client closed connection
06 Jan 07:03:07 . Client closed connection
06 Jan 07:03:07 . Client closed connection
06 Jan 07:03:07 . Client closed connection
06 Jan 07:03:07 . Client closed connection
06 Jan 07:03:07 . Client closed connection
06 Jan 07:03:07 . Client closed connection
06 Jan 07:03:07 . Client closed connection
06 Jan 07:03:07 . Client closed connection
06 Jan 07:03:07 . Client closed connection
06 Jan 07:03:07 . Client closed connection
06 Jan 07:03:07 . Accepted 127.0.0.1:51602
06 Jan 07:03:07 . Accepted 127.0.0.1:51603
06 Jan 07:03:07 . Client closed connection
06 Jan 07:03:07 . Client closed connection
06 Jan 07:03:07 . Client closed connection
06 Jan 07:03:07 . Client closed connection
06 Jan 07:03:08 - Background saving terminated with success
06 Jan 07:03:09 . DB 0: 13 keys (0 volatile) in 64 slots HT.
06 Jan 07:03:09 . 0 clients connected (0 slaves), 19522 bytes in use, 0 shared objects
06 Jan 07:03:14 . DB 0: 13 keys (0 volatile) in 64 slots HT.
06 Jan 07:03:14 . 0 clients connected (0 slaves), 19522 bytes in use, 0 shared objects
06 Jan 07:03:19 . DB 0: 13 keys (0 volatile) in 64 slots HT.
06 Jan 07:03:19 . 0 clients connected (0 slaves), 19522 bytes in use, 0 shared objects
06 Jan 07:03:24 . DB 0: 13 keys (0 volatile) in 64 slots HT.
06 Jan 07:03:24 . 0 clients connected (0 slaves), 19522 bytes in use, 0 shared objects
06 Jan 07:03:29 . DB 0: 13 keys (0 volatile) in 64 slots HT.

Status monitoring per job

To provide feedback to a user about his just-queued job, I've added a simple mechanism in my branch at http://github.com/pietern/resque/commits/job_status .

This allows you to set an ivar @monitor in the class you wish to queue. Resque then adds an ID to the payload and sets the job status to "queued". When the job is started, the status changes to "started" and then to either "done" or "failed". Inbetween, the perform method can modify the jobs status via Resque.current_job.status.

Because Resque forks a new child for every job, it is possible to use this accessor on the Resque module and not introduce race conditions.

Statuses expire after a configurable delay that currently defaults to 5 minutes, to prevent Redis from running out of memory.

When a new job is created that needs to be monitored, Resque returns the job ID on Resque#enqueue.

To provide easy access to setting the status, there is a mixin called Resque::Mixin::Status. This mixin automatically sets the @monitor ivar.

Upon completion, the job status only gets set to "done" when the performing method has not touched the status.

This means that a custom status like "50/50" stays intact.

class SomeJob
  include Resque::Mixin::Status

  def self.perform(*args)
    resque_status "00/50"
    # ...
    resque_status "50/50"
  end
end

Resque.enqueue(SomeJob)

A job's status can be retrieved by Resque::Job.status(job_id).
