
resque

Resque is a Redis-backed Ruby library for creating background jobs, placing them on multiple queues, and processing them later.

Resque: Connection refused - Unable to connect to Redis on localhost:6379

I have followed the instructions to install resque, but now when I try to spawn a worker with this command I get a connection error:

$ QUEUE=mailer rake environment resque:work --trace

this is the error that I get:

Connection refused - Unable to connect to Redis on localhost:6379
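
For what it's worth, a quick way to confirm from Ruby that Redis is actually reachable (a minimal sketch, assuming the default localhost:6379 that Resque falls back to; not taken from the setup above):

# Uses the redis gem that Resque itself depends on.
require 'redis'

redis = Redis.new(:host => 'localhost', :port => 6379)
puts redis.ping   # prints "PONG" when the server is up; raises a connection error when it is not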


Source: (StackOverflow)

Resque vs. Sidekiq [closed]

I'm working on a project with a friend where we've been using Resque to process various commands from data input inside our Rails application, on a minute-to-minute basis.

We've been toying with the idea of using Sidekiq because it is multi-threaded, so it won't be a memory hog and won't need to boot a Ruby environment for each worker.

I'm hoping to gather some thoughts and opinions from people who have used Resque and Sidekiq in real applications and can explain the differences.

So, what are the pros and cons of Sidekiq against Resque?


Source: (StackOverflow)

How do I clear stuck/stale Resque workers?

As you can see from the attached image, I've got a couple of workers that seem to be stuck. Those processes shouldn't take longer than a couple of seconds.

[screenshot showing the stuck workers]

I'm not sure why they won't clear or how to manually remove them.

I'm on Heroku using Resque with Redis-to-Go and HireFire to automatically scale workers.
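
For reference, the kind of manual cleanup that is usually suggested looks roughly like this (a sketch only, using the standard Resque 1.x worker API from a console connected to the same Redis):

# Unregister every worker Resque still believes is running.
# prune_dead_workers only clears workers whose host/pid it can verify locally,
# so on Heroku the blunt unregister_worker approach is often what's left.
Resque.workers.each do |worker|
  worker.unregister_worker
end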


Source: (StackOverflow)

Error with resque-web: Couldn't get a file descriptor referring to the console

I'm trying to start resque-web, but this error occurs:

[Sun Mar 06 05:27:48 +0000 2011] Starting 'resque-web'...
[Sun Mar 06 05:27:48 +0000 2011] trying port 8281...
Couldn't get a file descriptor referring to the console

This error occurs on both Ubuntu 10.04 and 10.10.

resque-web only starts with the -F option (don't daemonize; run in the foreground), so it must be something that happens when the process is daemonized.

Any idea how I can solve this?

Regards,


Source: (StackOverflow)

Node.js workers/background processes

How can I create and use background jobs in node.js?

I've come across two libs (node-resque and node-worker) but would like to know if there's something more widely used.


Source: (StackOverflow)

Resque, Devise and admin authentication

Using Resque and Devise, I have roles for User, like:

User.first.role #=> admin
User.last.role #=> regular

I want to set up authentication for Resque. So, inside config/routes.rb I have:

namespace :admin do
  mount Resque::Server.new, :at => "/resque", :as => :resque
end

And, of course, it's accessible to all logged-in users.

Is there any way to use the role from User.role? It should be accessible only to users with the 'admin' role.
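
One approach that is often suggested (a sketch only, using the routes-level authenticate helper from newer Devise versions; the role check assumes the string role attribute shown above):

namespace :admin do
  authenticate :user, lambda { |u| u.role == 'admin' } do
    mount Resque::Server.new, :at => "/resque", :as => :resque
  end
end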

Thanks a lot.


Source: (StackOverflow)

delayed_jobs vs resque vs beanstalkd?

Here are my needs:

  • Enqueue_in(10.hours, ...) (DJ syntax is perfect.)
  • Multiple workers, running concurrently. (Resque or beanstalkd are good for this, but not DJ.)
  • Must handle pushing and popping 100 jobs a second. (I will need to run a test to make sure, but I think DJ can't handle this many jobs.)

Resque and beanstalkd don't support enqueue_in.

There is a plugin (resque_scheduler) that does it, but I'm not sure how stable it is.
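
For reference, resque-scheduler's delayed API looks roughly like this (a sketch; ReminderJob and user_id are placeholders, not from this post):

# 'resque_scheduler' in older gem versions, 'resque-scheduler' in newer ones.
require 'resque_scheduler'

user_id = 42                                               # placeholder argument
Resque.enqueue_in(10.hours, ReminderJob, user_id)          # run roughly 10 hours from now
Resque.enqueue_at(Time.now + 3600, ReminderJob, user_id)   # run at a specific time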

Our environment is on Amazon, and they rolled out beanstalkd for free for anyone with Amazon instances; that is a plus for us, but I'm still not sure what the best option is here.

We run Rails 2.3, but we are bringing it up to speed with Rails 3.0.3 soon.

But what is my best choice here? Am I missing another gem that does this job better?

I feel the only option that actually works right now is resque_scheduler.

Edit:

Sidekiq (https://github.com/mperham/sidekiq) is another option that you should check out.


Source: (StackOverflow)

Run resque in background

I have a working Rails app with a resque queue system, which works very well. However, I lack a good way of actually daemonizing the resque workers.

I can start them just fine by running rake resque:work QUEUE="*", but I guess the point is not to have your workers running in the foreground. For some reason nobody seems to address this issue. On the official resque GitHub page they claim you can do something like this:

PIDFILE=./resque.pid BACKGROUND=yes QUEUE="*" rake resque:work

well - it doesn't fork into the background here at least.
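
For what it's worth, BACKGROUND is only honoured by newer resque releases (roughly 1.20 and later); older versions of the rake task silently ignore it. A rough illustration of what the flag amounts to, in plain Ruby (not resque's actual implementation, and it assumes the Rails environment is already loaded so jobs can find their classes):

require 'resque'

worker = Resque::Worker.new('*')                            # listen on all queues, like QUEUE="*"
Process.daemon(true)                                        # detach from the terminal (Ruby 1.9+)
File.open('./resque.pid', 'w') { |f| f.write(Process.pid) } # same idea as PIDFILE
worker.work(5)                                              # poll for jobs every 5 seconds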


Source: (StackOverflow)

Resque.enqueue failing on second run

I am trying to port an app from Rails 3.0.3 to Rails 3.1rc... I don't think I've missed anything in terms of configuration. The process works perfectly in Rails 3.0.x but not in 3.1rc.

In console, I do:

Resque.enqueue(EncodeSong, Song.find(20).id, Song.find(20).unencoded_url)

Everything works so far: resque-web reports no failed jobs, and I get the two 'puts' outputs from the EncodeSong module.

However, running Resque.enqueue(EncodeSong, Song.find(20).id, Song.find(20).unencoded_url) a second time returns the following error in resque-web (below). To make the error go away, I have to kill the process that's running QUEUE=* rake environment resque:work and rerun it in the console window. But the problem comes back the next time I call Resque.enqueue() after the first.

Class
    EncodeSong
Arguments

    20
    "https://bucket_name.s3.amazonaws.com/unencoded/users/1/songs/test.mp3"

Exception
    PGError
Error
    server closed the connection unexpectedly This probably means the server terminated abnormally before or while processing the request.

    /Users/Chris/.rvm/gems/ruby-1.9.2-p136@railspre/gems/activerecord-3.1.0.rc1/lib/active_record/connection_adapters/postgresql_adapter.rb:272:in `exec'
    /Users/Chris/.rvm/gems/ruby-1.9.2-p136@railspre/gems/activerecord-3.1.0.rc1/lib/active_record/connection_adapters/postgresql_adapter.rb:272:in `block in clear_cache!'
    /Users/Chris/.rvm/gems/ruby-1.9.2-p136@railspre/gems/activerecord-3.1.0.rc1/lib/active_record/connection_adapters/postgresql_adapter.rb:271:in `each_value'
    /Users/Chris/.rvm/gems/ruby-1.9.2-p136@railspre/gems/activerecord-3.1.0.rc1/lib/active_record/connection_adapters/postgresql_adapter.rb:271:in `clear_cache!'
    /Users/Chris/.rvm/gems/ruby-1.9.2-p136@railspre/gems/activerecord-3.1.0.rc1/lib/active_record/connection_adapters/postgresql_adapter.rb:299:in `disconnect!'
    /Users/Chris/.rvm/gems/ruby-1.9.2-p136@railspre/gems/activerecord-3.1.0.rc1/lib/active_record/connection_adapters/abstract/connection_pool.rb:191:in `block in disconnect!'
    /Users/Chris/.rvm/gems/ruby-1.9.2-p136@railspre/gems/activerecord-3.1.0.rc1/lib/active_record/connection_adapters/abstract/connection_pool.rb:190:in `each'
    /Users/Chris/.rvm/gems/ruby-1.9.2-p136@railspre/gems/activerecord-3.1.0.rc1/lib/active_record/connection_adapters/abstract/connection_pool.rb:190:in `disconnect!'
    /Users/Chris/.rvm/gems/ruby-1.9.2-p136@railspre/gems/activesupport-3.1.0.rc1/lib/active_support/core_ext/module/synchronization.rb:35:in `block in disconnect_with_synchronization!'
    /Users/Chris/.rvm/rubies/ruby-1.9.2-p136/lib/ruby/1.9.1/monitor.rb:201:in `mon_synchronize'
    /Users/Chris/.rvm/gems/ruby-1.9.2-p136@railspre/gems/activesupport-3.1.0.rc1/lib/active_support/core_ext/module/synchronization.rb:34:in `disconnect_with_synchronization!'
    /Users/Chris/.rvm/gems/ruby-1.9.2-p136@railspre/gems/activerecord-3.1.0.rc1/lib/active_record/connection_adapters/abstract/connection_pool.rb:407:in `remove_connection'
    /Users/Chris/.rvm/gems/ruby-1.9.2-p136@railspre/gems/activerecord-3.1.0.rc1/lib/active_record/connection_adapters/abstract/connection_specification.rb:116:in `remove_connection'
    /Users/Chris/.rvm/gems/ruby-1.9.2-p136@railspre/gems/activerecord-3.1.0.rc1/lib/active_record/connection_adapters/abstract/connection_specification.rb:79:in `establish_connection'
    /Users/Chris/.rvm/gems/ruby-1.9.2-p136@railspre/gems/activerecord-3.1.0.rc1/lib/active_record/connection_adapters/abstract/connection_specification.rb:60:in `establish_connection'
    /Users/Chris/.rvm/gems/ruby-1.9.2-p136@railspre/gems/activerecord-3.1.0.rc1/lib/active_record/connection_adapters/abstract/connection_specification.rb:55:in `establish_connection'
    /Users/Chris/Sites/site_name/lib/tasks/resque.rake:17:in `block (2 levels) in <top (required)>'
    /Users/Chris/.rvm/gems/ruby-1.9.2-p136@railspre/gems/resque-1.16.1/lib/resque/worker.rb:355:in `call'
    /Users/Chris/.rvm/gems/ruby-1.9.2-p136@railspre/gems/resque-1.16.1/lib/resque/worker.rb:355:in `run_hook'
    /Users/Chris/.rvm/gems/ruby-1.9.2-p136@railspre/gems/resque-1.16.1/lib/resque/worker.rb:162:in `perform'
    /Users/Chris/.rvm/gems/ruby-1.9.2-p136@railspre/gems/resque-1.16.1/lib/resque/worker.rb:130:in `block in work'
    /Users/Chris/.rvm/gems/ruby-1.9.2-p136@railspre/gems/resque-1.16.1/lib/resque/worker.rb:116:in `loop'
    /Users/Chris/.rvm/gems/ruby-1.9.2-p136@railspre/gems/resque-1.16.1/lib/resque/worker.rb:116:in `work'
    /Users/Chris/.rvm/gems/ruby-1.9.2-p136@railspre/gems/resque-1.16.1/lib/resque/tasks.rb:27:in `block (2 levels) in <top (required)>'
    /Users/Chris/.rvm/gems/ruby-1.9.2-p136@railspre/gems/rake-0.9.0/lib/rake/task.rb:205:in `call'
    /Users/Chris/.rvm/gems/ruby-1.9.2-p136@railspre/gems/rake-0.9.0/lib/rake/task.rb:205:in `block in execute'
    /Users/Chris/.rvm/gems/ruby-1.9.2-p136@railspre/gems/rake-0.9.0/lib/rake/task.rb:200:in `each'
    /Users/Chris/.rvm/gems/ruby-1.9.2-p136@railspre/gems/rake-0.9.0/lib/rake/task.rb:200:in `execute'
    /Users/Chris/.rvm/gems/ruby-1.9.2-p136@railspre/gems/rake-0.9.0/lib/rake/task.rb:158:in `block in invoke_with_call_chain'
    /Users/Chris/.rvm/rubies/ruby-1.9.2-p136/lib/ruby/1.9.1/monitor.rb:201:in `mon_synchronize'
    /Users/Chris/.rvm/gems/ruby-1.9.2-p136@railspre/gems/rake-0.9.0/lib/rake/task.rb:151:in `invoke_with_call_chain'
    /Users/Chris/.rvm/gems/ruby-1.9.2-p136@railspre/gems/rake-0.9.0/lib/rake/task.rb:144:in `invoke'
    /Users/Chris/.rvm/gems/ruby-1.9.2-p136@railspre/gems/rake-0.9.0/lib/rake/application.rb:112:in `invoke_task'
    /Users/Chris/.rvm/gems/ruby-1.9.2-p136@railspre/gems/rake-0.9.0/lib/rake/application.rb:90:in `block (2 levels) in top_level'
    /Users/Chris/.rvm/gems/ruby-1.9.2-p136@railspre/gems/rake-0.9.0/lib/rake/application.rb:90:in `each'
    /Users/Chris/.rvm/gems/ruby-1.9.2-p136@railspre/gems/rake-0.9.0/lib/rake/application.rb:90:in `block in top_level'
    /Users/Chris/.rvm/gems/ruby-1.9.2-p136@railspre/gems/rake-0.9.0/lib/rake/application.rb:129:in `standard_exception_handling'
    /Users/Chris/.rvm/gems/ruby-1.9.2-p136@railspre/gems/rake-0.9.0/lib/rake/application.rb:84:in `top_level'
    /Users/Chris/.rvm/gems/ruby-1.9.2-p136@railspre/gems/rake-0.9.0/lib/rake/application.rb:62:in `block in run'
    /Users/Chris/.rvm/gems/ruby-1.9.2-p136@railspre/gems/rake-0.9.0/lib/rake/application.rb:129:in `standard_exception_handling'
    /Users/Chris/.rvm/gems/ruby-1.9.2-p136@railspre/gems/rake-0.9.0/lib/rake/application.rb:59:in `run'
    /Users/Chris/.rvm/gems/ruby-1.9.2-p136@railspre/gems/rake-0.9.0/bin/rake:31:in `<top (required)>'
    /Users/Chris/.rvm/gems/ruby-1.9.2-p136@railspre/bin/rake:19:in `load'
    /Users/Chris/.rvm/gems/ruby-1.9.2-p136@railspre/bin/rake:19:in `<main>'

Here is the rest of my relevant code:

/config/initializers/resque.rb

require 'resque'

uri = URI.parse(APP_CONFIG['redis_to_go_url'])
Resque.redis = Redis.new(:host => uri.host, :port => uri.port, :password => uri.password)

# Load all jobs at /app/jobs
Dir["#{Rails.root}/app/jobs/*.rb"].each { |file| require file }

/app/jobs/encode_song.rb

module EncodeSong
  @queue = :encode_song

  def self.perform(media_id, s3_file_url)
    begin
      media = Song.find(media_id)
      puts 'foo11111'
      puts media.id
    rescue
      puts "Error #{$!}"
    end
  end
end

lib/tasks/resque.rake

require 'resque/tasks'

task "resque:setup" => :environment do
  ENV['QUEUE'] = '*'

  # ONLY needed on Heroku, since they are still running PostgreSQL 8 on their shared plan.
  # This block of code is not needed on PostgreSQL 9, as tested in the local environment.
  # Issue: my best guess is that the master resque process establishes a connection to the db
  # while loading the Rails app classes, models, etc., and that connection becomes corrupted
  # in the fork()ed process (on exit?). A possible fix is to re-establish the AR connection
  # after a Resque fork.
  Resque.after_fork do |job|
    ActiveRecord::Base.establish_connection
  end

end

desc "Alias for resque:work (To run workers on Heroku)"
task "jobs:work" => "resque:work"

I'm not very sure, but it may be somewhat related to this issue. My guess is that the master resque process establishes a connection to the database while loading the Rails app classes, models, etc., and that connection becomes corrupted in the fork()ed process (on exit?).
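
For context, the pattern that usually gets suggested for this class of problem also drops the parent's connection before forking, so the child never inherits a live PostgreSQL socket. A sketch only, extending the after_fork hook already shown above:

# In lib/tasks/resque.rake (or an initializer), alongside the existing after_fork:
Resque.before_fork do |job|
  ActiveRecord::Base.connection_pool.disconnect!   # parent gives up its connection
end

Resque.after_fork do |job|
  ActiveRecord::Base.establish_connection          # child opens a fresh one
end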

Any help / direction will be appreciated.

EDIT:

If I remove the following block from lib/tasks/resque.rake:

  Resque.after_fork do |job|
    ActiveRecord::Base.establish_connection
  end

And in console, run Resque.enqueue(EncodeSong, Song.find(20).id, Song.find(20).unencoded_url)

I get a new error (in console where QUEUE=* rake environment resque:work was run):

Error PGError: ERROR:  prepared statement "a3" already exists
: SELECT  "songs".* FROM "songs"  WHERE "songs"."id" = $1 LIMIT 1

It seems this may be a bug in the adapter? I could be wrong here. Your thoughts?


Source: (StackOverflow)

How to destroy jobs enqueued by resque workers?

I'm using Resque on a Rails 3 project to handle jobs that are scheduled to run every 5 minutes. I recently did something that snowballed the creation of these jobs, and the queue has hit over 1000 jobs. I fixed the issue that caused that many jobs to be queued, but now the problem is that the jobs created by the bug are still there, which makes it difficult to test anything, since every new job is added to a queue with 1000+ jobs already in it. I can't seem to stop these jobs. I have tried removing the queue from redis-cli using the flushall command, but it didn't work. Am I missing something? I can't seem to find a way of getting rid of these jobs.
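
For reference, Resque has queue-level removal APIs that are usually the first thing to try here (a sketch; the queue and job class names are placeholders, not from this post):

# From a Rails console connected to the same Redis that the workers use:
Resque::Job.destroy(:my_queue, MyRecurringJob)   # remove queued jobs of one class
Resque.remove_queue(:my_queue)                   # or drop the whole queue outright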


Source: (StackOverflow)

How to deploy resque workers in production?

The GitHub guys recently released their background processing app, which uses Redis: http://github.com/defunkt/resque (announcement: http://github.com/blog/542-introducing-resque)

I have it working locally, but I'm struggling to get it working in production. Has anyone:

  1. written a Capistrano recipe to deploy workers (controlling the number of workers, restarting them, etc.)? A rough sketch follows the list.
  2. deployed workers to machine(s) separate from where the main app is running? What settings were needed there?
  3. gotten Redis to survive a reboot on the server? (I tried putting it in cron, but no luck.)
  4. worked resque-web (their excellent monitoring app) into your deploy?
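
A very rough Capistrano 2-style sketch for point 1, with placeholder paths and no claim that these are the right settings (BACKGROUND=yes also needs a reasonably recent resque):

namespace :resque do
  desc "Restart resque workers after deploy"
  task :restart_workers, :roles => :worker do
    # QUIT lets the current job finish before the old worker exits.
    run "kill -QUIT `cat #{shared_path}/pids/resque.pid` || true"
    run "cd #{current_path} && RAILS_ENV=production PIDFILE=#{shared_path}/pids/resque.pid BACKGROUND=yes QUEUE='*' bundle exec rake resque:work"
  end
end

after "deploy:restart", "resque:restart_workers"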

Thanks!

P.S. I posted an issue on GitHub about this, but there has been no response yet. Hoping some SO gurus can help on this one, as I'm not very experienced with deployments. Thank you!


Source: (StackOverflow)

How to bridge the testing using Resque with Rspec examples?

I'm confused about implementing Resque in parallel with RSpec examples. The following is a class that calls an expensive method, ChangeGenerator.generate(self):

class SomeClass
  ...
  ChangeGenerator.generate(self)
  ...
end

After implementing Resque, the above class changed to the following, and I added a ChangeRecorderJob class.

class SomeClass
  ...
  Resque.enqueue(ChangeRecorderJob, self.id)
  ...
end

class ChangeRecorderJob
  @queue = :change_recorder_job

  def self.perform(noti_id)
    notification = Notification.find(noti_id)    
    ChangeGenerator.generate(notification)
  end
end

It works perfectly. But I have 2 concerns.

Before, my example spec used to test the whole stack of the .generate(self) method. But now, since I've pushed that into a Resque job, how can I bridge my examples so that the same test stays green without isolating it? Or do I have to isolate the test?
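
One hedged way to keep the original end-to-end spec green is to run Resque inline in the test environment, so Resque.enqueue calls perform immediately instead of going through Redis (Resque.inline is available in resque 1.10+; the resque_spec gem is another common route):

# spec/spec_helper.rb (sketch)
RSpec.configure do |config|
  config.before(:each) do
    Resque.inline = true   # jobs run synchronously inside the example
  end
end

With inline mode the spec still exercises ChangeRecorderJob.perform and ChangeGenerator.generate together; the trade-off is that the Redis round-trip itself is no longer covered.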

And lastly, if I have 10 jobs to enqueue, do I have to create 10 separate job classes, each with its own self.perform method?


Source: (StackOverflow)

Gems/Services for autoscaling Heroku's dynos and workers

I want to know if there are any good solutions for autoscaling dynos AND workers on Heroku in a production environment (probably a different solution for each of those, as they are pretty unrelated). What are you or your companies using for this?

I found lots of options, but none of them seem really mature for a production environment. There is Heroscale, which seems to introduce some latency, as it does not run locally, and I have also heard of some downtime. There are modifications of delayed_job, which have not been updated for a long time and have some issues with current bundlers. There are also some alternatives related to resque, which don't seem to handle some HTTP exceptions very well (resulting in app crashes), and others which seem to need an always-running worker to schedule the other workers and may also suffer from the same HTTP exception problems.

Well, in the end: what is being used nowadays for autoscaling Heroku's dynos and workers in a production environment with Rails 3?

Thanks in advance.


Source: (StackOverflow)

Rails Resque workers fail with PGError: server closed the connection unexpectedly

I have a site running a Rails application and resque workers in production mode, on Ubuntu 9.10, Rails 2.3.4, ruby-ee 2010.01, and PostgreSQL 8.4.2.

The workers constantly raise errors: PGError: server closed the connection unexpectedly.

My best guess is that the master resque process establishes a connection to the database while loading the Rails app classes (e.g. Authlogic does that when User.acts_as_authentic is used), that connection becomes corrupted in the fork()ed process (on exit?), and so the next forked children get a kind of broken global ActiveRecord::Base.connection.

I could reproduce very similar behaviour with this sample code imitating the fork/processing in a resque worker. (AFAIK, users of libpq are recommended to recreate connections in the forked process anyway; otherwise it's not safe.)

But the odd thing is that when I use pgbouncer or pgpool-II instead of a direct PostgreSQL connection, such errors do not appear.

So, the question is: where and how should I dig to find out why it is broken with a plain connection but works with connection pools? Or is there a reasonable workaround?


Source: (StackOverflow)

Redis with Resque and Rails: ERR command not allowed when used memory > 'maxmemory'

When using Redis, it gives me the error:

ERR command not allowed when used memory > 'maxmemory'

The info command reveals:

redis 127.0.0.1:6379> info
redis_version:2.4.10
redis_git_sha1:00000000
redis_git_dirty:0
arch_bits:64
multiplexing_api:kqueue
gcc_version:4.2.1
process_id:1881
uptime_in_seconds:116
uptime_in_days:0
lru_clock:1222663
used_cpu_sys:0.04
used_cpu_user:0.04
used_cpu_sys_children:0.00
used_cpu_user_children:0.00
connected_clients:1
connected_slaves:0
client_longest_output_list:0
client_biggest_input_buf:0
blocked_clients:0
used_memory:930912
used_memory_human:909.09K
used_memory_rss:1269760
used_memory_peak:931408
used_memory_peak_human:909.58K
mem_fragmentation_ratio:1.36
mem_allocator:libc
loading:0
aof_enabled:0
changes_since_last_save:4
bgsave_in_progress:0
last_save_time:1333432389
bgrewriteaof_in_progress:0
total_connections_received:1
total_commands_processed:2
expired_keys:0
evicted_keys:0
keyspace_hits:0
keyspace_misses:0
pubsub_channels:0
pubsub_patterns:0
latest_fork_usec:0
vm_enabled:0
role:master

Is the used_memory high? I'm a complete Redis noob. If so, how does this problem occur, and how should I proceed from here? The same error is also occurring in production (Heroku), so any help is greatly appreciated. Thank you.
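
For reference, the two numbers that matter can be read straight from a client (a sketch; the connection details are placeholders, and on Heroku you would build the client from the Redis To Go URL instead):

require 'redis'

redis = Redis.new(:host => '127.0.0.1', :port => 6379)
info  = redis.info
puts info['used_memory_human']          # e.g. "909.09K" in the output above
puts redis.config(:get, 'maxmemory')    # the limit that triggers the error once exceeded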


Source: (StackOverflow)