Ruby and background jobs
An overview of Ruby background jobs solutions.
I have posted about this topic a few times, so here is a longer entry with more examples and a comparison between Redis-based solutions and ones built on messaging queues. The goal is to expose more of the landscape of event handling (rather than just job processing) and to hint at a different way to approach the architecture of a product.
Most of us started with Redis-based solutions when we built Ruby backends. There are a few big projects:
All three are open source but Sidekiq also comes as a service with additional features for paying customers.
The principles are similar in each case: you set up a Redis instance or cluster, add a gem to your project, define your jobs, add enqueuing calls and make sure at least one worker process is running.
Redis serves as the queue for all three; it is simple to set up and run, and efficient.
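As a sketch of what that looks like with Sidekiq (the class name, queue name and arguments here are illustrative, and this assumes the sidekiq gem and a reachable Redis instance):

```ruby
# Illustrative Sidekiq job; requires the sidekiq gem and a running Redis.
require "sidekiq"

class ThumbnailJob
  include Sidekiq::Worker
  sidekiq_options queue: "images", retry: 3

  def perform(photo_id)
    # Fetch the photo and generate a thumbnail here.
  end
end

# Enqueue from anywhere in the app; arguments are serialized
# to JSON and pushed onto the Redis-backed queue.
ThumbnailJob.perform_async(42)
```

Resque follows the same shape, with a class-level `perform` and `Resque.enqueue` instead.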
Usually the idea is to keep your workers' code inside the main monolith, but you can also stretch the concept a bit and run separate worker code bases as their own instances.
All three projects come with a web UI to monitor your queues, requeue jobs and so on.
For those of us with a bit of a Java background, RabbitMQ was already popular before the rise of Ruby on Rails. It was heavily relied upon as a message broker with a resilient architecture.
There was not much interest among rubyists for RabbitMQ and I only saw it used in one of the companies I worked for. It works well but needs attention if you have a lot of messages coming through.
Ruby has at least one RabbitMQ client: Bunny.
In this case, you will need to set up a cluster of RabbitMQ instances to ensure resiliency. After that your backend can start publishing messages onto queues and workers can process them.
Bunny is a fairly low-level RabbitMQ client, so you will have to do a bit of work to write your workers, possibly with some minimal abstractions. But the documentation of both RabbitMQ and Bunny covers integration in detail, so it should not be much trouble.
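A minimal Bunny consumer might look like this (the connection URL and queue name are assumptions; this needs the bunny gem and a reachable RabbitMQ node):

```ruby
require "bunny"

# Connect to a local RabbitMQ node (URL is an assumption).
conn = Bunny.new("amqp://guest:guest@localhost:5672")
conn.start

channel = conn.create_channel
queue   = channel.queue("images.thumbnails", durable: true)

# Block the process and acknowledge each message only after it has
# been processed, so that a crash causes the message to be redelivered.
queue.subscribe(manual_ack: true, block: true) do |delivery_info, _properties, body|
  # Process body (e.g. parse JSON and do the work) ...
  channel.ack(delivery_info.delivery_tag)
end
```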
Update: there is also Sneakers, which is much closer to what we know from Resque or Sidekiq.
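A Sneakers worker reads much like a Sidekiq one (the class and queue names are illustrative; this requires the sneakers gem and a RabbitMQ broker):

```ruby
require "sneakers"

class ThumbnailWorker
  include Sneakers::Worker
  # Bind this worker to a RabbitMQ queue (name is an assumption).
  from_queue "images.thumbnails"

  def work(msg)
    # Process msg, then acknowledge it; reject! and requeue!
    # are also available for failure handling.
    ack!
  end
end
```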
RabbitMQ is open source and has a large community, it’s well maintained and offers a lot of features including monitoring the health of the cluster and the queues.
AWS SNS, SQS, Kinesis, EC2, Fargate and Lambda
Readers of this blog may have noticed that Imfiny relies heavily on AWS. It is a provider that often suits our needs and those of our clients. One of the reasons is its set of managed services, which help us avoid setting up monitoring-heavy infrastructure cornerstones such as message brokers.
Any mix of SNS, SQS and Kinesis with some Lambda can do your team a great service. They are easy to set up through Terraform, and the code bases can be spread across teams and languages without trouble.
Yet, if you do want to rely on a fleet of containers processing messages from SNS, SQS or Kinesis, you can still do that too.
There are good reasons to rely on SNS, SQS and Kinesis: they make it easy to spread events across multiple services or Lambdas.
There is a Sidekiq-inspired project meant to be a drop-in replacement for it: Shoryuken.
So once we set up an SNS topic and an SQS queue subscribed to it, our workers can process jobs in a way very similar to Sidekiq.
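A sketch of such a worker with Shoryuken (the class and queue names are assumptions; this requires the shoryuken gem, AWS credentials and an existing SQS queue). Note that unless the SNS subscription uses raw message delivery, the SQS body carries the SNS envelope around your payload:

```ruby
require "shoryuken"

class PhotoEventWorker
  include Shoryuken::Worker
  # Queue name is illustrative; auto_delete removes the message
  # from SQS once perform returns without raising.
  shoryuken_options queue: "photo-events", auto_delete: true, body_parser: :json

  def perform(sqs_msg, body)
    # With a plain SNS -> SQS subscription the payload sits in
    # body["Message"]; with raw message delivery it is body itself.
  end
end
```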
If we want to use SNS with Lambda subscribers instead, the job is even easier. You set up your Lambda functions and subscribe them to the SNS topic (through Terraform, of course), and they will automatically receive the events and process them.
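On the Lambda side, a Ruby handler for SNS events is a small function; the event shape below follows the SNS notification format, while the payload fields are illustrative:

```ruby
require "json"

# Entry point configured on the Lambda function. SNS invocations
# deliver a "Records" array, each record carrying the message as a
# JSON string under record["Sns"]["Message"].
def handler(event:, context:)
  event["Records"].map do |record|
    payload = JSON.parse(record["Sns"]["Message"])
    # Do the real work with payload here; we return it for clarity.
    payload
  end
end
```

Invoked with a typical SNS event, the handler simply parses each record's message and processes it.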
Again, if you need more compute power and time to process data, pools of EC2 instances (with or without Fargate) running containers can consume events from an SQS queue on their own.
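Polling SQS directly from such a containerized process is a short loop with the aws-sdk-sqs gem (the region and queue URL are assumptions; this requires AWS credentials and an existing queue):

```ruby
require "aws-sdk-sqs"

sqs = Aws::SQS::Client.new(region: "eu-west-1")
# Illustrative queue URL.
queue_url = "https://sqs.eu-west-1.amazonaws.com/123456789012/photo-events"

loop do
  # Long polling: wait up to 20 seconds for messages to arrive.
  response = sqs.receive_message(
    queue_url: queue_url,
    max_number_of_messages: 10,
    wait_time_seconds: 20
  )

  response.messages.each do |message|
    # Process message.body, then delete it so it is not redelivered.
    sqs.delete_message(queue_url: queue_url, receipt_handle: message.receipt_handle)
  end
end
```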
Relying on SNS, SQS or Kinesis requires a bit more work in terms of monitoring the events and their handling (exception handling, volume, …), but you do not have to worry much about maintaining the infrastructure that receives and dispatches the events. Since that infrastructure is critical to the health of your application as a whole, it is one of those cornerstones you do not want to see crash and burn.
Compared to Redis- or RabbitMQ-based setups I would be more at ease with SNS or Kinesis, but they do come with some caveats too (mostly pricing).
Kafka and Kafka Streams
Several companies I know use Apache Kafka to replace RabbitMQ or Redis, doing a job similar to Amazon Kinesis or SNS.
There are at least two libraries out there to use Kafka with Ruby:
The principles are similar to RabbitMQ or Kinesis: events are written as streams of data by producers, then read and handled by consumers. Multiple consumers can receive the same stream and handle it in different ways at the same time, just as we would with SNS or Kinesis.
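With the ruby-kafka gem, for instance, the producer/consumer split looks roughly like this (the broker address, topic and group names are assumptions; this requires a running Kafka cluster):

```ruby
require "json"
require "kafka"

kafka = Kafka.new(["localhost:9092"], client_id: "my-app")

# Producer side: append an event to a topic.
kafka.deliver_message({ photo_id: 42 }.to_json, topic: "photo-events")

# Consumer side: each consumer group keeps its own cursor into the
# stream, so several groups can process the same events independently.
consumer = kafka.consumer(group_id: "thumbnailer")
consumer.subscribe("photo-events")
consumer.each_message do |message|
  payload = JSON.parse(message.value)
  # Process payload ...
end
```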
You can find some blog posts on using Kafka with a Ruby code base, such as Developing with Kafka and Rails Applications – Blue Apron Engineering, or, more generally, Communication between microservices with Apache Kafka.
I have not personally used this setup, but judging by the documentation and posts out there, it is not much different from what one would do with SNS, SQS, Kinesis or other pub/sub systems.
Google Cloud
I am far less familiar with the services offered by Google Cloud at the moment, but a look at the documentation and services tells me we can handle background jobs through Cloud Tasks or Cloud Functions the way we do with SNS and Lambda.
As for the heavier compute part, the App Engine or Compute Engine services will do the job EC2 would.
This would, however, require some more digging to cover properly, but if you are going that way there are definitely potential solutions available.
Those are not the only options out there, but they are the ones I have encountered, heard about or discussed with people in the industry over the last 10 years.
The goal of this post was mainly to expose the different avenues a team using Ruby as its main language can go down when deciding how to process data in background jobs.
I have often seen junior and mid-level Ruby software engineers stay focused on Resque or Sidekiq without considering technologies that not only make their application more reliable, but also open up a whole different way to think about their data and its handling.
I would be happy to hear from you if you have examples of how you use Kafka Streams, Google Compute services or Amazon Kinesis in your Ruby applications, so hit our Twitter or email (see below).
Interested in this architecture or something similar? I am happy to tell you more about it and help you. I am based in France, and can consult remotely and on site. Contact me: firstname.lastname@example.org.