How to Fix Tricky Sidekiq Memory Issues


Jeff Morhous

@JeffMorhous

If you use Sidekiq to run background jobs for a Ruby app long enough, you’ll eventually encounter memory problems. There’s plenty of great content focused on helping you find and fix memory issues, but sometimes a background job just needs a lot of memory!

Whether you’re doing an expensive data import or iterating through many records, some background jobs need way more memory than others. If you’ve already tried the common advice on making a background job more memory-efficient and it’s not working for you, there are a few options for making them less problematic.

In this article, we’ll touch briefly on finding and trying to fix background jobs that consume too much memory, but we’ll focus mostly on mitigating the damage a memory-hungry background job can do. From separating background jobs into their own queues to autoscaling your Sidekiq processes, you’ll learn some uncommon ways to fix your Sidekiq memory problems.

Finding Sidekiq Memory Problems with APM Tools

It’s hard to fix memory issues in your app if you don’t even know they’re happening. Monitoring your application and your Sidekiq processes with a good APM tool allows you to observe memory issues, including patterns behind any spikes in memory you may have.

Memory spikes associated with a particular application workflow or even time of day could indicate a Sidekiq job that consumes more memory than others, and information from an APM tool can help you narrow it down to a particular job. Scout, New Relic, and AppSignal all offer insights into memory usage, so start by integrating one of these if you haven’t already.
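As one example, getting Scout reporting is usually just a gem plus an API key. Here’s a minimal sketch, assuming the scout_apm gem and Scout’s SCOUT_* environment variables (the key and app name below are placeholders):

# Gemfile
gem 'scout_apm'

# Configure via config/scout_apm.yml or environment variables, for example:
#   SCOUT_KEY=your-key SCOUT_NAME=my-app SCOUT_MONITOR=true

Once the agent is reporting, you can compare memory usage across web and worker processes and correlate spikes with specific jobs.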

A Common Fix for Sidekiq Memory Issues

One of the most common suggestions for getting a background job (Sidekiq or otherwise) to consume less memory is to “fan out” big jobs: break them into many smaller jobs that can run in parallel.

Let’s say you have an application that sends reminders to users for events they’ve saved. One way to send those messages on the day of the event is to run a scheduled Sidekiq job that checks every saved event and sends a reminder if the event falls on the current day. This is a trivial example of a background job that might be both long-running and memory-expensive.
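For contrast, a single monolithic job might look something like the sketch below. The class name AllRemindersJob, the Event model with its reminder_date column, and ReminderMailer are all hypothetical names used only for illustration:

class AllRemindersJob
  include Sidekiq::Job

  def perform
    # One job loads every event due for a reminder and notifies each user
    # inline, so all of the work (and memory) piles up in a single job run.
    Event.where('reminder_date <= ?', Date.current).find_each do |event|
      ReminderMailer.event_reminder(event.user, event).deliver_now
    end
  end
end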

Fanning out Sidekiq jobs

Rather than have one background job do all of this, you might consider “fanning out” the work into smaller jobs that can be executed faster and even in parallel. One way to do this is to have one job that iterates through records (something like RemindersJob) and enqueues each needed notification as its own job, something like NotifyUserJob.

[Diagram: fanning out background jobs]

The job that iterates through records, RemindersJob, is likely to be the memory-expensive part of the work, and enqueuing each notification as its own job, NotifyUserJob, is a good way to break it up.

The RemindersJob could iterate through all the Event records like this:

class RemindersJob
  include Sidekiq::Job

  def perform
    # Collect the IDs of every event whose reminder is due, wrapping each ID
    # in its own array so it becomes the argument list for one NotifyUserJob.
    needed_notifications = Event.
      where('reminder_date <= ?', Date.current).
      pluck(:id).
      map { |id| [id] }

    # Enqueue one NotifyUserJob per event, pushed to Redis in batches.
    NotifyUserJob.perform_bulk(needed_notifications)
  end
end

This job pulls only the IDs of the Event records that need a notification, then wraps each ID in its own array so it becomes the argument list for one job. It leans on Sidekiq’s perform_bulk method to enqueue the notification jobs efficiently, pushing them to Redis in batches rather than one at a time.
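On the other side of the fan-out, NotifyUserJob only has to handle a single record. A minimal sketch, again assuming a hypothetical ReminderMailer and an Event with an associated user:

class NotifyUserJob
  include Sidekiq::Job

  def perform(event_id)
    # Each job loads exactly one event, so its memory footprint stays small,
    # and a failure here only retries this single notification.
    event = Event.find(event_id)
    ReminderMailer.event_reminder(event.user, event).deliver_now
  end
end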

This is also a great strategy for breaking up jobs that take a long time to run. Fanning them out into faster jobs makes handling failures a bit easier, so this strategy is often a great place to start. If one of these NotifyUserJobs fails, Sidekiq can retry it without rerunning everything else.

While fanning out jobs is often enough to bring their memory needs down, some background jobs unavoidably consume lots of memory, so let’s look at some infrastructure changes you can make to mitigate the problems they cause for your app.

Mitigating Unavoidable Sidekiq Memory Problems

Sometimes a Sidekiq job is just memory-expensive and the usual approaches to solving that problem don’t work. If you have an unavoidably expensive background job, there are some things you can do at the configuration or infrastructure level to control the damage of that job.

Sometimes fanning out a job isn’t enough to bring down its memory requirements. If you’re doing an import by parsing a massive CSV file, your memory requirements scale with the size of the CSV even if you fan out the job. So what are your options?

Let’s continue by walking through a new example, ImportJob, a background job that is unavoidably memory-expensive.

Splitting jobs into queues by their memory needs

One approach is to put memory-expensive jobs in their own Sidekiq queue and treat that queue differently than the rest of your queues. This is notably different from splitting the work into different jobs. Fanning out work is good, but with only one queue, your worker process simply pulls the next item in the queue, whatever it is:

[Diagram: a single default queue holding multiple Sidekiq jobs]

The separation of jobs might solve some problems, but the real benefit is that it makes it easier to put work into separate queues altogether.

By default, Sidekiq enqueues jobs into the default queue. If you’ve narrowed your memory problems down to a particular job, like our ImportJob, you can assign that class its own queue, as in this example:

class ImportJob
  include Sidekiq::Job
  sidekiq_options queue: 'memory_hungry'

  def perform(*important_args)
    # Some code that needs a lot of memory
  end
end

This allows your Sidekiq worker to differentiate between types of jobs.
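Enqueuing works exactly as before; the sidekiq_options line simply routes the job to the memory_hungry queue. If you’d rather not hard-code the queue in the class, Sidekiq’s set method can override it per enqueue. A quick sketch (the CSV path is a placeholder argument):

# Enqueue as usual; sidekiq_options routes the job to the memory_hungry queue.
ImportJob.perform_async('imports/customers.csv')

# Or leave the class alone and override the queue for a single enqueue:
ImportJob.set(queue: 'memory_hungry').perform_async('imports/customers.csv')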

Running multiple Sidekiq queues

If you don’t already use multiple queues, you probably start a Sidekiq process with bundle exec sidekiq, which pulls from the default queue. Now that you have more than one queue, you’ll need to assign both of them to your process:

bundle exec sidekiq -q default -q memory_hungry

At this point, we haven’t actually addressed the memory issues. Separating the jobs into different queues lays the groundwork for assigning the memory-hungry queue its own process, which is where the magic really happens.
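If you prefer not to repeat queue flags on the command line, Sidekiq can read the same configuration from config/sidekiq.yml, which it typically loads automatically when present. A minimal sketch:

# config/sidekiq.yml
:queues:
  - default
  - memory_hungry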

Assigning memory-hungry queues their own Sidekiq process

Splitting your jobs into queues based on their memory needs allows you to run them in completely separate processes. On Heroku, for example, you can run a separate dyno (or several) for each worker, and your Procfile makes the setup repeatable:

web: bundle exec puma -C config/puma.rb
worker_default: bundle exec sidekiq -q default
worker_memory_hungry: bundle exec sidekiq -q memory_hungry

This Procfile gives us two workers: one for the default queue and another for the memory_hungry queue.

[Diagram: running Sidekiq queues on separate processes]

This is important for a few reasons. First, a memory-hungry job can no longer block jobs in your other queues.

If you’re running multiple queues in the same process, you risk starving a queue that is not prioritized. Sidekiq prioritizes queues in the order that you pass them when running Sidekiq.

bundle exec sidekiq -q default -q memory_hungry -q some_other_queue

In this example, it’s possible that jobs in some_other_queue are never executed if the jobs in default and memory_hungry take too long.
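If you do need several queues in one process, Sidekiq’s queue weights are one common way to avoid starvation: instead of strict ordering, queues are checked randomly in proportion to their weights, so every queue gets some attention. For example:

bundle exec sidekiq -q default,3 -q memory_hungry,2 -q some_other_queue,1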

Second, it allows you to run your Sidekiq worker single-threaded, which keeps the process’s memory consumption in check because only one memory-hungry job runs at a time. In the Procfile, you can set the number of threads for the Sidekiq worker dedicated to the memory-hungry queue:

web: bundle exec puma -C config/puma.rb
worker_default: bundle exec sidekiq -q default
worker_memory_hungry: RAILS_MAX_THREADS=1 bundle exec sidekiq -q memory_hungry

Third, the memory-hungry queue can be processed by an entirely different machine. You can vertically scale the machine processing the dedicated queue, giving it more memory, and potentially even downsize the machines serving your other queue(s), ensuring that you’re not paying for more resources than you need.

[Diagram: a dedicated single-threaded Sidekiq process with extra memory]

Using an independent queue lets you process these memory-expensive jobs with a single thread on a machine that has more memory dedicated to it. Still, you might want to horizontally scale to get through a backlog in the queue or process jobs in parallel. Autoscaling can help you automate this, ensuring you’re only adding extra dynos when you need them.

Autoscaling a Dedicated Sidekiq Queue

Assigning memory-hungry Sidekiq queues their own process also opens the door to horizontal scaling. Adding machines that can process your memory-expensive queue in parallel is a great way to work through a queue backlog. Still, it’s unlikely you’ll need the same number of machines running at all times.

In our example of a reminder system, we’d only need multiple workers for the memory_hungry queue at night when the jobs are scheduled to run. An autoscaler like Judoscale can automatically scale your Sidekiq workers to better handle peak workload without wasting resources, all based on queue time.
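On Heroku, getting started usually means installing the Judoscale add-on and adding its adapter gems so queue time gets reported. A minimal sketch, assuming the judoscale-rails and judoscale-sidekiq gems:

# Gemfile
gem 'judoscale-rails'
gem 'judoscale-sidekiq'

With queue time being reported, Judoscale can scale the worker_memory_hungry dynos independently of your other workers.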

[Diagram: Judoscale watches queue time and autoscales]

Once you’ve identified a Sidekiq job that’s consuming too much memory, splitting it out into smaller jobs is the first step. Splitting those jobs into queues based on their memory needs allows you to process them with separate processes, which gives you a lot of flexibility to change configuration, add resources, and autoscale the machines. This gives you many more tools to combat memory problems with Sidekiq jobs, so have fun digging in!