An Opinionated Guide to Configuring Rails on Heroku

Jon Sully
@jon-sully

So you’re firing up a new Rails application and you’ve chosen Heroku as your preferred hosting solution. First, congrats! Taking a new app to production is exciting and one of the best moments in an application’s life cycle. Our little team of two-and-a-half senior Rails devs here at Judoscale (including a Rails core team member!) still loves that “hello, world!” moment ☺️. And we still totally endorse Heroku as the simplest, easiest, and most straightforward way to bring a Rails application to production. This is the way!
We’ve kicked around the idea of putting together a wide list of recommendations for various architectures, configurations, and setups for a Rails app on Heroku for a couple of years. It’s a big undertaking and there are so many layers to cover, but it’s finally time we put ourselves out there! So, without further ado, here’s a bunch of opinions on how you should set up a fresh Rails app on Heroku!
The ENV Vars
JEMALLOC_ENABLED = true
Why: okay, technically you need to make sure you’re running the buildpack before you enable the ENV var, but Jemalloc is free, just that easy to activate, and will make your app significantly more memory-stable while using less memory to begin with. We’ve never heard a reason not to use Jemalloc. You should enable it and use it.
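If you’re working from the Heroku CLI, the setup is roughly the two commands below. (A sketch: we’re assuming the commonly used community jemalloc buildpack at gaffneyc/heroku-buildpack-jemalloc — double-check which buildpack fits your stack.)

# Add the jemalloc buildpack ahead of the Ruby buildpack, then flip the switch
heroku buildpacks:add --index 1 https://github.com/gaffneyc/heroku-buildpack-jemalloc.git
heroku config:set JEMALLOC_ENABLED=true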
👀 Note
PS: jemalloc is a stand-alone, open-source memory allocator that improves Ruby’s underlying memory behavior. The buildpack approach applies to the current Heroku platform (“Cedar”); the soon-to-be-available platform, “Fir”, will likely have a different setup and system for buildpacks and possibly for system tools like jemalloc.
RAILS_MAX_THREADS = 3
This is now the default for Rails applications as of Rails 7.2, but you should declare the ENV var in your Heroku app and set it to 3 anyway. It’ll be much easier to find and adjust in the future if you ever need to change it.
If you want to read up on what happened here, what this controls, and why 3 is a good number, check out this post: “Why Did Rails’ Puma Config Change?!”
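Declaring it is a one-liner with the Heroku CLI (a sketch):

# Pin the Puma thread count explicitly so it's easy to find and tweak later
heroku config:set RAILS_MAX_THREADS=3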
RUBY_YJIT_ENABLE = 1
Hopefully you’re running your fresh new application on a recent Ruby version (3.3 or 3.4). If so, the YJIT system your Ruby has ready to go is mature, battle-tested, and brings some free (free!!) performance gains for your Rails app just by flipping a switch. Frankly, we expect YJIT to be enabled by default for production environments soon. It’s very production-ready!
👀 Keep in mind that running YJIT will increase your application’s memory footprint a little bit. If that proves to be an issue for you, we have a two-part series on “Rails on Heroku: How to Use Less Memory” that’s worth a good read.
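Flipping that switch and double-checking it took effect looks roughly like this (a sketch using the Heroku CLI and a one-off dyno):

heroku config:set RUBY_YJIT_ENABLE=1

# Sanity check from a one-off dyno: should print "true"
heroku run rails runner 'puts RubyVM::YJIT.enabled?'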
WEB_CONCURRENCY = 0, 2, or 8 … NOT auto
A couple of things here. First, make sure that you add this value. It’s not a default environment variable, and leaving it out, as of Rails 8, means that Puma will run in auto mode. That’s a bad thing on Heroku: Puma will fork more processes than it should on just about every Heroku dyno!
The right answer here is to deliberately set a value of 0, 2, or 8, depending on which type of dyno you’re running for your web process. We discuss this at length in “Heroku Dynos: Sizes, Types, and How Many You Need”, but the tl;dr is that a Standard-1X dyno should use WEB_CONCURRENCY=0, a Standard-2X dyno should run WEB_CONCURRENCY=2, and a Perf-L dyno should run WEB_CONCURRENCY=8.
And, we’ll get to this here in a bit, but you should really only be running 1Xs, 2Xs, or Perf-Ls (ignoring Private/Shield dynos)!
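In Heroku CLI terms, that’s one of the following (pick the line matching your dyno size; the values mirror the recommendations above):

heroku config:set WEB_CONCURRENCY=0   # Standard-1X: single Puma process, no forking
heroku config:set WEB_CONCURRENCY=2   # Standard-2X: two Puma processes
heroku config:set WEB_CONCURRENCY=8   # Performance-L: eight Puma processes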
WORKER_THREADS = 5
This is another case where we want to deliberately add an environment variable to override a default with the same value as the default, so that we can quickly find and adjust it later. Mostly, anyway. Technically, Sidekiq’s default is going to read RAILS_MAX_THREADS, which is 3 as of Rails 7.2, not 5 as before.
Nonetheless, the WORKER_THREADS environment variable is designed to control the number of threads our Sidekiq workers spin up separately from the number of threads our Puma processes spin up. We designed this strategy in “Quick Tip: Fix ActiveRecord Connection Pool Errors For Good” and it involves setting this variable and adjusting the Sidekiq process(es) in our Procfile to:

worker: RAILS_MAX_THREADS=${WORKER_THREADS:-${RAILS_MAX_THREADS}} bundle exec sidekiq

Which is essentially a single-process override of RAILS_MAX_THREADS using WORKER_THREADS’ value, safely falling back to RAILS_MAX_THREADS itself.
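Put together, a minimal Procfile for this setup might look roughly like the sketch below (your exact web command and any Sidekiq flags may differ), alongside heroku config:set WORKER_THREADS=5 so the override has a value to read:

web: bundle exec puma -C config/puma.rb
worker: RAILS_MAX_THREADS=${WORKER_THREADS:-${RAILS_MAX_THREADS}} bundle exec sidekiq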
Puma Configuration
Puma has grown simpler and simpler for the typical Rails app over the last several years. It’s battle-hardened, easy to use, and entirely set-and-forget once you have it set correctly. In fact, so long as you have the environment variables above set, there’s nothing we need to do in our puma.rb config file inside our application. Puma will read our environment variables and get us on our way!
If your particular application ends up running into memory issues with the default configurations we’ve suggested, we recommend reviewing our short series, “Rails on Heroku: How to Use Less Memory”. There we dive into tweaks to process counts, thread counts, and other important strategies for bringing your app’s memory footprint down.
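That said, if you prefer spelling the behavior out rather than leaning on defaults, a minimal puma.rb along these lines captures the idea (a sketch, not the exact file Rails generates for you):

# config/puma.rb: minimal sketch wiring in the env vars above
threads_count = ENV.fetch("RAILS_MAX_THREADS", 3)
threads threads_count, threads_count

# Only fork worker processes when WEB_CONCURRENCY is set (0 keeps Puma in single mode)
workers Integer(ENV["WEB_CONCURRENCY"]) if ENV["WEB_CONCURRENCY"]

port ENV.fetch("PORT", 3000)
plugin :tmp_restart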
Sidekiq Configuration
While it’s implied above, at this point we still believe in rolling with Sidekiq for fresh Rails apps. We all believe that Solid Queue is well on its way to being the better choice for day-zero applications and low-overhead installations, but it’s hard to compete with Sidekiq’s battle-tested track record. It’s been around for a long time, and it always just works.
Since we disconnected our Puma thread count from our Sidekiq thread count above, we can control them separately. That’s great since our Sidekiq processes may run into memory issues or CPU bottlenecks totally independent of our web process! Don’t worry… we have a post on that too 😁 “How to Fix Tricky Sidekiq Memory Issues”.
Finally, keep in mind that if your application grows to very high work levels and a large scale, it may be worth looking into Sidekiq Enterprise, primarily because Enterprise grants the ability to run multi-process Sidekiq safely and natively. Meaning that, if you run on a Perf-L, you can run 8 Sidekiq processes safely all on the same dyno.
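For a rough idea of what that looks like, Sidekiq Enterprise ships a sidekiqswarm binary that forks several Sidekiq processes from one command. Here’s a sketch of a Procfile worker line, assuming you’re licensed for Enterprise (SIDEKIQ_COUNT and sidekiqswarm are Enterprise features, not part of open-source Sidekiq):

# One swarm of 8 Sidekiq processes on a single Perf-L dyno (Enterprise only)
worker: SIDEKIQ_COUNT=8 RAILS_MAX_THREADS=${WORKER_THREADS:-5} bundle exec sidekiqswarm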
Choosing the Right Dyno Size
We wrote about this topic extensively in “Heroku Dynos: Sizes, Types, and How Many You Need”, but let me go ahead and give you the opinionated starting point: start with Standard-1X dynos. And stay on them as long as you can! That means you’ll be running your web process with WEB_CONCURRENCY=0 and your background worker processes on the default (5 threads), but the cost-to-value ratio with Standard-1Xs is fantastic.
The tricky part of running a production Rails app on Standard-1Xs is memory. So do your best, read through and implement the tips in “Rails on Heroku: How to Use Less Memory”, and try to stick to 1Xs as long as possible. There’s nothing wrong with bumping up to Standard-2Xs; they just end up being larger steps when it comes to autoscaling. And, if you’ve read our guide to autoscaling, you’ll know that autoscaling can track your metrics better the smaller the autoscaling steps are.
Lastly, we recommend avoiding Eco and Basic dynos for any serious project. They’re great for proof of concepts or personal projects that you aren’t worried about, but for anything serious and/or business-related, we recommend stepping up to proper, professional dynos.
A Note on Databases
This is one place our team is actually a little split on opinions! I (Jon 👋) tend to lean on the side of “Just use Heroku’s Postgres” because it’s easy. It definitely costs more than just about every other option out there, but the installation, developer ergonomics, and general usage are dead-simple. Just click one button! Their entry-level tiers aren’t too expensive, but if your application grows you’ll likely end up moving to another provider to save costs. For me, I choose Heroku Postgres because I’d rather pay the “make it easy” tax to make my get-to-production workflow easiest on day one.
🚨 Warning
One note here: the three cheapest Heroku Postgres tiers (“Essential-*”) provide no dedicated RAM (cache) on the Postgres server. This may or may not be okay for you, your team, and your application; that all depends on your queries and their raw speeds. But do keep it in mind as you select your service tier.
Adam, on the other hand, totally disagrees. Hosting your database with just about any other provider is definitely cheaper than Heroku… but Adam has also come to prefer keeping the application database separate from the compute. That allows you to try out, or switch to, a different PaaS with relatively little fuss: you just point your new platform at the same (third-party) database.
If you’re going to go non-Heroku, we’d recommend checking out Crunchy Data for general-purpose, hosted Postgres instances. On the other hand, if you’re processing and handling lots of time-series data, we’ve really enjoyed using Timescale’s hosted database service.
About That Connection Pool…
You’ll find that your database.yml file, by default, has some logic around how the pool: value gets set. You’ll want to get rid of all that. Trust us, it’s going to be gone in the next major version of Rails anyway.
Instead, we concur with Ben Sheldon (author of GoodJob):
✨ The secret to perfectly calculate Rails database connection pool size: Don’t! Set the pool size to a very large, constant number, and never worry about it again.
And it’s worth reading Ben’s post in full to understand why this is the right answer — but in short, just set your pool to the same number of connections as your Heroku Postgres plan allows you. If that’s 64, set your pool to 64. If that’s 500, set it to 500. Problem forever-solved.
👀 Note
Okay, of course, in the future if you run out of connections at the database, you’ll need to upgrade to a higher DB tier that offers more connections… but the ‘right’ answer remains, even in those situations, leaving the Rails pool size as large as possible.
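In database.yml terms, that boils down to something like the sketch below; DB_POOL is just a name we’re choosing for illustration (not a Rails convention), and 64 stands in for whatever connection limit your Postgres plan actually provides:

# config/database.yml, production section (sketch)
production:
  <<: *default
  pool: <%= ENV.fetch("DB_POOL", 64) %>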
Recommended Add-Ons / Tools
Judoscale
First and foremost, we obviously have to recommend Judoscale! It’s completely free (forever!) with our White Belt plan and safeguards your app from potential scaling woes!
We support autoscaling based on queue time (the only way to really autoscale!) with custom-built plugins for several languages and frameworks. Check out our 60-second intro video here. Did I mention it’s free?
Redis Cloud
While Heroku offers their own hosted Redis option (like their Postgres), our experience with Heroku Redis has always been… bad. Redis Cloud, on the other hand, is the hosted wing of the Redis team (as in, the team that made Redis and continues to open-source Redis itself). Luckily, we’ve had excellent experiences with Redis Cloud, and their Heroku Add-on plugs right into any Heroku application.
LogTail (BetterStack)
Now getting into tertiary services a bit, we fell in love with LogTail a few years back — right before they were acquired by BetterStack. We were originally a bit nervous that the LogTail service would decline in quality… but it totally hasn’t. BetterStack has made small but important improvements to LogTail while at the same time creating a bunch of other tools under the BetterStack umbrella itself. That’s more than we need to get into here, but they still offer LogTail as a Heroku Add-on in the Heroku Marketplace and we still think it’s incredibly feature-packed.
The reason we’re willing to recommend them here in our opinionated list, outside of actually enjoying their service, is that they have a generous free tier. Unlimited team members, 8 days of retention for search, and 5GB of log volume per month? That’s more than enough for any just-getting-started application! There are other great log providers out there, but we love this one.
Scout APM
We’ve mentioned before how much we love Scout’s APM product. It has powerful insights, an extremely easy-to-use interface, and a general polish about it that just feels at home for Rails developers (we think, anyway!). We use Scout all the time and it’s often our first go-to for debugging performance issues or determining slow-downs across our applications.
Scout too has a generous free tier via their Heroku Marketplace add-on. We recommend running it on any startup app!
Sentry
Rounding out our add-on tooling recommendations, we prefer to use Sentry for error tracking and management. Sentry also has a Heroku Add-on with a generous free tier.
Add-ons vs. Direct Accounts
We do want to point out that all of the services we’ve mentioned above offer Heroku Marketplace add-ons, but they also offer their services as ‘direct’ accounts, meaning you sign up directly on their website and configure / integrate the tooling into your app on your own. A direct account is almost always going to be the cheaper approach: you pay a premium for the convenience of using an add-on and automatic configuration.
Still, we recommend using the add-ons when starting out both because of the convenience and because the free tiers offered through Heroku are usually more generous than you’d get signing up directly. You can start with the free add-on, then migrate to a direct plan when your app grows. The only service where the migration might be a bit painful is Redis, so maybe consider signing up for Redis Cloud directly instead of using the add-on.
Avoiding Common Heroku Pitfalls
Still in the realm of configuration and initial setup here, we just want to give you some experience-based advice around what to expect with Heroku.
First, you should learn and understand that Heroku uses random routing, not a smart load balancer, for web requests. When running multiple web dynos / instances, Heroku does not attempt to determine which instance has the lowest load and send the next request there; it sends requests to the various instances randomly. That’s an important difference, and it requires some familiarity with “in-dyno concurrency”, since a single Ruby process can only handle so many requests at once. We strongly recommend reading “Heroku Dynos: Sizes, Types, and How Many You Need” where we break down exactly how random routing and in-dyno concurrency work together.
Second, avoid the pitfall of not setting up autoscaling. As an autoscaling company, we’ve heard lots of stories. Most start with something like:
We should’ve set this up sooner. We had so many alerts. Everything was on fire. We just couldn’t take it anymore. Judoscale fixed all of that for us.
Setting up autoscaling with Judoscale takes just a couple of minutes and can save you countless headaches (and downtime!) in the future. It’s totally free and there’s simply no reason to not run autoscaling on your Heroku application, even on day one. Enable Judoscale’s White Belt plan and you’ll be covered when your application suddenly gets popular and takes on more traffic than you expected.
Third, let us help you avoid a common surprise: for most apps, Heroku Postgres is going to be the single most expensive line item on your bill. The database will almost always be your most expensive component. If you learn to accept that now, you’ll feel much better in the future when you need to upgrade to a larger one 😅.
Run Cloudflare
Our last chunk of opinionated advice for Rails on Heroku is actually to run Cloudflare in front of Heroku. We wrote a whole series on how Propshaft and a CDN work together, including a piece on how CDNs work in the first place, to streamline Rails’ asset delivery. It’s a beautiful system when it’s all working correctly! And, to that end, make sure you get the SSL certs configured correctly… but once you do, Cloudflare will act as a giant shield against nefarious requests and bots, and a cache for asset requests! That alone can save you plenty of money with Heroku since your dynos won’t need to serve all those extra (and bogus) requests!
And, as with the other tools we’ve recommended here, Cloudflare has an extremely generous free tier that makes adding the system a complete no-brainer. We highly recommend this path.
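On the Rails side, the SSL piece mostly comes down to two production settings (a sketch; pair them with Cloudflare’s “Full (strict)” SSL mode so traffic stays encrypted between Cloudflare and Heroku):

# config/environments/production.rb (sketch)
# Redirect HTTP to HTTPS and mark cookies as secure
config.force_ssl = true

# Rails 7.1+: trust that the proxy in front of us (Cloudflare) already terminated SSL
config.assume_ssl = true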
Your App Is Ready for Heroku — Now Keep It Running Smoothly
If there’s one overarching theme to all of this, it’s that Heroku makes deployment easy (🎉) but great configuration makes your app thrive. We believe that this list of opinionated recommendations will set both you and your Rails app up for success: fewer scaling headaches, smoother performance, and fewer surprises down the road.
But don’t stop here. Configuration isn’t a one-and-done task; it’s a continuous process of fine-tuning as your app grows and your needs change. Maybe today you’re happy on Standard-1X dynos with a default Postgres plan. A few months from now you might be dealing with high traffic spikes, gnarly Sidekiq queues, or the need to squeeze more out of your infrastructure. Those are signs of growth, so that’s a good thing! But you can rely on the foundations you’ve set with this guide to carry you safely into the next, larger phase of your application.
As your app grows, monitor your performance, track your costs, and adjust your configurations accordingly. And most importantly, automate what you can. Tools like Judoscale, Cloudflare, and Sentry help your app run smoothly so you can focus on building features, not babysitting infrastructure.
At the end of the day, Rails and Heroku are both about developer happiness. This guide is here to make sure your happiness doesn’t stop after that first successful deploy. So go forth, tweak those ENV vars, and let your app shine on Heroku. We’ll be here to help if things get bumpy! 🚀