Judoscale on Tour: An Ode To Heroku

Jon Sully
@jon-sully

Note on AI use
Hey there 👋 Jon here. Just want you to know that I wrote every word of this article by hand, myself, pre-LLM-style.
I wouldn't ask someone to read something that I'm not willing to write.
That said, while I have creative illustration ideas, I'm not a designer! I do use AI-powered image generation, in a sketch style, to implement my ideas.
This all applies to my past articles as well; we only started adding this note from 2026 onward, given how pervasive AI-slop articles have become.
Judoscale ‘On Tour’ Series
- “The Friction Model” & Heroku (This page!)
- Render (Coming soon…)
- Railway (Coming soon…)
- Fly (Coming soon…)
- Northflank (Coming soon…)
- Digital Ocean (Coming soon…)
- Amazon ECS Fargate (Coming soon…)
Thus we begin our tour! As we mentioned in our last post, we’re going to take our production app on a hosting tour to fully experience what each option in today’s hosting marketplace looks like, feels like, and runs like. But before we do any of that, we need a baseline and a strategy.
Judoscale has been on Heroku since its origins ten years ago. Adam, Judoscale’s founder, has been using Heroku since it was a first-days startup! All that to say, we’ve been around the Heroku block many, many times. We’re too close. We need to intentionally zoom out and take a look at Heroku like a brand new user would. We need to put words to the things we take for granted before we jump ship so we can know what to look for somewhere else.
And we’re not alone! Many folks are starting to put out feelers for alternative hosting platforms as Heroku’s moved into maintenance mode (a “Sustaining Engineering model”), and they too need a pragmatic view of the features and toolkits that make a hosting platform fantastic. So we asked for their input as well:
Our goal here is to build a community rubric of sorts. A set of baseline standards for performance, developer experience, and complexities-vs-niceties on hosting platforms.
With all of that said, this article is our attempt at outlining many of the features that have made Heroku so great (and occasionally difficult!) to build on over the last fifteen years. We’ve come up with an assessment strategy that we think will work for all platforms and we’ll apply it here to Heroku first.
👀 Note
It’s worth calling out: not everyone uses, sets up, or runs their Heroku-based applications in the same ways. Even just asking if folks use a staging-server setup brought many different opinions into the mix. That’s okay! There may be things noted here in this write-up that simply don’t apply to you or you don’t care about. Just keep an open mind: there’s no single way to do simple app hosting, and you might even find new ideas here!
The Friction Model
After spending quite a bit of time brainstorming, then asking other experienced devs for input, we’ve got a long list of features that we know make Heroku great. From this simple stuff (git push heroku) to the more complicated (“What are all the buildpacks I need for vips, again?”), there are plenty of things. But a plain list of things is only helpful for making our eyes glaze over. We need to see these features in some kind of logical groupings that help us understand intent and perspective.
Given Heroku’s history and philosophy over the years, we believe this is best captured as a friction model. From the beginning, Heroku’s value proposition was always about friction: removing it. Anti-friction = anti-pain. Anti-pain = peaceful shipping. And, of course, peaceful shipping nets developers that are excited to build (“productivity”). It’s always been about friction.
As it’s said: you don’t know what you’ve got until it’s gone… (which it’s not, but we can try to simulate the feeling) so let’s try to peel back the anti-friction layers and discover what Heroku’s been silently handling for so many years.
We’re considering four feature groups here:
Shipping friction. As in, “how many steps are there between my local code and production?” This vector covers things like deploys and releases, migrations, pipelines, review apps, setup, CI/CD (is that phrase still popular?), how long it takes to deploy, and zero-downtime deploys ✨.
Debugging friction. As in, “WTF IS GOING ON WITH PRODUCTION RIGHT NOW?!” This vector covers a lot of visibility and speed-of-access: logs, metrics, dashboards, production consoles/terminals, scaling, and some cron/scheduled jobs concepts. Also, reaching actual customer-service help when necessary!
Infrastructure friction. As in, “how much platform’y stuff do we have to own and maintain? How often do I have to (re-learn how to) fix this stuff?” Things like environment variables and secrets, SSL configurations, domains and DNS, multi-region / replication, compliance (scary, I know), and networking/routing. Oh, also, are the servers actually fast / performant?
And finally, Organizational friction. As in, “the stuff my manager probably cares more about than me, but therefore still impacts me indirectly”. How much does the platform actually cost? How many nines? Do I need to hire an Ops team? How much is my CTO going to hear the word “Heroku”?
We believe the friction model helps to paint a clear and personal picture of what hosting on any platform will feel like. It’s not just a feature-list table with checks and x’s; our goal is to capture the subjective experience of using platforms at various moments in a commercial developer’s workflow.
Okay, enough setup! Let’s dive in and see how Heroku fits into this model.
Heroku: Shipping Friction
Heroku essentially taught an entire generation of developers that shipping could be as simple as git push heroku, so it’s fair to say that they’ve optimized for low shipping friction from the beginning. In fact, Heroku’s pioneering of the PaaS concept was rooted in low shipping friction. While most applications these days probably use automatic deployments off main via GitHub connection rather than pushing from local, Heroku’s done a great job of keeping that setup just about as simple to configure as good old git push heroku. A few clicks and you’re off to the races.
Heroku was also built with “release” processes in mind. We might take them for granted now, but a dedicated short-lived process that runs a command once only when deploying a new commit is both very helpful and somewhat complicated! Unless your host has this specific workflow supported and pre-setup in their platform, trying to do it yourself can be a real pain in the rear. Heroku simply built and gave us a perfect home for db:migrate.
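For a Rails app (where db:migrate is the classic case), the release process is wired up with a single line in the app’s Procfile. A minimal sketch:

```
# Procfile — the release process runs once per deploy, before new dynos boot
release: bundle exec rails db:migrate
web: bundle exec puma -C config/puma.rb
```

If the release command fails, the deploy is aborted and the previous version keeps running, which is exactly the behavior you want from a migration step.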
Skimming through some of the other features here, the story is broadly the same: we only think about these features and/or know about them because Heroku brought them to the masses. Automatic PR-review apps, pipeline setups to go from review app to staging to production, direct-deploy main to production (after CI passes!) — I’d wager these concepts are familiar to most developers because Heroku pioneered them.
If we boil this category down to a single question, “How hard is it to take an app I have running locally and get it running in the cloud?”, then we’d give Heroku an A. To this day, Heroku sets the bar for low-friction shipping.
Heroku: Debugging Friction
Figuring out why your prod app is on fire isn’t ever easy per se, but there are things a platform can do to (hopefully) make it easier. We’d normally split this concept into two camps:
- The platform’s native tooling for viewing and searching logs, seeing metrics, and assessing what’s failing
- How easy the platform makes it to add third-party software (APMs, scalers, etc.) which can provide even more visibility
But Heroku sort of has a third — or maybe a 2a. Add-ons. Entire third-party software suites that can bolt onto your application with (typically) zero extra configuration required, with a single click. That’s neat!
But let’s start with #1. How good is Heroku’s native tooling for figuring out why production is on fire? Eh 🤷♂️. There’s good and bad.
Being able to fire up an ad-hoc one-off dyno via heroku run at any point is handy, but it can take a little while to spin up and runs on its own separate VM. Heroku does allow SSH’ing into running dynos, which is handy, but there’s an ephemerality that you need to keep in mind with dynos. A dyno that’s currently ‘on fire’ may well restart and shut you out at any time if the platform control plane decides it’s on fire enough. Essentially, there are times when the control plane feels more authoritative than the actual resources! That’s helpful sometimes, but harmful others.
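For reference, the two access paths look like this in CLI terms (with `my-app` as a stand-in app name):

```
# One-off dyno: boots a fresh, separate VM with your app's code and config
heroku run bash -a my-app

# SSH into an already-running dyno (ephemeral — it may restart under you!)
heroku ps:exec --dyno web.1 -a my-app
```

The first gives you an isolated sandbox; the second puts you on the actual machine serving traffic, which is the one the control plane may yank away mid-session.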
Heroku’s approach for giving you helpful log parsing and tooling is essentially just to not do that. The CLI allows you to tail your real-time logs (as does the web UI) but you’ll have to pipe that into other tools if you want to do anything more than just read logs whizzing by. It’s accessible quickly enough that it can be useful, just hit heroku logs -t, but depending on your app’s RPS it may be way too much info to be useful to human eyes on a terminal.
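To make “pipe that into other tools” concrete, here’s a tiny sketch of the kind of ad-hoc filtering you might do. The log lines below are hypothetical, modeled on Heroku’s router log format, so the whole thing runs standalone:

```shell
# Hypothetical lines mimicking Heroku's router log format (made-up paths/timings)
sample='at=info method=GET path="/dashboard" status=200 service=45ms
at=info method=GET path="/reports" status=200 service=2310ms'

# Keep only requests whose service time exceeded one second
slow=$(printf '%s\n' "$sample" | awk '{
  for (i = 1; i <= NF; i++)
    if ($i ~ /^service=/) {
      ms = $i
      sub(/^service=/, "", ms); sub(/ms$/, "", ms)
      if (ms + 0 > 1000) print
    }
}')
printf '%s\n' "$slow"
```

In real usage you’d swap the sample for a live pipe, something like `heroku logs -t -a my-app | awk '…'` — that’s the “other tools” part Heroku leaves to you.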
On the metrics front, Heroku’s dashboard metrics display is… fine. Heroku isn’t an APM and doesn’t install a package into your code, so it really doesn’t have access to the sort of application-level stats we might be interested in these days. But it is a reasonable readout of throughput, memory, errors, and dyno load, though they lose points for the last one. “Dyno load” is an opaque and unhelpful metric derived from opaque resource-sharing algorithms for their Standard dynos. How much dyno load should you use? 🤷‍♂️
Lastly in the native-tooling group, Heroku does provide a cron-ish scheduling system that’s first-party (even though it’s installed as an add-on) but it’s just not great. We actually used it for years before deciding to move away. It’s fine for very small apps and/or non-critical jobs, but it’s not something that should scale with any application. Though, to be fair, we wouldn’t consider having a “heavy duty scheduler” a responsibility of a hosting platform. That’s something you should implement inside your application layer one way or another.
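To make “implement it inside your application layer” concrete, here’s a deliberately tiny, hypothetical in-process scheduler sketch. A real app would more likely lean on its job framework’s cron support (Sidekiq’s, Solid Queue’s, etc.) rather than hand-roll this, but the shape is the same:

```ruby
# A toy interval scheduler: register jobs, then call #tick on a cadence
# (e.g. from a clock process). Purely illustrative, not production code.
class TinyScheduler
  def initialize
    @jobs = []
  end

  # Run the block roughly every `seconds` seconds
  def every(seconds, &block)
    @jobs << { interval: seconds, last_run: Time.at(0), block: block }
  end

  # One "tick": fire any job whose interval has elapsed since its last run
  def tick(now = Time.now)
    @jobs.each do |job|
      if now - job[:last_run] >= job[:interval]
        job[:block].call
        job[:last_run] = now
      end
    end
  end
end
```

The payoff of owning this layer yourself: your schedule lives in version control, deploys with the app, and isn’t coupled to any one host’s add-on.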
On to #2: how easy Heroku makes it to add third-party software. The short answer is that Heroku makes it very easy. Heroku decided early on to invest in infrastructure for an “add-on” system that made installing third-party services as easy as installing a new app on your phone. In fact, Judoscale was born on the Heroku Marketplace before we branched out to supporting many platforms! Maybe you want Scout for your APM, a MySQL DB for your data, and ElasticSearch for a search index across that data. All of those can be set up with just a click or two from the Heroku Marketplace. Handy!
Aside from the Marketplace, it’s also easy to install third-party libraries or software the old-fashioned way: signing up directly. Heroku doesn’t restrict any dyno’s outbound internet access by default, so getting third-party software configured (anything that needs to verify license keys, send data somewhere, or otherwise talk to some server) works fine. Given that environment variable control on Heroku is quite simple too, the “DIY” third-party software path is nearly as simple as the Marketplace path. We’ve only ever experienced the occasional friction of dynos not having static IPs… but there are add-ons for just that!
Beyond #1 and #2, we also need to consider Heroku’s actual customer support system for when we experience real platform issues. How long does it take to get helpful, human support when something happens? Well, it looks and feels just about like this:
Unless you pay hefty fees for an elevated support tier on an enterprise contract, Heroku’s support response times can be rough. You can be paying thousands per month for all the dynos you’d like but still have to wait actual days before you get a response to a critical issue you submitted a ticket for. If you’ve ever tried, you know. That stinks.
If we boil ‘debugging friction’ down to a single question, “How hard is it to figure out what’s on fire?” (which, to be fair, is a crazy-large question that your hosting provider holds just a slice of responsibility for), we’d give Heroku a B. The tooling is mature and reliable, as both points #1 and #2 above cover, but we can’t deny the awful experience of their customer-service ticketing. Then again, the likelihood of needing to open a ticket remains low, so we have to balance the weight there.
Heroku: Infrastructure Friction
Our application is shipped, prod is running smoothly, and we’re starting to think about week-over-week maintenance and upkeep. At this point we need to consider how much of our platform footprint requires our own active involvement across months and seasons — how much friction our infrastructure causes in our day-to-day for an existing app: “How often do I have to (re-learn how to) fix this stuff?” Let’s enumerate the basics.
When it comes to environment variables, Heroku’s model is almost aggressively simple. There’s no separate secret manager, no multiple layers of injection depending on build vs. runtime, and no ambiguity about where a value is coming from. It’s a single, flat interface per application; you set it once, and your app has it. Done. It’s not the most flexible system in the world, but it’s extremely predictable, and predictability reduces friction.
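That whole interface is, more or less, three commands (`my-app` and the key/value here are made-up placeholders):

```
heroku config:set STRIPE_KEY=sk_test_123 -a my-app   # set a var (restarts the app)
heroku config:get STRIPE_KEY -a my-app               # read one value back
heroku config -a my-app                              # list everything
```

Note the restart behavior on `config:set`: config changes roll out like a deploy, so running dynos never see a half-updated environment.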
The same story shows up in domains and SSL. Adding a custom domain and getting HTTPS working is deliberately designed to be a few clicks, then something you just don’t think about again. They even do some special magic we’ve written about — some neat tricks for correctly configuring SSL when you’ve opted to use Cloudflare in front of Heroku… it all just works! No provisioning your own certs, no renewal tasks every few months (or more…), just a 🔒 in your browser address bar that you will (lovingly) ignore for the rest of your app’s lifetime.
Networking and routing is a similar story but has its own tradeoffs. When you run an app on Heroku there are no ports to configure and no connections to set up; you don’t own the routing or load balancing layers at all. But, Heroku’s “load balancer” actually isn’t one. As we’ve mentioned in “Understanding Queue Time: The Metric that Matters”, Heroku’s router uses a random routing algorithm. There’s no load balancing! So, while Heroku does grant the wonderful simplicity of ‘your app listens for requests on a port, Heroku handles the rest’, the one caveat is that you should take just a few minutes and read a primer on how random routing might impact your app. The article I just linked is exactly that 😜.
There are, of course, a couple of rough edges. Buildpacks can get tricky when you need system-level dependencies. Performance characteristics of dynos are frustratingly opaque, especially when you start caring about CPU vs memory (and please don’t get me started on noisy neighbors). And while Heroku’s abstractions are usually quite helpful, they can be limited once you venture into multi-region replication, strict compliance requirements, and truly private networking. Heroku has a lot of features and capabilities in those spaces, but some of the “it just works” shine might fade.
But the point here isn’t to judge whether or not the platform can do everything — most can if you’re willing to fiddle enough. The point is about assessing how often the platform makes us think about these configurations and setups in the first place. Infrastructure friction is about the recurring cost of the platform in terms of our own time.
Heroku’s abstractions let us think about infrastructure configuration and maintenance, year over year, less than just about any other host would. For that reason, we give it an A- on infrastructure friction. Points lost for “dyno” resource opaqueness!
Heroku: Organizational Friction
Finally, let’s talk about the stuff managers and owners usually care about more than boots-on-the-ground developers. This is less about the mechanics of actually building on the platform and more about the ripple effects of that platform up the chain-of-command. Remember, someone’s got to actually pay the bill!
And we might as well start with the bill. Heroku is notoriously the “worst” deal among PaaS providers. Just about any way you slice the performance-per-dollar, Heroku is more expensive than everyone else. We built a PaaS Price Calculator that makes that much clear:
And that is perhaps the greatest knock against Heroku in this entire rundown. While maybe not the first thing that jumps to mind when thinking about a “friction model”, there is absolutely friction here: cost friction! Friction incurred when having to hand over all those dollars every month instead of, in some cases, half! We’ll save the deeper discussion for another day, but just understand that the balance here is a higher cost vs. all of the low-friction abstractions we described in all of the paragraphs above. Heroku’s schtick is paying for simplicity. It always has been, it likely always will be. You’re buying back time that you might otherwise have to spend on infrastructure and hosting tasks.
Along with that idea, your platform choice has implications on team structure. To be frank, you shouldn’t need dedicated operations engineers if you’re running on Heroku. So maybe that potential savings accounts for some of the cost, but we’ll leave that to your own discretion. The truth is that many large applications and businesses, Judoscale included, began with one developer building and deploying an app on Heroku. Heroku allows the “one dev shop” to scale enormously in ways that more complex platforms would not — Adam talked about this quite a bit in our interview — most teams can go a long time before needing more complexity and control than Heroku gives. We’ve seen companies doing billions in annual revenue humming along perfectly fine on a cluster of Perf-L’s!
When it comes to uptime, Heroku has had 2-4 major outages in the last two years (depending on how you count them and weigh magnitude), each lasting at least a few hours and impacting most of their customers. That’s still somewhere in the “three nines” region, and frankly, I don’t know of any hosting platform shooting much higher. Heroku outages just tend to be more prominent in developer news given how many applications run on Heroku. As for the earlier question, “How often is my CTO going to hear the word ‘Heroku’?” (which is mostly a function of outages), the answer is probably once or twice a year. So we’d consider their uptime to be solidly “good”.
If we boil organizational friction down to simply, “How does using this platform impact my business beyond my developers?”, then there are a few plain answers: it’s going to cost a lot, it’s going to save you from quite a bit of hiring, and it’s going to be boring and unmentioned almost all the time. Generally speaking, most organizations are into that tradeoff, which is why Heroku has been so successful. Nonetheless, given that Heroku is no longer the only fish in the sea, and that its competitors have kept up with modern compute hardware and pricing so much better, we give Heroku a C for organizational friction. It’s a very good service; it just shouldn’t be as expensive as it is in 2026.
👀 Note
There’s one last bit of “organizational friction” that’s more amorphous and would be difficult to give a grade because it’s entirely subjective: trust in the platform and company itself. Do you trust that the platform is moving in the right direction? Building according to your needs and interests? Has the best intentions? Looks out for the customer’s needs?
We’re not going to factor it into our grade here (a C already is what it is), but this is perhaps the single biggest pain point for us with Heroku right now. Heroku has absolutely burned a whole lot of developer trust in the last two months. Unclear messaging, vague direction, a “Sustaining Engineering” model, lots of corporate hand-wavy verbiage… trust in Heroku is at a many-year low.
Tangibly, we’d love to see platforms provide public development roadmaps, transparent communication when things go wrong (or right!), and open spaces for developers to provide feedback that’s taken seriously. Heroku currently fails on all three fronts.
Let’s Wrap It Up
Heroku:
- Shipping Friction: A
- Debugging Friction: B
- Infrastructure Friction: A-
- Organizational Friction: C (for ‘Cost’ 😆)
Perhaps the most central opinion we gave in our “Heroku: What’s Next” article was, “Heroku’s still fine, we’ve got years”. That opinion feels worth reiterating here because it’s hard to look at these grades and not choose Heroku. It’s not a perfect platform by any means, but it still sets quite a high bar in 2026, cost aside.
Our “friction model” grading mechanism and rubric, if anything, might be helping to remind us why we chose Heroku in the first place. It wasn’t flashy features or landmark architectures… Heroku just removes a lot of friction in a lot of different places. You can ship quickly and easily, debug reasonably, focus on building your product, and largely not worry about Heroku at all. That’s nice!
But, of course, this sets the stage for what’s coming: it’s time to find out if this experience and ease of use holds up elsewhere. It’s time to go on tour and move our production application to each of the competitors; time to see how much friction exists on other platforms. We want to find out what’s harder, what’s easier, what’s faster, and what’s rough. That’s the real test, but now we have our baseline!
✅ Tip
Reminder: This article is the first post in our “Judoscale going on tour” series, where we put our money where our mouth is and migrate Judoscale to various platforms. No holding back, no keeping background jobs somewhere else, no splitting traffic.
Judoscale is a 24/7 real-time reactive production application. We receive well over 3,000 RPS every moment of every day. Our downtime is exceedingly rare (generally only when Cloudflare or Heroku themselves have issues), but then, it darn well should be! We’re an autoscaler! We need to be online, regardless of traffic load, so that we can reactively scale our clients’ applications correctly and appropriately any time of day.
Sign up for our newsletter to join us on this tour as we discover the nooks and crannies of 2026’s available PaaS’s. If you’ve been thinking about moving, let us feel the pain first — we’ll tell you all about it 😆.