Helping PostPilot Process Gigabytes of PDFs

This case study is based on our chat with Matt Bertino, founder and CTO of PostPilot and a vocal advocate for our autoscaler add-on.

Introducing PostPilot and their direct mail service

PostPilot is the #1 postcard direct mail marketing service for Shopify and Ecommerce. They give businesses the power to mail out personalized, profit-generating postcards and handwritten notes.

Their service has been a great success, growing rapidly over the last two years into a company of more than 50 people.

Users receive a fully managed experience: PostPilot can design the mailings, manage audiences, print them, and mail each piece. This includes the option to create handwritten notes using real pen-wielding robots.

Generating 30MB print files

PostPilot runs their own print production facility, complete with industrial-size printers, cutters and a small forest’s worth of paper. Managing this takes a set of production apps to handle a range of requirements.

A key step is generating a separate print-ready PDF file for each recipient. Each file is around 30MB, and there can easily be thousands in a batch at a moment's notice. The faster they can generate those files, the faster they can be printed and sent off.
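PostPilot hasn't shared their rendering code, but the general shape of this kind of fan-out is worth sketching. In the hypothetical Sidekiq job below, Recipient, PrintPdfRenderer and PrintFileStore are placeholder names; the idea is simply that each recipient's file becomes one independent job, so a batch spreads across however many worker dynos happen to be running.

```ruby
require "sidekiq"

# Hypothetical sketch: one job per recipient, so a batch of thousands
# of ~30MB renders spreads evenly across the available workers.
class RenderPrintPdfJob
  include Sidekiq::Job
  sidekiq_options queue: "pdf_rendering", retry: 3

  def perform(recipient_id)
    recipient = Recipient.find(recipient_id)  # placeholder model

    # PrintPdfRenderer stands in for whatever actually builds the
    # print-ready, double-sided PDF for this recipient.
    pdf_data = PrintPdfRenderer.new(recipient).render

    # Hand the finished file to wherever the print facility reads from.
    PrintFileStore.upload("#{recipient.id}.pdf", pdf_data)
  end
end

# Kicking off a batch is just a loop of enqueues; the resulting queue
# depth is exactly the signal an autoscaler can watch.
recipient_ids.each { |id| RenderPrintPdfJob.perform_async(id) }
```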

Handling massive spikes in webhooks

As with most Shopify apps, PostPilot sees a huge increase in webhook traffic around the holiday season.

99.9% of our app's resources were used to deal with webhook-related processing.

From web dynos to worker dynos, these floods of webhooks end up causing highly varied loads across their system. Their developers were manually dialing up the dyno count when necessary, trying to limit the cost of excess capacity, but that was becoming a constant headache.

In search of autoscaling

PostPilot’s first approach was to squeeze as much juice as possible from Sidekiq by tweaking their queues and priorities, but it only got them so far. With a high Heroku bill, they were reluctant to permanently raise their dyno count, so they turned to autoscaling solutions.
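To illustrate what that kind of tuning can look like (queue names and weights here are ours, not PostPilot's), Sidekiq lets you weight queues in sidekiq.yml so higher-priority work gets picked up more often:

```yaml
# Hypothetical sidekiq.yml: weighted queues mean each worker checks
# pdf_rendering five times as often as default, without starving it.
:concurrency: 4
:queues:
  - [pdf_rendering, 5]
  - [webhooks, 3]
  - [default, 1]
```

Weighting helps, but it only reshuffles a fixed amount of capacity; it can't add dynos when a big batch lands.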

Heroku's built-in autoscaling was unable to help, as it only works with Performance-tier web dynos. So they tried an alternative autoscaling solution they had previously encountered, but got stuck during setup.

Thankfully, they found Judoscale in the Heroku Marketplace, and within around 20 minutes their web and worker dynos were happily scaling.
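As a rough idea of what that 20-minute setup usually involves (not PostPilot's exact steps), it comes down to provisioning the add-on and adding Judoscale's adapter gems, which report request queue time and Sidekiq queue latency so the autoscaler knows when to add or remove dynos:

```ruby
# Sketch of a typical Judoscale setup on Heroku, not PostPilot's exact config.
#
#   heroku addons:create judoscale
#
# Gemfile:
gem "judoscale-rails"    # reports request queue time from web dynos
gem "judoscale-sidekiq"  # reports Sidekiq queue latency from worker dynos
```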

10x faster for negligible effort

Judoscale now regularly ramps up PostPilot's capacity to over 50 worker dynos. With four threads running in each Sidekiq process, they estimate roughly 10x faster throughput on their core PDF processing.

Their requirements have also expanded, such as splitting out several more independently autoscaled worker processes for their new in-house rendering pipeline. This has allowed them to grow to millions of print pieces per month, each fully rendered and customized on both sides.

$39/month to remove a whole heap of hassle

PostPilot started off on the $39/month plan, which rapidly provided a strong ROI. The initial priority was freeing Matt from having to log into Heroku each time he received an email alert about a spike.

I feel like it’s been a bargain for the headaches it saves me. It lets us not have to dive into Heroku each time there’s a burst of activity.

Now with ever-growing throughput and thousands of dollars in server costs, Judoscale is a core part of their stack. It lets them maintain throughput while controlling costs, all for minimal effort.

These days their Judoscale setup runs along happily in the background, mostly left to do its thing until the team is working on a new feature or process, such as migrating their UI to Hotwire and Turbo Streams.

If you are using Heroku, I cannot see why you wouldn't add Judoscale, especially if you have variable, high demand on your web or worker dynos.

If you would like to try Judoscale for your own application, you can use our White Belt plan for free. Click here to get up and running in minutes.