Helping XBE Scale for a Lumpy Workload

This case study is based on our chat with Sean Devine, CEO of XBE.

Introducing XBE and their construction management platform

XBE started in 2016 as a platform for managing horizontal construction projects, such as building new highways, bike paths and parking lots. Their users are large contractors whose setup includes quarries, asphalt plants, construction crews and truck fleets.

Coordinating all these moving parts is a considerable undertaking, which is where XBE comes in.

Their software manages the entire production chain, from materials to transport to manufacturing and construction. Each company on XBE has hundreds of daily users across every level of operations.

Lumpy workloads with varying priorities

XBE’s software runs a range of computationally intensive calculations, such as job site simulations, whose results must be cached. It also connects to 100+ external systems that poll for information or queue up data to be processed.

All of this creates lumpy demand on their worker dynos, with daily rush hours as the workday begins for users in each time zone.

We intentionally overscaled, but still sometimes had the problem of not having enough resources.

This put XBE in a difficult position. Keeping the system overscaled drove up costs, yet spikes still overwhelmed their resources and interrupted the team with alerts.

Finding their ideal fit with Judoscale

Sean and his team estimated they were overspending by a few thousand dollars a month. They weren’t autoscaling their Sidekiq processes because Heroku’s autoscaler doesn’t support worker dynos.
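Autoscaling workers hinges on a different signal than CPU or memory: how long jobs sit waiting in a queue. As a rough illustration (not Judoscale’s internal code), Sidekiq’s public API exposes that signal directly:

    require "sidekiq/api"

    # Report how long the oldest job in each Sidekiq queue has been waiting.
    # A sustained rise in this latency is the classic cue to add worker dynos.
    Sidekiq::Queue.all.each do |queue|
      puts "#{queue.name}: #{queue.latency.round(1)}s behind"
    end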

Thankfully, one of the XBE team members remembered a conversation about queue time vs response time that they’d had with Adam, Judoscale’s founder, in Nate Berkopec’s Rails Performance Slack group.
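Queue time is the gap between the Heroku router accepting a request and a dyno starting to process it, which makes it a much cleaner scaling signal than response time, since slow code and undersized dynos look the same in response time. A minimal sketch of the idea as a Rack middleware (illustrative only, not Judoscale’s actual adapter), assuming Heroku’s X-Request-Start header carries milliseconds since epoch:

    # Illustrative only: measure request queue time from Heroku's X-Request-Start header.
    class QueueTimeSampler
      def initialize(app)
        @app = app
      end

      def call(env)
        if (start = env["HTTP_X_REQUEST_START"])
          request_start_ms = start.delete("^0-9").to_f       # keep digits only
          queue_time_ms = (Time.now.to_f * 1000) - request_start_ms
          # A real adapter would report queue_time_ms to the autoscaler here.
        end
        @app.call(env)
      end
    end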

When diving into the details of Judoscale, they confirmed that it met their three priorities:

  1. Reduce unnecessary spending without compromising user experience
  2. Provide fine-grained control over different queue conditions
  3. Reduce risk, with clear consideration of every edge case

After making the decision, they had Judoscale up and running in around 15 minutes, with most of that time spent fine-tuning their configuration and time ranges.
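The installation itself is mostly a matter of adding the adapter gems. As an assumption based on Judoscale’s current packaging (check the docs for exact names), a Rails app with Sidekiq would add something like:

    # Gemfile (gem names assumed from Judoscale's current adapters)
    gem "judoscale-rails"    # reports web request queue time
    gem "judoscale-sidekiq"  # reports Sidekiq queue latency for worker dynos

The scaling ranges and schedules themselves are then typically tuned from the Judoscale dashboard rather than in code.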

We could see the care in Judoscale’s documentation: the gems were well-documented and designed with a general sense of quality.

Their estimates proved correct: autoscaling has saved XBE thousands while maintaining performance for users.

Feeding into upcoming features

XBE are now keen supporters of Judoscale. They have even contributed to the roadmap, with new features planned around their needs and ideas, such as ways to help users identify the best settings.

Our servers now happily scale anywhere from 2 to 15 dynos, while giving me peace of mind that there’s no risk involved.

If you would like to try Judoscale for your own application, you can use our White Belt plan for free. Click here to get up and running in minutes.