Understanding the Judoscale Adapters' Performance and Compatibility
What is the Performance Impact of the Judoscale Adapter?
The adapters have no noticeable impact on response times or job run times. They collect the queue time for each request or job in memory, a very lightweight operation, and an async reporter thread periodically posts those queue times to the Judoscale back-end. Check out the middleware code on GitHub if you're interested.
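The approach above can be sketched as a tiny Rack middleware. This is an illustrative sketch, not Judoscale's actual code; it assumes Heroku's router sets the X-Request-Start header to the request's arrival time in milliseconds, and the class and buffer names are made up for the example.

```ruby
# Illustrative sketch only, not Judoscale's implementation. Assumes the
# X-Request-Start header holds the request's arrival time in ms since epoch.
class QueueTimeMiddleware
  def initialize(app, buffer = [])
    @app = app
    @buffer = buffer # in-memory store a reporter thread would flush periodically
  end

  def call(env)
    arrived_ms = env["HTTP_X_REQUEST_START"].to_i
    if arrived_ms > 0
      now_ms = (Time.now.to_f * 1000).to_i
      @buffer << [now_ms - arrived_ms, 0].max # clamp clock skew to zero
    end
    @app.call(env) # hand off to the app; no extra work on the request path
  end
end
```

Because the middleware only does a subtraction and an array push per request, the per-request overhead is negligible; the network call to the back-end happens on the separate reporter thread.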
Can I use other autoscalers with Judoscale?
You can use different autoscalers for different dynos. For example, you could use Heroku’s native autoscaling for web dynos and Judoscale for worker dynos.
Do not, however, run multiple autoscalers on the same dynos: each autoscaler reacts to the other's scaling actions, which results in unpredictable scaling behavior.
Can I export Judoscale metrics to another tool?
We don’t have a public metrics API yet, but it’s on our roadmap. If you need to correlate queue time with other telemetry today, the Judoscale middleware exposes queue time via the Rack environment, so you can forward it to your logging or observability stack alongside your existing data.
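For example, a small middleware installed downstream of the Judoscale adapter could read that value out of the Rack env and emit it as a log line your observability stack can pick up. The env key used here is an assumption for illustration; check the adapter source for the exact key your version sets.

```ruby
# Sketch of forwarding queue time to a log sink. The env key
# "judoscale.queue_time" is an assumed name for illustration.
class QueueTimeLogger
  def initialize(app, sink = $stdout)
    @app = app
    @sink = sink # anything responding to <<, e.g. an IO or a log buffer
  end

  def call(env)
    response = @app.call(env)
    queue_time = env["judoscale.queue_time"] # assumed key; verify in the adapter source
    @sink << "queue_time_ms=#{queue_time}\n" if queue_time
    response
  end
end
```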
What data does Judoscale collect from my app?
Each adapter periodically sends queue-time metrics to the Judoscale back-end so we can aggregate them and apply our autoscaling algorithm. Alongside those metrics we collect a small amount of metadata (language, framework, adapter version, and basic configuration) to make sure we interpret the data correctly. All adapter packages are open source, so you can review exactly how those metrics are produced before installing them.
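As a rough illustration, a report could look something like the hash below. The field names and shape here are assumptions for the example; the real wire format is defined by the open-source adapter code.

```ruby
# Hypothetical report shape, for illustration only. The actual payload
# format lives in the open-source adapter packages.
report = {
  metadata: {
    language: "ruby",          # runtime the adapter is running in
    framework: "rails",        # detected framework
    adapter_version: "1.0.0",  # made-up version string
  },
  # [unix_timestamp_ms, queue_time_ms] pairs collected since the last report
  metrics: [
    [1_700_000_000_000, 12],
    [1_700_000_000_250, 8],
  ],
}
```

Note that only timing metrics and this kind of small configuration metadata are sent; request bodies, parameters, and other application data never leave your app.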
Can I run Judoscale without sending data to Judoscale servers?
No. The autoscaling configuration and logic run on Judoscale’s infrastructure, so we need the metrics reported from your app to make scaling decisions and call the appropriate platform APIs. There isn’t an airgapped or isolated mode available today.