Solid Queue in Rails 8: Install, Migrate, and Deploy
Install Solid Queue in Rails 8 with database migrations, configure recurring jobs and concurrency, then deploy with Kamal. Includes Mission Control monitoring.
Rails 8 ships with jobs that run on your database - no Redis required. After running Solid Queue in production across multiple Rails 8 apps, here’s what you need to know about setup, when to trust it, and where the edges are.
For solo developers and small teams building SaaS applications, this is a significant simplification: you can drop Redis from your stack for most use cases while gaining robust background job processing with built-in monitoring.
Quick Reference
Q: Is Solid Queue the default background job backend in Rails 8?
Yes. Rails 8 ships with Solid Queue as the default Active Job adapter, replacing the need for Redis-backed alternatives like Sidekiq or Resque. New Rails 8 apps include Solid Queue out of the box - just run bin/rails db:migrate and your jobs run on your existing database.
Q: What environment variable configures Puma to manage Solid Queue in a Kamal deployment?
Set SOLID_QUEUE_IN_PUMA=1 in your environment. In Rails 8 with Kamal, this tells Puma to automatically start and supervise the Solid Queue supervisor process. You can also use plugin :solid_queue directly in config/puma.rb, or the conditional ENV['PUMA_RUN_JOBS'] approach for more control over which servers run background jobs.
Q: How do I install Solid Queue in an existing Rails app?
Run bin/rails solid_queue:install to generate the CreateSolidQueueTables migration and config files, then run bin/rails db:migrate. This creates the solid_queue_ready_executions, solid_queue_claimed_executions, solid_queue_blocked_executions, and related tables in your database.
Q: What tables does the CreateSolidQueueTables migration create?
bin/rails solid_queue:install generates the CreateSolidQueueTables migration. Running bin/rails db:migrate creates 11 tables: solid_queue_jobs (the main jobs table), solid_queue_ready_executions, solid_queue_claimed_executions, solid_queue_blocked_executions, solid_queue_scheduled_executions, solid_queue_failed_executions, solid_queue_recurring_executions, solid_queue_recurring_tasks, solid_queue_pauses, solid_queue_processes, and solid_queue_semaphores. All execution tables have foreign keys to solid_queue_jobs with ON DELETE CASCADE.
Q: What does each Solid Queue table do?
solid_queue_jobs stores all job data (class, arguments, priority, queue). solid_queue_ready_executions holds jobs ready to run. solid_queue_claimed_executions tracks jobs locked by a worker process. solid_queue_blocked_executions holds jobs waiting on concurrency limits. solid_queue_scheduled_executions stores jobs scheduled for future execution. solid_queue_failed_executions records failed jobs with error details. solid_queue_recurring_executions and solid_queue_recurring_tasks manage cron-style recurring jobs. solid_queue_pauses tracks paused queues. solid_queue_processes registers running worker/dispatcher processes with heartbeats. solid_queue_semaphores implements concurrency control via database-level semaphores.
What Solid Queue Is (and Isn’t)
Solid Queue is a database-backed Active Job backend that ships with Rails 8. It gives you:
- Delayed jobs - Schedule jobs for future execution
- Recurring jobs - Cron-like scheduling without cron
- Concurrency control - Limit simultaneous jobs by type or arguments
- Priority queues - Process critical jobs first
- Built-in monitoring - Dashboard via Mission Control - Jobs
Solid Queue vs Sidekiq: Key Differences
Traditionally, background jobs in Rails meant Sidekiq + Redis. That’s a proven pattern, but it comes with operational overhead that Solid Queue eliminates for most applications.
| Feature | Solid Queue (Rails 8) | Sidekiq + Redis |
|---|---|---|
| Backend | Your existing DB (PostgreSQL, MySQL, or SQLite) | Redis (separate service) |
| Throughput | ~1,000-2,000 jobs/min | ~10,000+ jobs/min |
| Job latency | 100ms-5s (polling) | 5-10ms (pub/sub) |
| Recurring jobs | Built-in (recurring.yml) | Requires sidekiq-cron gem |
| Concurrency control | Built-in (limits_concurrency) | Sidekiq Enterprise only |
| Monitoring | Mission Control (free) | Sidekiq Web (free) / Pro ($) |
| Infrastructure cost | $0 extra (uses Postgres) | $50-150+/mo (managed Redis) |
| Transactional enqueue | Yes (same DB transaction) | No (separate datastore) |
| Setup complexity | Minimal (ships with Rails 8) | Moderate (Redis + gem config) |
When Solid Queue Shines
Here’s where it excels:
- SaaS applications with moderate job volumes (< 1000 jobs/minute)
- E-commerce and payment apps needing transactional job enqueueing
- Startups wanting to minimize infrastructure complexity
- Internal tools where operational simplicity matters
- Apps already using PostgreSQL (no additional databases needed)
What Solid Queue Isn’t
Be realistic about limitations:
- Not a Redis replacement for caching - Use Solid Cache for that
- Not for ultra-high throughput - Sidekiq+Redis is faster for millions of jobs/day
- Not a message bus - Use ActionCable or Kafka for pub/sub patterns
Architecture in Two Minutes
Understanding Solid Queue’s design helps you use it effectively.
Three Core Actors
1. Workers - Process jobs from queues
# Each worker polls a queue
worker_1: polling "critical" queue
worker_2: polling "default" queue
worker_3: polling "mailers" queue
2. Dispatcher - Routes jobs to workers based on priority and concurrency rules
3. Scheduler - Enqueues recurring jobs at specified times
How Polling Works
Solid Queue claims jobs with the FOR UPDATE SKIP LOCKED clause (supported by PostgreSQL 9.5+ and MySQL 8+):
-- Multiple workers can poll simultaneously without blocking
SELECT * FROM solid_queue_ready_executions
WHERE queue_name = 'default'
ORDER BY priority, job_id
LIMIT 1
FOR UPDATE SKIP LOCKED;
This prevents thundering herd problems and ensures each job is claimed by exactly one worker. Note the ordering: a lower priority value runs first, with job_id breaking ties.
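The ordering rule is worth internalizing: in Solid Queue a lower priority value means a higher priority, and insertion order (job_id) breaks ties. A toy plain-Ruby model of that claim order (an illustration, not Solid Queue's actual code):

```ruby
# Toy model of the claim order: lower priority value first, then oldest job.
Job = Struct.new(:id, :priority)

ready = [
  Job.new(3, 10),  # enqueued last, low priority
  Job.new(1, 0),   # enqueued first, high priority
  Job.new(2, 0)    # same priority, enqueued later
]

claim_order = ready.sort_by { |j| [j.priority, j.id] }.map(&:id)
claim_order  # => [1, 2, 3]
```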
Single vs Separate Databases
Single Database (default):
- Simplest setup
- Jobs share connection pool with app queries
- Works great for most applications
Separate Queue Database:
- Isolates job processing from app queries
- Useful for high-traffic apps
- Prevents job processing from blocking user requests
# config/database.yml
production:
primary:
<<: *default
database: myapp_production
queue:
<<: *default
database: myapp_queue_production
migrations_paths: db/queue_migrate
Start with a single database. Move to separate databases only if you see connection pool exhaustion or slow queries caused by job processing.
Getting Started (Rails 8 New App)
Rails 8 includes Solid Queue by default. For new apps, you’re ready to go:
# Create new Rails 8 app
rails new myapp
# Solid Queue is already configured
# Just run migrations
bin/rails db:migrate
# Start job processor
bin/jobs
Adding to Existing Rails Apps
If you’re upgrading an existing app:
# Add to Gemfile
gem 'solid_queue'
# Install
bundle install
bin/rails solid_queue:install
# This generates the CreateSolidQueueTables migration and creates:
# - config/solid_queue.yml (worker/dispatcher configuration)
# - db/queue_schema.rb (schema for separate queue database)
# - db/migrate/XXXXXX_create_solid_queue_tables.rb (migration file)
# Run migrations
bin/rails db:migrate
The CreateSolidQueueTables Migration
Here’s the schema that bin/rails solid_queue:install generates - these are the tables created when you run bin/rails db:migrate:
# db/migrate/XXXXXX_create_solid_queue_tables.rb
# Generated by: bin/rails solid_queue:install
#
# Creates 11 tables for Solid Queue's database-backed job processing
create_table "solid_queue_jobs" do |t|
t.string "queue_name", null: false
t.string "class_name", null: false
t.text "arguments"
t.integer "priority", default: 0, null: false
t.string "active_job_id"
t.datetime "scheduled_at"
t.datetime "finished_at"
t.string "concurrency_key"
t.datetime "created_at", null: false
t.datetime "updated_at", null: false
end
create_table "solid_queue_ready_executions" do |t|
t.bigint "job_id", null: false
t.string "queue_name", null: false
t.integer "priority", default: 0, null: false
t.datetime "created_at", null: false
end
create_table "solid_queue_claimed_executions" do |t|
t.bigint "job_id", null: false
t.bigint "process_id"
t.datetime "created_at", null: false
end
create_table "solid_queue_blocked_executions" do |t|
t.bigint "job_id", null: false
t.string "queue_name", null: false
t.integer "priority", default: 0, null: false
t.string "concurrency_key", null: false
t.datetime "expires_at", null: false
t.datetime "created_at", null: false
end
create_table "solid_queue_scheduled_executions" do |t|
t.bigint "job_id", null: false
t.string "queue_name", null: false
t.integer "priority", default: 0, null: false
t.datetime "scheduled_at", null: false
t.datetime "created_at", null: false
end
create_table "solid_queue_failed_executions" do |t|
t.bigint "job_id", null: false
t.text "error"
t.datetime "created_at", null: false
end
create_table "solid_queue_recurring_executions" do |t|
t.bigint "job_id", null: false
t.string "task_key", null: false
t.datetime "run_at", null: false
t.datetime "created_at", null: false
end
create_table "solid_queue_recurring_tasks" do |t|
t.string "key", null: false
t.string "schedule", null: false
t.string "command", limit: 2048
t.string "class_name"
t.text "arguments"
t.string "queue_name"
t.integer "priority", default: 0
t.boolean "static", default: true, null: false
t.text "description"
t.datetime "created_at", null: false
t.datetime "updated_at", null: false
end
create_table "solid_queue_pauses" do |t|
t.string "queue_name", null: false
t.datetime "created_at", null: false
end
create_table "solid_queue_processes" do |t|
t.string "kind", null: false
t.datetime "last_heartbeat_at", null: false
t.bigint "supervisor_id"
t.integer "pid", null: false
t.string "hostname"
t.text "metadata"
t.datetime "created_at", null: false
t.string "name", null: false
end
create_table "solid_queue_semaphores" do |t|
t.string "key", null: false
t.integer "value", default: 1, null: false
t.datetime "expires_at", null: false
t.datetime "created_at", null: false
t.datetime "updated_at", null: false
end
# All execution tables cascade-delete when the parent job is removed
add_foreign_key "solid_queue_blocked_executions", "solid_queue_jobs",
column: "job_id", on_delete: :cascade
add_foreign_key "solid_queue_claimed_executions", "solid_queue_jobs",
column: "job_id", on_delete: :cascade
add_foreign_key "solid_queue_failed_executions", "solid_queue_jobs",
column: "job_id", on_delete: :cascade
add_foreign_key "solid_queue_ready_executions", "solid_queue_jobs",
column: "job_id", on_delete: :cascade
add_foreign_key "solid_queue_recurring_executions", "solid_queue_jobs",
column: "job_id", on_delete: :cascade
add_foreign_key "solid_queue_scheduled_executions", "solid_queue_jobs",
column: "job_id", on_delete: :cascade
Key things to note about the schema:
- solid_queue_jobs is the central table - all execution tables reference it via job_id foreign keys with ON DELETE CASCADE
- Each execution type (ready, claimed, blocked, scheduled, failed, recurring) has a unique index on job_id - a job can only be in one execution state at a time
- solid_queue_semaphores implements concurrency control using database-level row locking
- solid_queue_processes tracks running workers and dispatchers via heartbeats for process supervision
- Indexes are optimized for polling queries using SKIP LOCKED (e.g., index_solid_queue_poll_by_queue on [queue_name, priority, job_id])
Basic Configuration
# config/solid_queue.yml
production:
dispatchers:
- polling_interval: 1
batch_size: 500
workers:
- queues: critical
threads: 5
processes: 2
polling_interval: 0.1
- queues: default
threads: 3
processes: 3
polling_interval: 1
- queues: low_priority
threads: 2
processes: 1
polling_interval: 5
Key settings:
- threads - Concurrent jobs per process
- processes - Number of worker processes per queue
- polling_interval - How often to check for new jobs (seconds)
Development vs Production
# config/solid_queue.yml
development:
workers:
- queues: "*" # Process all queues
threads: 1
processes: 1
polling_interval: 2
production:
workers:
# Separate workers per queue for better control
- queues: critical
threads: 5
processes: 2
- queues: [default, mailers]
threads: 3
processes: 3
Running Jobs in Development
# Terminal 1: Rails server
bin/rails server
# Terminal 2: Job processor
bin/jobs
Or use the Puma plugin to run jobs in the same process:
# config/puma.rb
plugin :solid_queue
# Now jobs run automatically with Puma
# Great for development, be cautious in production
Solid Queue Recurring Jobs in Rails
Solid Queue handles recurring jobs natively through config/recurring.yml - no cron, no whenever gem, no external scheduler. You define schedules in YAML and Solid Queue’s supervisor process runs them automatically.
Basic Recurring Job
# config/recurring.yml
production:
send_daily_summary:
class: DailySummaryJob
schedule: every day at 9am
cleanup_old_sessions:
class: SessionCleanupJob
schedule: every 1 hour
process_subscriptions:
class: SubscriptionChargeJob
schedule: every day at 2am
queue: critical
generate_reports:
class: ReportGenerationJob
schedule: "0 */4 * * *" # Every 4 hours (cron syntax)
Idempotency Matters
Recurring jobs may run multiple times due to retries or scheduler issues. Make them idempotent:
# app/jobs/daily_summary_job.rb
class DailySummaryJob < ApplicationJob
queue_as :default
def perform
today = Date.current
# Only process if not already done today
return if DailySummary.exists?(date: today, status: 'completed')
# Create a record to track execution
summary = DailySummary.create!(date: today, status: 'processing')
begin
# Generate summary
users = User.active.includes(:transactions)
data = generate_summary_data(users)
# Save results
summary.update!(
data: data,
status: 'completed',
completed_at: Time.current
)
rescue => e
summary.update!(status: 'failed', error: e.message)
raise # Re-raise to trigger retry
end
end
end
Recurring Jobs with Arguments
# config/recurring.yml
production:
sync_user_data:
class: UserSyncJob
args: [{ force: true }]
schedule: every 6 hours
send_notifications:
class: NotificationJob
args: ["daily_digest"]
schedule: every day at 8am
Production Example: Daily Reconciliation
# app/jobs/transaction_reconciliation_job.rb
class TransactionReconciliationJob < ApplicationJob
queue_as :critical
retry_on StandardError, wait: :polynomially_longer, attempts: 3
def perform
date = Date.current - 1.day
# Skip if already reconciled
return if Reconciliation.completed_for_date?(date)
reconciliation = Reconciliation.create!(
date: date,
status: 'in_progress'
)
Transaction.unreconciled.find_each(batch_size: 500) do |transaction|
ReconciliationService.process(transaction)
end
reconciliation.update!(
status: 'completed',
completed_at: Time.current
)
end
end
# config/recurring.yml
production:
reconcile_transactions:
class: TransactionReconciliationJob
schedule: every day at 1am
queue: critical
Retry Behavior
Important: Solid Queue doesn’t handle retries itself. That’s Active Job’s responsibility:
class MyJob < ApplicationJob
# Active Job retry configuration
retry_on TimeoutError, wait: 5.minutes, attempts: 3
retry_on ApiError, wait: :polynomially_longer, attempts: 5
discard_on ActiveRecord::RecordNotFound
def perform(user_id)
# Job logic
end
end
Concurrency Controls That Actually Help
Solid Queue’s concurrency controls prevent race conditions and resource exhaustion.
Limit Jobs by Type
# app/jobs/report_generation_job.rb
class ReportGenerationJob < ApplicationJob
queue_as :default
# Only 3 report jobs can run simultaneously (a plain string key works for a global limit)
limits_concurrency to: 3, key: "report_generation"
def perform(user_id, report_type)
# Generate CPU-intensive report
user = User.find(user_id)
ReportGenerator.create(user, report_type)
end
end
Limit by Arguments (Per-Resource)
# app/jobs/invoice_export_job.rb
class InvoiceExportJob < ApplicationJob
queue_as :default
# Only 1 invoice export per account at a time
limits_concurrency to: 1, key: -> (account_id) { "invoice_export_#{account_id}" }
def perform(account_id)
account = Account.find(account_id)
# This can take several minutes
InvoiceExporter.generate_all(account)
end
end
Why this matters: Without concurrency limits, enqueueing 100 invoice exports for the same account could cause:
- Database lock contention
- Race conditions
- Wasted processing
With limits, subsequent jobs wait until the first completes.
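Under the hood, Solid Queue enforces these limits with rows in solid_queue_semaphores: a counter per key, and jobs that can't acquire a slot are parked as blocked executions until a running job releases it. A conceptual plain-Ruby sketch (the class name is hypothetical; this is not Solid Queue's implementation):

```ruby
# Conceptual model of a concurrency semaphore: each key tracks how many jobs
# currently hold it; jobs that can't acquire a slot are parked as "blocked".
class ConcurrencySemaphore
  def initialize
    @running = Hash.new(0)
    @blocked = Hash.new { |h, k| h[k] = [] }
  end

  # Returns true if the job may run now, false if it must wait.
  def acquire(key, limit)
    return false unless @running[key] < limit
    @running[key] += 1
    true
  end

  def park(key, job)
    @blocked[key] << job
  end

  # Releasing a slot unblocks the oldest waiting job, if any.
  def release(key)
    @running[key] -= 1
    @blocked[key].shift
  end
end

sem = ConcurrencySemaphore.new
sem.acquire("invoice_export_42", 1)  # => true, first export runs
sem.park("invoice_export_42", :export_2) unless sem.acquire("invoice_export_42", 1)
sem.release("invoice_export_42")     # => :export_2, now unblocked
```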
Production Example: Payment Processing
# app/jobs/payment_processor_job.rb
class PaymentProcessorJob < ApplicationJob
queue_as :critical
# Only 1 payment per user at a time (prevent double-charging)
limits_concurrency to: 1, key: -> (transaction_id) {
transaction = Transaction.find(transaction_id)
"payment_processing_user_#{transaction.user_id}"
}
retry_on PaymentGateway::TemporaryError,
wait: :polynomially_longer,
attempts: 5
discard_on PaymentGateway::CardDeclined
def perform(transaction_id)
transaction = Transaction.find(transaction_id)
# Process payment with gateway
result = PaymentGateway.charge(
amount: transaction.amount,
token: transaction.payment_token
)
transaction.update!(
status: 'completed',
gateway_transaction_id: result.id
)
# Enqueue follow-up jobs
SendReceiptJob.perform_later(transaction.id)
UpdateAccountingJob.perform_later(transaction.id)
end
end
Concurrency with Expiry
class ApiSyncJob < ApplicationJob
# Limit to 5 concurrent, expire lock after 10 minutes
limits_concurrency to: 5,
key: -> { "api_sync" },
duration: 10.minutes
def perform
# Sync data from external API
end
end
Failure Handling & Manual Re-enqueue
Failed jobs stay in the database for inspection:
# In Rails console
failed = SolidQueue::FailedExecution.last
# Inspect error details (exception class, message, backtrace)
failed.error
# Fix data and retry
failed.retry
# Or discard permanently
failed.job.destroy
Observability & Operations
Mission Control - Jobs
Mission Control - Jobs is the companion web dashboard for Solid Queue (a separate gem from 37signals):
# Gemfile
gem 'mission_control-jobs'
# Mount in routes
Rails.application.routes.draw do
mount MissionControl::Jobs::Engine, at: "/jobs"
end
Visit /jobs to see:
- Active jobs - Currently processing
- Scheduled jobs - Waiting to run
- Failed jobs - With errors and backtraces
- Recurring jobs - Schedule and last run
- Queue stats - Throughput and latency
Dashboard Features
Retry/Discard Actions:
# From the UI, you can:
# - Retry failed jobs individually or in bulk
# - Discard jobs that shouldn't retry
# - View full error traces
# - Inspect job arguments
Queue Inspection:
- See pending job counts per queue
- Identify backlog issues
- Monitor job processing rates
- Track average execution time
Authentication
Protect your dashboard in production:
# config/routes.rb
authenticate :user, ->(user) { user.admin? } do
mount MissionControl::Jobs::Engine, at: "/jobs"
end
# Or with Mission Control's built-in HTTP basic auth
# config/application.rb (or an environment file)
config.mission_control.jobs.http_basic_auth_user = ENV["JOBS_USERNAME"]
config.mission_control.jobs.http_basic_auth_password = ENV["JOBS_PASSWORD"]
AppSignal Integration
For production-grade monitoring:
# Gemfile
gem 'appsignal'
# config/initializers/appsignal.rb
Appsignal.configure do |config|
config.active = true
config.push_api_key = ENV['APPSIGNAL_PUSH_API_KEY']
end
AppSignal automatically tracks:
- Job execution time
- Failure rates
- Queue depth
- Error details
Set up alerts:
- Notify when job queue depth > 1000
- Alert on job failure rate > 5%
- Warn if job execution time > 5 minutes
Production Readiness Checklist
Before deploying Solid Queue to production, verify:
1. Database Setup
# Use separate queue database (recommended for high traffic)
# config/database.yml
production:
primary:
database: myapp_production
queue:
database: myapp_queue_production
migrations_paths: db/queue_migrate
Or at minimum, point Solid Queue at a dedicated named connection:
# config/application.rb (this is what Rails 8 generates in production.rb)
config.solid_queue.connects_to = { database: { writing: :queue } }
2. Indexes
Solid Queue’s migrations include necessary indexes, but verify:
# db/queue_schema.rb should include:
# - index_solid_queue_poll_by_queue on [queue_name, priority, job_id]
# - Index on key (solid_queue_semaphores, for concurrency control)
# - Index on active_job_id (solid_queue_jobs)
# - Index on scheduled_at (solid_queue_scheduled_executions)
3. Worker Thread Counts
Match your workload:
# config/solid_queue.yml
production:
workers:
# CPU-intensive jobs: fewer threads
- queues: reports
threads: 2
processes: 2
# I/O-bound jobs: more threads
- queues: [mailers, api_calls]
threads: 10
processes: 2
# Mixed: moderate threads
- queues: default
threads: 5
processes: 3
Rule of thumb:
- CPU-intensive: threads ≤ CPU cores
- I/O-bound: threads = 2-5x CPU cores
- Mixed: threads = 1-2x CPU cores
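These multipliers can be turned into a quick sizing helper. The sketch below uses the stdlib's Etc.nprocessors; the workload categories and factors are this article's heuristics, not anything Solid Queue exposes:

```ruby
require "etc"

# Suggest a worker thread count from the rules of thumb above.
def suggested_threads(workload, cores: Etc.nprocessors)
  case workload
  when :cpu_bound then cores         # threads <= CPU cores
  when :io_bound  then cores * 3     # 2-5x cores; 3x as a middle ground
  when :mixed     then cores * 2     # 1-2x cores
  else raise ArgumentError, "unknown workload: #{workload}"
  end
end

suggested_threads(:cpu_bound, cores: 4)  # => 4
suggested_threads(:io_bound,  cores: 4)  # => 12
```

Plug the results into the threads: keys in config/solid_queue.yml, then adjust based on observed queue depth and CPU usage.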
4. Graceful Shutdown
Ensure jobs complete before restart. Solid Queue traps TERM and gives in-flight jobs a grace period to finish before releasing their claimed executions:
# config/environments/production.rb
# Time to wait for running jobs on shutdown (Solid Queue's default is 5 seconds)
config.solid_queue.shutdown_timeout = 30.seconds
# If bin/jobs runs under a process manager, set its stop timeout
# (e.g. systemd TimeoutStopSec) slightly higher than shutdown_timeout
5. Health & Readiness Probes
# config/routes.rb
get '/health/jobs', to: 'health#jobs'
# app/controllers/health_controller.rb
class HealthController < ApplicationController
  def jobs
    # Workers register in solid_queue_processes and heartbeat periodically
    active_workers = SolidQueue::Process
      .where(kind: "Worker")
      .where("last_heartbeat_at > ?", 5.minutes.ago)
      .count
    # Unfinished jobs waiting in the critical queue
    critical_depth = SolidQueue::Job
      .where(queue_name: "critical", finished_at: nil)
      .count
    if active_workers > 0 && critical_depth < 1000
      render json: { status: "ok" }, status: :ok
    else
      render json: {
        status: "unhealthy",
        active_workers: active_workers,
        critical_queue_depth: critical_depth
      }, status: :service_unavailable
    end
  end
end
6. Rolling Restarts
For zero-downtime deploys:
# Kamal config
# config/deploy.yml
service: myapp
servers:
  web:
    - 192.168.1.1
  jobs:
    hosts:
      - 192.168.1.2
    cmd: bin/jobs
accessories:
  postgres:
    service: postgres
Or with systemd:
# /etc/systemd/system/solid-queue.service
[Unit]
Description=Solid Queue Worker
After=network.target
[Service]
Type=simple
User=deploy
WorkingDirectory=/var/www/myapp
Environment=RAILS_ENV=production
ExecStart=/usr/local/bin/bundle exec bin/jobs
KillMode=mixed
TimeoutStopSec=60
[Install]
WantedBy=multi-user.target
7. Backups
Your job queue is in PostgreSQL, so:
# Backup includes jobs
pg_dump myapp_production > backup.sql
# Or separate queue database
pg_dump myapp_queue_production > queue_backup.sql
8. Known Gotchas
Based on production experience and GitHub issues:
Connection Pool Exhaustion:
# Ensure pool size accommodates workers
# config/database.yml
production:
queue:
pool: <%= ENV.fetch("SOLID_QUEUE_POOL_SIZE", 25) %>
Long-Running Jobs:
# Jobs holding DB connections for hours
# Use streaming or break into smaller jobs
class HugeReportJob < ApplicationJob
  def perform
    # Fan out: one small job per user instead of holding a
    # DB connection for the entire report
    User.find_each(batch_size: 100) do |user|
      ProcessUserReportJob.perform_later(user.id)
    end
  end
end
Clock Drift:
# Ensure servers are time-synced
# Use NTP or cloud provider time sync
When Not to Use Solid Queue
Be honest about limitations. Consider alternatives if:
1. Ultra-High Throughput
Scenario: Processing millions of jobs per day with strict latency requirements
Numbers:
- Solid Queue: ~1000-2000 jobs/minute (depends on job complexity)
- Sidekiq: ~10,000+ jobs/minute
Solution: Use Sidekiq + Redis for high-volume queues
2. Hybrid Pattern
Keep Solid Queue for most jobs, isolate firehose queues to Redis:
# app/jobs/application_job.rb
class ApplicationJob < ActiveJob::Base
# Most jobs use Solid Queue (default)
end
# app/jobs/high_volume_job.rb
class HighVolumeJob < ApplicationJob
  self.queue_adapter = :sidekiq # Override the adapter for this job class
  queue_as :firehose
  def perform(event_data)
    # Process high-volume events
  end
end
# config/environments/production.rb
# Everything else keeps the default adapter
config.active_job.queue_adapter = :solid_queue
3. Real-Time Requirements
If you need sub-100ms job latency, Redis will be faster:
- Solid Queue polling interval: ~100ms in practice (configurable)
- Redis: Near-instant via pub/sub
4. Specialized Job Features
Sidekiq Pro/Enterprise offers:
- Batch job tracking
- Rate limiting
- Unique jobs (no duplicates)
- Web throttling
Solid Queue is simpler but less feature-rich.
Production Migration: The Numbers
Example: Migrating a SaaS application from Sidekiq to Solid Queue. Here’s what changed:
Before (Sidekiq + Redis)
Infrastructure:
- Rails app on 2x $40/month servers
- Redis cluster: $95/month (managed)
- Sidekiq workers: Shared with Rails processes
Performance:
- Job processing: ~500 jobs/minute
- Average job latency: 50ms (enqueue to start)
- Monthly costs: $215 (servers + Redis)
Operational complexity:
- Redis monitoring and alerts
- Redis backup management
- Connection pool tuning for both Postgres and Redis
- Separate Sidekiq configuration
After (Solid Queue)
Infrastructure:
- Rails app on 2x $40/month servers
- PostgreSQL: Already included
- Solid Queue workers: Integrated with Rails
Performance:
- Job processing: ~500 jobs/minute (same workload)
- Average job latency: 150ms (slightly higher, acceptable)
- Monthly costs: $80 (just servers)
Operational complexity:
- Single database to monitor
- Unified backup strategy
- One connection pool to tune
- Built-in Mission Control dashboard
Savings:
- $135/month (63% reduction)
- -1 service to manage
- +Better developer experience (simpler stack)
Trade-off:
- 100ms higher job latency (not impactful for this app)
- Lower theoretical throughput (not reached in practice)
Copy-Paste Snippets
ApplicationJob with Retry Semantics
# app/jobs/application_job.rb
class ApplicationJob < ActiveJob::Base
# Automatically retry jobs that raise StandardError
retry_on StandardError, wait: :polynomially_longer, attempts: 5
# Discard jobs that raised these exceptions
discard_on ActiveJob::DeserializationError
discard_on ActiveRecord::RecordNotFound
# Log job lifecycle
before_perform do |job|
Rails.logger.info "Starting job: #{job.class.name} with #{job.arguments}"
end
after_perform do |job|
Rails.logger.info "Completed job: #{job.class.name}"
end
# Report errors without swallowing them - raising from around_perform
# still lets retry_on handle the exception
around_perform do |job, block|
  block.call
rescue StandardError => e
  Rails.logger.error "Job failed: #{e.message}"
  Rails.error.report(e, handled: false, context: {
    job_class: job.class.name,
    job_arguments: job.arguments
  })
  raise
end
end
Example recurring.yml
# config/recurring.yml
production:
# Send daily summary emails
daily_summary:
class: DailySummaryJob
schedule: every day at 9am
queue: mailers
# Cleanup old records
cleanup_old_sessions:
class: SessionCleanupJob
schedule: every 1 hour
# Process subscription charges
process_subscriptions:
class: SubscriptionChargeJob
schedule: every day at 2am
queue: critical
args: [{ force: false }]
# Generate reports
generate_weekly_reports:
class: WeeklyReportJob
schedule: every monday at 6am
# Sync with external API
sync_external_data:
class: ExternalApiSyncJob
schedule: "*/15 * * * *" # Every 15 minutes (cron syntax)
# Database maintenance
vacuum_database:
class: DatabaseMaintenanceJob
schedule: every day at 3am
queue: low_priority
Sample solid_queue.yml Worker Topology
# config/solid_queue.yml
production:
dispatchers:
- polling_interval: 1
batch_size: 500
concurrency_maintenance_interval: 300
workers:
# Critical queue: High priority, fast polling
- queues: critical
threads: 5
processes: 2
polling_interval: 0.1 # 100ms
# Default queue: Moderate resources
- queues: default
threads: 3
processes: 3
polling_interval: 1
# Mailers: I/O bound, many threads
- queues: mailers
threads: 10
processes: 2
polling_interval: 2
# Reports: CPU intensive, fewer threads
- queues: reports
threads: 2
processes: 1
polling_interval: 5
# Low priority: Minimal resources
- queues: low_priority
threads: 1
processes: 1
polling_interval: 10
development:
workers:
- queues: "*"
threads: 1
processes: 1
polling_interval: 2
Puma Plugin Toggle for Dev/Prod
# config/puma.rb
if ENV.fetch("RAILS_ENV", "development") == "development"
  # In development, run jobs in the same process
  plugin :solid_queue
else
  # In production, run jobs in a separate process
  # (started via systemd, Docker, or Kamal)
end
# Or conditionally based on an environment variable
# (Rails 8's default puma.rb uses SOLID_QUEUE_IN_PUMA the same way)
plugin :solid_queue if ENV["PUMA_RUN_JOBS"] == "true"
Cost & Ops Math: The VPS Deploy
Here’s the economics that make Solid Queue compelling for solopreneurs and SMBs.
Traditional Stack (Sidekiq + Redis)
Managed Infrastructure (Heroku/Render):
- Web dyno: $25/month
- Worker dyno: $25/month
- Postgres: $50/month
- Redis: $95/month
- Total: $195/month
Self-Managed VPS:
- App server: $80/month (4 CPU, 8GB RAM)
- Redis server: $40/month (2 CPU, 4GB RAM)
- Total: $120/month + management time
Solid Queue Stack
Managed Infrastructure:
- Web+Worker dyno: $40/month (combined)
- Postgres: $50/month
- Total: $90/month (54% savings)
Self-Managed VPS (with Kamal):
- App server: $80/month (runs web + jobs)
- Total: $80/month (33% savings)
Additional savings:
- No Redis monitoring costs
- No Redis backup costs
- Simpler deployment (fewer moving parts)
- Faster iteration (one less service to update)
Kamal Deploy Example
# config/deploy.yml (Kamal)
service: myapp
image: username/myapp
servers:
web:
hosts:
- 192.168.1.1
options:
network: "private"
jobs:
cmd: bin/jobs
hosts:
- 192.168.1.1
options:
network: "private"
registry:
username: username
password:
- KAMAL_REGISTRY_PASSWORD
env:
secret:
- DATABASE_URL
- SECRET_KEY_BASE
accessories:
postgres:
image: postgres:16
host: 192.168.1.1
port: 5432
env:
secret:
- POSTGRES_PASSWORD
directories:
- data:/var/lib/postgresql/data
# Deploy everything with one command
kamal deploy
# Zero downtime, automatic rollback on failure
# Web + jobs + database all managed
Latency Real-Talk
Let’s be honest about performance trade-offs.
Round-Trip Costs
Redis (in-memory):
- Enqueue: 1-5ms
- Poll: < 1ms (pub/sub)
- Job start latency: 5-10ms
Solid Queue (PostgreSQL):
- Enqueue: 5-15ms (DB write)
- Poll: 100ms-5s (configurable)
- Job start latency: 100ms-5s
Why SKIP LOCKED Matters
-- Without SKIP LOCKED (old approach)
SELECT * FROM jobs WHERE queue = 'default' LIMIT 1 FOR UPDATE;
-- Workers block waiting for lock, poor performance
-- With SKIP LOCKED (Solid Queue)
SELECT * FROM jobs WHERE queue = 'default' LIMIT 1 FOR UPDATE SKIP LOCKED;
-- Workers never block, each gets different job
This single feature makes database-backed queues viable for production.
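The guarantee is easy to demonstrate in miniature: concurrent workers pulling from a shared list under a lock each claim a distinct job, never the same one twice. A plain-Ruby stand-in, with a Mutex playing the role of the row-level lock (not real SQL, just the concurrency property):

```ruby
jobs = (1..100).to_a      # the "ready executions"
lock = Mutex.new          # stands in for the row-level lock
claimed = Queue.new       # thread-safe collector of claimed jobs

workers = 4.times.map do
  Thread.new do
    # Each claim happens under the lock, so no two workers get the same job
    while (job = lock.synchronize { jobs.shift })
      claimed << job
    end
  end
end
workers.each(&:join)

results = []
results << claimed.pop until claimed.empty?
results.sort == (1..100).to_a  # => true: every job claimed exactly once
```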
When Latency Matters
100ms latency is fine for:
- Email sending (users won’t notice)
- Report generation (already takes minutes)
- Data syncing (background task)
- Cleanup jobs (periodic maintenance)
100ms latency is problematic for:
- Real-time notifications (use ActionCable)
- Payment processing feedback (critical path)
- User-facing workflows (should be synchronous)
Solution: Keep latency-critical jobs in Redis, everything else in Solid Queue.
Toolkit Pairing: The Rails 8 Batteries-Included Story
Solid Queue + Solid Cache + Mission Control = cohesive, simple infrastructure.
The Trinity
Solid Queue - Background jobs
# Replace Sidekiq + Redis
PaymentProcessorJob.perform_later(transaction_id)
Solid Cache - Application caching
# Replace Rails.cache + Redis
Rails.cache.fetch("user_#{user.id}_stats", expires_in: 1.hour) do
expensive_calculation
end
Mission Control - Monitoring dashboard
# Replace Sidekiq Web + Redis Commander
mount MissionControl::Jobs::Engine, at: "/jobs"
The Result
Before Rails 8:
- Rails app
- Redis (Sidekiq)
- Redis (Cache)
- Sidekiq Web
- Redis Commander
- Separate monitoring
After Rails 8:
- Rails app
- PostgreSQL
- Mission Control (built-in)
- Unified monitoring
The Bottom Line
Here’s the honest evaluation:
Solid Queue is the right choice if you:
- Run a typical SaaS, e-commerce, or B2B application
- Process < 1000 jobs/minute
- Value operational simplicity
- Want to minimize infrastructure costs
- Are building with a small team
- Use PostgreSQL already
Stick with Sidekiq + Redis if you:
- Process millions of jobs per day
- Need sub-100ms job latency
- Require Sidekiq Pro/Enterprise features
- Have existing Redis infrastructure
- Need proven scalability for massive throughput
For most applications - including production systems handling serious load - Solid Queue provides the right balance of simplicity and capability.
What’s Next?
Ready to migrate? The next post covers migrating from Sidekiq to Solid Queue, including job adapter compatibility, data migration strategies, zero-downtime cutover, and rollback plans.
Need help setting up Solid Queue in production? I help teams with background job architecture, performance tuning, and Rails 8 migrations. If you’re adopting Solid Queue or moving off Sidekiq, reach out at nikita.sinenko@gmail.com.
Further Reading
- Migrating from Sidekiq to Solid Queue
- Solid Cache in Rails 8: Database-Backed Caching
- How to Deploy Rails 8 Apps with Kamal to a VPS
- Rails 8 Game-Changing Features
- How to Integrate Silverfin API - using Solid Queue for API sync workers
- Solid Queue GitHub Repository
- Mission Control - Jobs
- Rails 8.0 Release Notes