
Solid Queue in Rails 8: Setup, Recurring Jobs, and Production Tuning

Ruby on Rails, Rails 8, Solid Queue, Background Jobs, Performance, PostgreSQL

A practical guide to Solid Queue—Rails 8's DB-backed job runner. Learn setup, recurring jobs, concurrency limits, and monitoring with Mission Control.

Rails 8 ships with jobs that run on your database—no Redis required. After running Solid Queue in production for several months across multiple applications, I want to share how to set it up, when to trust it, and where the edges are.

For solo developers and small teams building SaaS applications, this is transformative. You can eliminate Redis from your stack for most use cases while gaining robust background job processing with built-in monitoring. Here’s everything you need to know.

What Solid Queue Is (and Isn’t)

Solid Queue is a database-backed Active Job backend that ships with Rails 8. It gives you:

  • Delayed jobs - Schedule jobs for future execution
  • Recurring jobs - cron-like scheduling without cron
  • Concurrency control - Limit simultaneous jobs by type or arguments
  • Priority queues - Process critical jobs first
  • Built-in monitoring - Dashboard via Mission Control — Jobs
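
All of this sits behind the standard Active Job API, so job classes and call sites don't change. A quick sketch (WelcomeEmailJob is a hypothetical job name):

# Enqueue immediately
WelcomeEmailJob.perform_later(user.id)

# Delay execution
WelcomeEmailJob.set(wait: 1.hour).perform_later(user.id)
WelcomeEmailJob.set(wait_until: Date.tomorrow.noon).perform_later(user.id)

# Pick a queue and priority per invocation
WelcomeEmailJob.set(queue: :critical, priority: 10).perform_later(user.id)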

Why This Matters vs Redis-Backed Systems

For well over a decade, background jobs in Rails have meant Sidekiq + Redis. That’s a proven pattern, but it comes with operational overhead:

Traditional Stack (Sidekiq + Redis):

  • Extra service to manage, monitor, and secure
  • Redis cluster for high availability
  • Memory management and eviction policies
  • Network latency between Rails and Redis
  • Additional monthly costs ($50-150+ for managed Redis)

Solid Queue Stack:

  • Uses your existing PostgreSQL database
  • Same connection pool, same backups
  • Transactional integrity with your app data
  • One less service to deploy and monitor
  • Simpler infrastructure
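
The transactional integrity point deserves a concrete sketch. Because the job row lives in the same PostgreSQL database as your data, an enqueue inside a transaction only takes effect if that transaction commits (Order and OrderConfirmationJob are illustrative names, not from this article):

ActiveRecord::Base.transaction do
  order = Order.create!(total_cents: 19_99)
  # If the transaction rolls back, this job never exists: no "confirmation
  # email for an order that was never saved" window.
  OrderConfirmationJob.perform_later(order.id)
end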

When Solid Queue Shines

After running Solid Queue in production, here’s where it excels:

  1. SaaS applications with moderate job volumes (< 1000 jobs/minute)
  2. FinTech apps needing transactional job enqueueing
  3. Startups wanting to minimize infrastructure complexity
  4. Internal tools where operational simplicity matters
  5. Apps already using PostgreSQL (no additional databases needed)

What Solid Queue Isn’t

Be realistic about limitations:

  • Not a Redis replacement for caching - Use Solid Cache for that
  • Not for ultra-high throughput - Sidekiq+Redis is faster for millions of jobs/day
  • Not a message bus - Use ActionCable or Kafka for pub/sub patterns

Architecture in Two Minutes

Understanding Solid Queue’s design helps you use it effectively.

Three Core Actors

1. Workers - Process jobs from queues

# Each worker polls a queue
worker_1: polling "critical" queue
worker_2: polling "default" queue
worker_3: polling "mailers" queue

2. Dispatcher - Moves jobs whose scheduled time has arrived into the ready set, where workers can pick them up, and releases jobs held back by concurrency limits

3. Scheduler - Enqueues recurring jobs at specified times

How Polling Works

Solid Queue uses PostgreSQL’s SKIP LOCKED feature:

-- Multiple workers can poll simultaneously without blocking
SELECT * FROM solid_queue_jobs
WHERE queue_name = 'default'
  AND scheduled_at <= NOW()
ORDER BY priority ASC, scheduled_at ASC -- lower priority value runs first
LIMIT 1
FOR UPDATE SKIP LOCKED;

This prevents thundering herd problems and ensures each job is claimed by only one worker at a time.

Single vs Separate Databases

Single Database (default):

  • Simplest setup
  • Jobs share connection pool with app queries
  • Works great for most applications

Separate Queue Database:

  • Isolates job processing from app queries
  • Useful for high-traffic apps
  • Prevents job processing from blocking user requests

# config/database.yml
production:
  primary:
    <<: *default
    database: myapp_production

  queue:
    <<: *default
    database: myapp_queue_production
    migrations_paths: db/queue_migrate

I recommend starting with a single database. Move to separate databases only if you see connection pool exhaustion or slow queries caused by job processing.
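
One signal worth watching before you split databases is Active Record's pool statistics. A quick console check (no extra tooling assumed):

# Connection pool stats for the primary database. Sustained waiting > 0
# under load suggests job processing is starving web requests.
ActiveRecord::Base.connection_pool.stat
# => {size: 25, connections: 25, busy: 24, dead: 0, idle: 1, waiting: 3, checkout_timeout: 5.0}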

Getting Started (Rails 8 New App)

Rails 8 includes Solid Queue by default: new apps come with it configured as the production Active Job backend, backed by a dedicated queue database defined in config/database.yml.

# Create new Rails 8 app
rails new myapp

# Solid Queue is already configured for production
# db:prepare creates the databases and loads db/queue_schema.rb
bin/rails db:prepare

# Start job processor
bin/jobs

Adding to Existing Rails Apps

If you’re upgrading an existing app:

# Add to Gemfile
gem 'solid_queue'

# Install
bundle install
bin/rails solid_queue:install

# This creates:
# - config/queue.yml and config/recurring.yml
# - db/queue_schema.rb
# - bin/jobs
# and points production's Active Job adapter at Solid Queue

# Load the queue schema (after adding a queue database to config/database.yml,
# or pointing Solid Queue at your primary database)
bin/rails db:prepare

Basic Configuration

# config/queue.yml
production:
  dispatchers:
    - polling_interval: 1
      batch_size: 500

  workers:
    - queues: critical
      threads: 5
      processes: 2
      polling_interval: 0.1

    - queues: default
      threads: 3
      processes: 3
      polling_interval: 1

    - queues: low_priority
      threads: 2
      processes: 1
      polling_interval: 5

Key settings:

  • threads - Concurrent jobs per process
  • processes - Number of worker processes per queue
  • polling_interval - How often to check for new jobs (seconds)

Development vs Production

# config/queue.yml
development:
  workers:
    - queues: "*"  # Process all queues
      threads: 1
      processes: 1
      polling_interval: 2

production:
  workers:
    # Separate workers per queue for better control
    - queues: critical
      threads: 5
      processes: 2
    - queues: [default, mailers]
      threads: 3
      processes: 3

Running Jobs in Development

# Terminal 1: Rails server
bin/rails server

# Terminal 2: Job processor
bin/jobs

Or use the Puma plugin to run jobs in the same process:

# config/puma.rb
plugin :solid_queue

# Now jobs run automatically with Puma
# Great for development, be cautious in production

Recurring Jobs Without Cron

One of Solid Queue’s killer features is built-in recurring jobs.

Basic Recurring Job

# config/recurring.yml
production:
  send_daily_summary:
    class: DailySummaryJob
    schedule: every day at 9am

  cleanup_old_sessions:
    class: SessionCleanupJob
    schedule: every 1 hour

  process_subscriptions:
    class: SubscriptionChargeJob
    schedule: every day at 2am
    queue: critical

  generate_reports:
    class: ReportGenerationJob
    schedule: "0 */4 * * *"  # Every 4 hours (cron syntax)

Idempotency Matters

Recurring jobs may run multiple times due to retries or scheduler issues. Make them idempotent:

# app/jobs/daily_summary_job.rb
class DailySummaryJob < ApplicationJob
  queue_as :default

  def perform
    today = Date.current

    # Only process if not already done today
    return if DailySummary.exists?(date: today, status: 'completed')

    # Create a record to track execution
    summary = DailySummary.create!(date: today, status: 'processing')

    begin
      # Generate summary
      users = User.active.includes(:transactions)
      data = generate_summary_data(users)

      # Save results
      summary.update!(
        data: data,
        status: 'completed',
        completed_at: Time.current
      )
    rescue => e
      summary.update!(status: 'failed', error: e.message)
      raise  # Re-raise to trigger retry
    end
  end
end

Recurring Jobs with Arguments

# config/recurring.yml
production:
  sync_user_data:
    class: UserSyncJob
    args: [{ force: true }]
    schedule: every 6 hours

  send_notifications:
    class: NotificationJob
    args: ["daily_digest"]
    schedule: every day at 8am

Real-World Example: FinTech Reconciliation

# app/jobs/transaction_reconciliation_job.rb
class TransactionReconciliationJob < ApplicationJob
  queue_as :critical
  retry_on StandardError, wait: :polynomially_longer, attempts: 3

  def perform
    date = Date.current - 1.day

    # Skip if already reconciled
    return if Reconciliation.completed_for_date?(date)

    reconciliation = Reconciliation.create!(
      date: date,
      status: 'in_progress'
    )

    Transaction.unreconciled.find_each(batch_size: 500) do |transaction|
      ReconciliationService.process(transaction)
    end

    reconciliation.update!(
      status: 'completed',
      completed_at: Time.current
    )
  end
end

# config/recurring.yml
production:
  reconcile_transactions:
    class: TransactionReconciliationJob
    schedule: every day at 1am
    queue: critical

Retry Behavior

Important: Solid Queue doesn’t handle retries itself. That’s Active Job’s responsibility:

class MyJob < ApplicationJob
  # Active Job retry configuration
  retry_on TimeoutError, wait: 5.minutes, attempts: 3
  retry_on ApiError, wait: :polynomially_longer, attempts: 5

  discard_on ActiveRecord::RecordNotFound

  def perform(user_id)
    # Job logic
  end
end

Concurrency Controls That Actually Help

Solid Queue’s concurrency controls prevent race conditions and resource exhaustion.

Limit Jobs by Type

# app/jobs/report_generation_job.rb
class ReportGenerationJob < ApplicationJob
  queue_as :default

  # Only 3 report jobs can run simultaneously
  limits_concurrency to: 3, key: "report_generation"

  def perform(user_id, report_type)
    # Generate CPU-intensive report
    user = User.find(user_id)
    ReportGenerator.create(user, report_type)
  end
end

Limit by Arguments (Per-Resource)

# app/jobs/invoice_export_job.rb
class InvoiceExportJob < ApplicationJob
  queue_as :default

  # Only 1 invoice export per account at a time
  limits_concurrency to: 1, key: -> (account_id) { "invoice_export_#{account_id}" }

  def perform(account_id)
    account = Account.find(account_id)

    # This can take several minutes
    InvoiceExporter.generate_all(account)
  end
end

Why this matters: Without concurrency limits, enqueueing 100 invoice exports for the same account could cause:

  • Database lock contention
  • Race conditions
  • Wasted processing

With limits, subsequent jobs wait until the first completes.
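
For example, enqueueing several exports for the same account simply queues them up behind the concurrency key (account here is a hypothetical record):

# Only one of these runs at a time per account; the rest sit as blocked
# executions until the lock is released.
5.times { InvoiceExportJob.perform_later(account.id) }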

Real-World Example: Payment Processing

# app/jobs/payment_processor_job.rb
class PaymentProcessorJob < ApplicationJob
  queue_as :critical

  # Only 1 payment per user at a time (prevent double-charging)
  limits_concurrency to: 1, key: -> (transaction_id) {
    transaction = Transaction.find(transaction_id)
    "payment_processing_user_#{transaction.user_id}"
  }

  retry_on PaymentGateway::TemporaryError,
           wait: :polynomially_longer,
           attempts: 5

  discard_on PaymentGateway::CardDeclined

  def perform(transaction_id)
    transaction = Transaction.find(transaction_id)

    # Process payment with gateway
    result = PaymentGateway.charge(
      amount: transaction.amount,
      token: transaction.payment_token
    )

    transaction.update!(
      status: 'completed',
      gateway_transaction_id: result.id
    )

    # Enqueue follow-up jobs
    SendReceiptJob.perform_later(transaction.id)
    UpdateAccountingJob.perform_later(transaction.id)
  end
end

Concurrency with Expiry

class ApiSyncJob < ApplicationJob
  # Limit to 5 concurrent, expire lock after 10 minutes
  limits_concurrency to: 5,
                     key: -> { "api_sync" },
                     duration: 10.minutes

  def perform
    # Sync data from external API
  end
end

Failure Handling & Manual Re-enqueue

Failed jobs stay in the database for inspection:

# In Rails console
failed = SolidQueue::FailedExecution.last

# Inspect the error (class, message, backtrace)
failed.error

# Fix data and retry
failed.retry

# Or discard permanently
failed.discard
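
To re-enqueue a batch of failures at once, say after fixing a bad deploy, you can iterate the failed executions. A minimal sketch:

# Retry every failed execution; each retry re-enqueues the underlying job
SolidQueue::FailedExecution.find_each(&:retry)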

Observability & Operations

Mission Control — Jobs

Mission Control — Jobs is the companion web dashboard for Solid Queue. Add the gem and mount it for monitoring:

# Gemfile
gem 'mission_control-jobs'

# Mount in routes
Rails.application.routes.draw do
  mount MissionControl::Jobs::Engine, at: "/jobs"
end

Visit /jobs to see:

  • Active jobs - Currently processing
  • Scheduled jobs - Waiting to run
  • Failed jobs - With errors and backtraces
  • Recurring jobs - Schedule and last run
  • Queue stats - Throughput and latency

Dashboard Features

Retry/Discard Actions:

# From the UI, you can:
# - Retry failed jobs individually or in bulk
# - Discard jobs that shouldn't retry
# - View full error traces
# - Inspect job arguments

Queue Inspection:

  • See pending job counts per queue
  • Identify backlog issues
  • Monitor job processing rates
  • Track average execution time
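
The same numbers are available from the console if you want them in scripts or alerts. A sketch against Solid Queue's own models:

# Jobs waiting to run, grouped by queue
SolidQueue::ReadyExecution.group(:queue_name).count
# => {"critical"=>0, "default"=>12, "mailers"=>3}

# Jobs scheduled for the future
SolidQueue::ScheduledExecution.count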

Authentication

Protect your dashboard in production:

# config/routes.rb
# With Devise, restrict access to admins
authenticate :user, ->(user) { user.admin? } do
  mount MissionControl::Jobs::Engine, at: "/jobs"
end

# Or with the built-in HTTP basic auth
# config/environments/production.rb
config.mission_control.jobs.http_basic_auth_user = ENV["JOBS_USERNAME"]
config.mission_control.jobs.http_basic_auth_password = ENV["JOBS_PASSWORD"]

AppSignal Integration

For production-grade monitoring:

# Gemfile
gem 'appsignal'

# config/initializers/appsignal.rb
Appsignal.configure do |config|
  config.active = true
  config.push_api_key = ENV['APPSIGNAL_PUSH_API_KEY']
end

AppSignal automatically tracks:

  • Job execution time
  • Failure rates
  • Queue depth
  • Error details

Set up alerts:

  • Notify when job queue depth > 1000
  • Alert on job failure rate > 5%
  • Warn if job execution time > 5 minutes

Production Readiness Checklist

Before deploying Solid Queue to production, verify:

1. Database Setup

# Use separate queue database (recommended for high traffic)
# config/database.yml
production:
  primary:
    database: myapp_production
  queue:
    database: myapp_queue_production
    migrations_paths: db/queue_migrate

Then point Solid Queue at it (Rails 8 generates this line for new apps):

# config/environments/production.rb
config.solid_queue.connects_to = { database: { writing: :queue } }

2. Indexes

Solid Queue’s migrations include necessary indexes, but verify:

# db/queue_schema.rb should include (among others):
# - solid_queue_ready_executions: index on [queue_name, priority, job_id]
# - solid_queue_scheduled_executions: index on [scheduled_at, priority, job_id]
# - solid_queue_semaphores: unique index on key (for concurrency control)
# - solid_queue_jobs: index on active_job_id
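
A quick way to eyeball what actually got created is to list the indexes from a Rails console. This assumes SolidQueue::Record, the abstract base class Solid Queue's models inherit from, so it hits the right database in a multi-DB setup:

conn = SolidQueue::Record.connection
conn.tables.grep(/\Asolid_queue_/).each do |table|
  puts table
  conn.indexes(table).each { |index| puts "  #{index.name} (#{index.columns.join(', ')})" }
end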

3. Worker Thread Counts

Match your workload:

# config/queue.yml
production:
  workers:
    # CPU-intensive jobs: fewer threads
    - queues: reports
      threads: 2
      processes: 2

    # I/O-bound jobs: more threads
    - queues: [mailers, api_calls]
      threads: 10
      processes: 2

    # Mixed: moderate threads
    - queues: default
      threads: 5
      processes: 3

Rule of thumb:

  • CPU-intensive: threads ≤ CPU cores
  • I/O-bound: threads = 2-5x CPU cores
  • Mixed: threads = 1-2x CPU cores
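
A rough way to turn those ratios into numbers for a given box (purely illustrative):

require "etc"

cores = Etc.nprocessors
puts "CPU-bound queues: up to #{cores} threads per process"
puts "I/O-bound queues: #{cores * 2}-#{cores * 5} threads per process"
puts "Mixed queues: #{cores}-#{cores * 2} threads per process"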

4. Graceful Shutdown

Ensure jobs complete before restart:

# If you run jobs via the Puma plugin, the plugin stops the Solid Queue
# supervisor when Puma shuts down, so no extra hooks are needed.

# If bin/jobs runs as its own process, your process manager sends SIGTERM
# on deploys; give in-flight jobs time to finish before a forced stop:
# config/environments/production.rb
config.solid_queue.shutdown_timeout = 30.seconds

5. Health & Readiness Probes

# config/routes.rb
get '/health/jobs', to: 'health#jobs'

# app/controllers/health_controller.rb
class HealthController < ApplicationController
  def jobs
    # Worker processes register in solid_queue_processes and heartbeat regularly
    active_workers = SolidQueue::Process
      .where(kind: "Worker")
      .where("last_heartbeat_at > ?", 5.minutes.ago)
      .count

    # Jobs waiting to be picked up on the critical queue
    critical_depth = SolidQueue::ReadyExecution.where(queue_name: 'critical').count

    if active_workers > 0 && critical_depth < 1000
      render json: { status: 'ok' }, status: :ok
    else
      render json: {
        status: 'unhealthy',
        active_workers: active_workers,
        critical_queue_depth: critical_depth
      }, status: :service_unavailable
    end
  end
end

6. Rolling Restarts

For zero-downtime deploys:

# Kamal config
# config/deploy.yml
service: myapp

servers:
  web:
    - 192.168.1.1
  jobs:
    hosts:
      - 192.168.1.2
    cmd: bin/jobs

accessories:
  postgres:
    image: postgres:16
    host: 192.168.1.1

Or with systemd:

# /etc/systemd/system/solid-queue.service
[Unit]
Description=Solid Queue Worker
After=network.target

[Service]
Type=simple
User=deploy
WorkingDirectory=/var/www/myapp
Environment=RAILS_ENV=production
ExecStart=/usr/local/bin/bundle exec bin/jobs
Restart=on-failure
KillMode=mixed
TimeoutStopSec=60

[Install]
WantedBy=multi-user.target

7. Backups

Your job queue is in PostgreSQL, so:

# Backup includes jobs
pg_dump myapp_production > backup.sql

# Or separate queue database
pg_dump myapp_queue_production > queue_backup.sql

8. Known Gotchas

Based on production experience and GitHub issues:

Connection Pool Exhaustion:

# Ensure pool size accommodates workers
# config/database.yml
production:
  queue:
    pool: <%= ENV.fetch("SOLID_QUEUE_POOL_SIZE", 25) %>

Long-Running Jobs:

# Jobs holding DB connections for hours
# Use streaming or break into smaller jobs
class HugeReportJob < ApplicationJob
  def perform
    User.find_each(batch_size: 100) do |user|
      ProcessUserReportJob.perform_later(user.id)
    end
  end
end

Clock Drift:

# Ensure servers are time-synced
# Use NTP or cloud provider time sync

When Not to Use Solid Queue

Be honest about limitations. Consider alternatives if:

1. Ultra-High Throughput

Scenario: Processing millions of jobs per day with strict latency requirements

Numbers:

  • Solid Queue: ~1000-2000 jobs/minute (depends on job complexity)
  • Sidekiq: ~10,000+ jobs/minute

Solution: Use Sidekiq + Redis for high-volume queues

2. Hybrid Pattern

Keep Solid Queue for most jobs, isolate firehose queues to Redis:

# app/jobs/application_job.rb
class ApplicationJob < ActiveJob::Base
  # Most jobs use Solid Queue (default)
end

# app/jobs/high_volume_job.rb
class HighVolumeJob < ApplicationJob
  self.queue_adapter = :sidekiq  # Override the adapter for this job class
  queue_as :firehose

  def perform(event_data)
    # Process high-volume events
  end
end

# config/environments/production.rb
# Solid Queue stays the app-wide default; high-volume jobs override the
# adapter per class (as above). There is no separate per-queue adapter file.
config.active_job.queue_adapter = :solid_queue

3. Real-Time Requirements

If you need sub-100ms job latency, Redis will be faster:

  • Solid Queue: workers poll; the default polling_interval for hot queues is 100ms
  • Redis/Sidekiq: near-instant, workers block on the queue rather than polling

4. Specialized Job Features

Sidekiq Pro/Enterprise offers:

  • Batch job tracking
  • Rate limiting
  • Unique jobs (no duplicates)
  • Web throttling

Solid Queue is simpler but less feature-rich.

Real-World Migration: The Numbers

I recently migrated a SaaS application from Sidekiq to Solid Queue. Here’s what changed:

Before (Sidekiq + Redis)

Infrastructure:

  • Rails app on 2x $40/month servers
  • Redis cluster: $95/month (managed)
  • Sidekiq workers: Shared with Rails processes

Performance:

  • Job processing: ~500 jobs/minute
  • Average job latency: 50ms (enqueue to start)
  • Monthly costs: $175 (servers + Redis)

Operational complexity:

  • Redis monitoring and alerts
  • Redis backup management
  • Connection pool tuning for both Postgres and Redis
  • Separate Sidekiq configuration

After (Solid Queue)

Infrastructure:

  • Rails app on 2x $40/month servers
  • PostgreSQL: Already included
  • Solid Queue workers: Integrated with Rails

Performance:

  • Job processing: ~500 jobs/minute (same workload)
  • Average job latency: 150ms (slightly higher, acceptable)
  • Monthly costs: $80 (just servers)

Operational complexity:

  • Single database to monitor
  • Unified backup strategy
  • One connection pool to tune
  • Built-in Mission Control dashboard

Savings:

  • $95/month (54% reduction)
  • -1 service to manage
  • +Better developer experience (simpler stack)

Trade-off:

  • 100ms higher job latency (not impactful for this app)
  • Lower theoretical throughput (not reached in practice)

Copy-Paste Snippets

ApplicationJob with Retry Semantics

# app/jobs/application_job.rb
class ApplicationJob < ActiveJob::Base
  # Automatically retry jobs that raised StandardError
  retry_on StandardError, wait: :polynomially_longer, attempts: 5

  # Discard jobs that raised these exceptions
  discard_on ActiveJob::DeserializationError
  discard_on ActiveRecord::RecordNotFound

  # Log job lifecycle
  before_perform do |job|
    Rails.logger.info "Starting job: #{job.class.name} with #{job.arguments}"
  end

  after_perform do |job|
    Rails.logger.info "Completed job: #{job.class.name}"
  end

  # Report errors without swallowing them. A rescue_from(StandardError) here
  # would shadow retry_on, so report from an around_perform and re-raise to
  # let retry_on/discard_on decide what happens next.
  around_perform do |job, block|
    block.call
  rescue StandardError => exception
    Rails.logger.error "Job failed: #{exception.message}"
    Rails.error.report(exception, handled: false, context: {
      job_class: job.class.name,
      job_arguments: job.arguments
    })
    raise
  end
end

Example recurring.yml

# config/recurring.yml
production:
  # Send daily summary emails
  daily_summary:
    class: DailySummaryJob
    schedule: every day at 9am
    queue: mailers

  # Cleanup old records
  cleanup_old_sessions:
    class: SessionCleanupJob
    schedule: every 1 hour

  # Process subscription charges
  process_subscriptions:
    class: SubscriptionChargeJob
    schedule: every day at 2am
    queue: critical
    args: [{ force: false }]

  # Generate reports
  generate_weekly_reports:
    class: WeeklyReportJob
    schedule: every monday at 6am

  # Sync with external API
  sync_external_data:
    class: ExternalApiSyncJob
    schedule: "*/15 * * * *"  # Every 15 minutes (cron syntax)

  # Database maintenance
  vacuum_database:
    class: DatabaseMaintenanceJob
    schedule: every day at 3am
    queue: low_priority

Sample config/queue.yml Worker Topology

# config/queue.yml
production:
  dispatchers:
    - polling_interval: 1
      batch_size: 500
      concurrency_maintenance_interval: 300

  workers:
    # Critical queue: High priority, fast polling
    - queues: critical
      threads: 5
      processes: 2
      polling_interval: 0.1  # 100ms

    # Default queue: Moderate resources
    - queues: default
      threads: 3
      processes: 3
      polling_interval: 1

    # Mailers: I/O bound, many threads
    - queues: mailers
      threads: 10
      processes: 2
      polling_interval: 2

    # Reports: CPU intensive, fewer threads
    - queues: reports
      threads: 2
      processes: 1
      polling_interval: 5

    # Low priority: Minimal resources
    - queues: low_priority
      threads: 1
      processes: 1
      polling_interval: 10

development:
  workers:
    - queues: "*"
      threads: 1
      processes: 1
      polling_interval: 2

Puma Plugin Toggle for Dev/Prod

# config/puma.rb
# Rails 8's generated puma.rb ships with an env-var toggle for this:
plugin :solid_queue if ENV["SOLID_QUEUE_IN_PUMA"]

# Or gate it on the environment yourself (RAILS_ENV may be unset in dev,
# so fetch with a default):
if ENV.fetch("RAILS_ENV", "development") == "development"
  # In development, run jobs in the same process
  plugin :solid_queue
else
  # In production, run jobs in a separate process
  # (started via systemd, Docker, or Kamal)
end

Cost & Ops Math: The VPS Deploy

Here’s the economics that make Solid Queue compelling for solopreneurs and SMBs.

Traditional Stack (Sidekiq + Redis)

Managed Infrastructure (Heroku/Render):

  • Web dyno: $25/month
  • Worker dyno: $25/month
  • Postgres: $50/month
  • Redis: $95/month
  • Total: $195/month

Self-Managed VPS:

  • App server: $80/month (4 CPU, 8GB RAM)
  • Redis server: $40/month (2 CPU, 4GB RAM)
  • Total: $120/month + management time

Solid Queue Stack

Managed Infrastructure:

  • Web+Worker dyno: $40/month (combined)
  • Postgres: $50/month
  • Total: $90/month (54% savings)

Self-Managed VPS (with Kamal):

  • App server: $80/month (runs web + jobs)
  • Total: $80/month (33% savings)

Additional savings:

  • No Redis monitoring costs
  • No Redis backup costs
  • Simpler deployment (fewer moving parts)
  • Faster iteration (one less service to update)

Kamal Deploy Example

# config/deploy.yml
service: myapp

image: username/myapp

servers:
  web:
    hosts:
      - 192.168.1.1
    options:
      network: "private"

  jobs:
    cmd: bin/jobs
    hosts:
      - 192.168.1.1
    options:
      network: "private"

registry:
  username: username
  password:
    - KAMAL_REGISTRY_PASSWORD

env:
  secret:
    - DATABASE_URL
    - SECRET_KEY_BASE

accessories:
  postgres:
    image: postgres:16
    host: 192.168.1.1
    port: 5432
    env:
      secret:
        - POSTGRES_PASSWORD
    directories:
      - data:/var/lib/postgresql/data

# Deploy everything with one command
kamal deploy

# Zero downtime, automatic rollback on failure
# Web + jobs + database all managed

Latency Real-Talk

Let’s be honest about performance trade-offs.

Round-Trip Costs

Redis (in-memory):

  • Enqueue: 1-5ms
  • Poll: < 1ms (pub/sub)
  • Job start latency: 5-10ms

Solid Queue (PostgreSQL):

  • Enqueue: 5-15ms (DB write)
  • Poll: 100ms-5s (configurable)
  • Job start latency: 100ms-5s

Why SKIP LOCKED Matters

-- Without SKIP LOCKED (old approach)
SELECT * FROM jobs WHERE queue = 'default' LIMIT 1 FOR UPDATE;
-- Workers block waiting for lock, poor performance

-- With SKIP LOCKED (Solid Queue)
SELECT * FROM jobs WHERE queue = 'default' LIMIT 1 FOR UPDATE SKIP LOCKED;
-- Workers never block, each gets different job

This single feature makes database-backed queues viable for production.

When Latency Matters

100ms latency is fine for:

  • Email sending (users won’t notice)
  • Report generation (already takes minutes)
  • Data syncing (background task)
  • Cleanup jobs (periodic maintenance)

100ms latency is problematic for:

  • Real-time notifications (use ActionCable)
  • Payment processing feedback (critical path)
  • User-facing workflows (should be synchronous)

Solution: Keep latency-critical jobs in Redis, everything else in Solid Queue.

Toolkit Pairing: The Rails 8 Batteries-Included Story

Solid Queue + Solid Cache + Mission Control = cohesive, simple infrastructure.

The Trinity

Solid Queue - Background jobs

# Replace Sidekiq + Redis
PaymentProcessorJob.perform_later(transaction_id)

Solid Cache - Application caching

# Replace Rails.cache + Redis
Rails.cache.fetch("user_#{user.id}_stats", expires_in: 1.hour) do
  expensive_calculation
end

Mission Control - Monitoring dashboard

# Replace Sidekiq Web + Redis Commander
mount MissionControl::Jobs::Engine, at: "/jobs"

The Result

Before Rails 8:

  • Rails app
  • Redis (Sidekiq)
  • Redis (Cache)
  • Sidekiq Web
  • Redis Commander
  • Separate monitoring

After Rails 8:

  • Rails app
  • PostgreSQL
  • Mission Control (built-in)
  • Unified monitoring

The Bottom Line

After months of running Solid Queue in production, here’s my take:

Solid Queue is the right choice if you:

  • Run a typical SaaS, e-commerce, or FinTech application
  • Process < 1000 jobs/minute
  • Value operational simplicity
  • Want to minimize infrastructure costs
  • Are building with a small team
  • Use PostgreSQL already

Stick with Sidekiq + Redis if you:

  • Process millions of jobs per day
  • Need sub-100ms job latency
  • Require Sidekiq Pro/Enterprise features
  • Have existing Redis infrastructure
  • Need proven scalability for massive throughput

For most applications—including production FinTech apps processing millions in transactions—Solid Queue provides the perfect balance of simplicity and capability.

What’s Next?

In my next post, I’ll cover migrating from Sidekiq to Solid Queue, including:

  • Job adapter compatibility
  • Data migration strategies
  • Zero-downtime cutover
  • Performance monitoring
  • Rollback plans

Subscribe or check back soon!


Building a Rails 8 application or considering Solid Queue for your project? I’m available for consulting and development work. With 15+ years of Rails experience and deep expertise in background job architectures, I can help you design and deploy production-ready job processing systems.

Based in Dubai, working with clients worldwide.

Let’s discuss your project: nikita.sinenko@gmail.com
