Solid Cache in Rails 8: Database-Backed Caching Guide
Replace Redis with Solid Cache in Rails 8 for simpler ops and larger cache capacity. Setup, configuration, production gotchas, and when to keep Redis.
Caching turns a 400ms database query into a 2ms read. It makes a $10/month server handle traffic that would otherwise need a $100/month one.
Done badly, caching creates invisible bugs, stale data, and the kind of performance cliffs that page you at 2am.
Rails has always treated caching as a first-class concern. Rails 8 makes a sharper statement: caching should be good out of the box, operationally boring, and not require an extra infrastructure dependency.
That is the spirit behind Solid Cache.
TL;DR
- Solid Cache is a database-backed cache store for Rails.cache (and fragment caching), designed to work well on modern SSD-backed databases.
- Rails 8 enables Solid Cache by default in new apps, as part of the “Solid” trio (Cache, Queue, Cable).
- It trades some raw latency versus pure in-memory stores for simpler ops, larger cache capacity, and often better real-world hit rates.
- The main risk is obvious: you are moving cache IO into your database. If your database is already the bottleneck, this can make things worse unless you isolate or tune it.
This post is a practical guide: how Solid Cache works, how to configure it, and what to watch in production.
Solid Cache vs Redis vs Memcached
Before diving into specifics, here’s how Solid Cache compares to the alternatives:
| Feature | Solid Cache | Redis | Memcached |
|---|---|---|---|
| Read latency | 1-5ms (SSD) | 0.1-0.5ms (RAM) | 0.1-0.5ms (RAM) |
| Max cache size | Disk-limited (100GB+) | RAM-limited (typically 1-16GB) | RAM-limited (typically 1-64GB) |
| Extra infrastructure | None (uses your DB) | Redis server required | Memcached server required |
| Persistence | Durable by default | Optional (RDB/AOF) | None (volatile) |
| Encryption support | Built-in (Active Record) | Redis 6+ TLS | No native encryption |
| Eviction strategy | FIFO (size/age-based) | LRU, LFU, TTL | LRU |
| Monthly cost (managed) | $0 (shared DB) or ~$15 (separate DB) | $15-50 (ElastiCache/Upstash) | $15-50 (ElastiCache) |
| Rails integration | Native (Rails 8 default) | redis-rails gem | dalli gem |
| Best for | Apps wanting simplicity, large caches | Sub-ms latency, pub/sub, data structures | Pure high-throughput caching |
The key insight: Solid Cache wins on operational simplicity and cache capacity. Redis wins on raw latency and feature breadth. For most Rails apps, the latency difference is invisible to users while the operational simplicity is not.
What Solid Cache is (and what it is not)
Solid Cache is an ActiveSupport::Cache store that persists cache entries in a database table using Active Record. You keep using Rails.cache.fetch, fragment caching, and Russian doll caching exactly as before - only the storage backend changes from RAM to disk.
The API surface is intentionally minimal:
- You configure config.cache_store = :solid_cache_store.
- You keep using Rails.cache.fetch, fragment caching, Russian doll caching, and collection caching exactly as before.
- Your cache is now durable on disk, instead of living in RAM inside Redis or Memcached.
Solid Cache is not trying to be a perfect replacement for every Redis usage. If you use Redis as:
- a pub/sub backbone,
- a shared coordination mechanism,
- a rate limiter,
- a distributed lock manager,
- a data structure store,
then you still need Redis (or an alternative) for those jobs.
Solid Cache is specifically about the cache store behind Rails caching APIs.
Why a database cache can make sense in 2025
A slightly slower cache that holds far more data often outperforms a faster cache that evicts too aggressively, because cache misses are expensive. Modern SSD-backed databases handle 50,000+ random reads per second, and production cache performance depends more on hit rate, eviction behavior, and operational overhead than on raw latency.
The traditional “RAM fast, disk slow” framing misses this real-world tradeoff.
Solid Cache leans into this: keep a bigger cache on disk, accept a small access-time penalty, and win overall by missing less and running fewer external services.
If you’re interested in why Rails remains a strong choice for modern applications, this focus on operational simplicity is a big part of it.
Rails 8 defaults and the “skip-solid” escape hatch
Rails 8 enables Solid Cache by default in new applications. Opt out with --skip-solid when generating a new app if you prefer Redis or Memcached. Solid Cache is a default, not a mandate - Rails still supports all cache store backends.
For more context on the broader Rails 8 feature set, including how Solid Cache fits alongside Solid Queue and Solid Cable, that post covers the full picture.
How Solid Cache behaves: eviction and retention
Solid Cache uses a FIFO (first in, first out) eviction strategy instead of Redis-style LRU. It tracks size and age limits, expiring old entries in batches when thresholds are hit. In practical terms:
- entries are written,
- the store tracks size and age limits,
- when thresholds are hit, it expires old entries in batches.
This is not as theoretically optimal as LRU for certain access patterns, but FIFO is simpler to manage and predictable, and a larger cache can compensate for less clever eviction.
From an engineering standpoint, what matters is not “is FIFO perfect”, but “is the system stable under load, and does it produce good hit rates for my application”.
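The behavior is easy to picture with a toy sketch in plain Ruby (illustrative only - Solid Cache tracks byte size and age and deletes in batches, but the insertion-order idea is the same):

```ruby
# Toy FIFO cache: once max_entries is exceeded, the oldest-written entry
# goes first. Illustrative only -- not Solid Cache's implementation.
class FifoCache
  def initialize(max_entries:)
    @max_entries = max_entries
    @store = {} # Ruby hashes preserve insertion order
  end

  def write(key, value)
    @store.delete(key) # re-writing a key moves it to the back of the queue
    @store[key] = value
    @store.delete(@store.keys.first) while @store.size > @max_entries
  end

  def read(key)
    @store[key] # unlike LRU, a read does NOT protect an entry from eviction
  end
end

cache = FifoCache.new(max_entries: 2)
cache.write(:a, 1)
cache.write(:b, 2)
cache.read(:a)     # a FIFO cache ignores this recent read
cache.write(:c, 3) # evicts :a, the oldest write, despite the read above
cache.read(:a)     # => nil
```

This is exactly why a larger max_size can compensate for less clever eviction: with enough room, entries rarely reach the front of the queue before they expire naturally.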
Installation and setup
If you are on Rails 8 already
Solid Cache is pre-configured in most Rails 8 apps. Verify three things before deploying: the database connection, the cache schema in production, and sane size/age limits.
A cache with no size limit will grow until it fills your disk. Set limits before deploying.
If you are upgrading an existing app (Rails 7.x or older)
Solid Cache can be added to older Rails apps.
At a high level:
bundle add solid_cache
bin/rails solid_cache:install
The installer configures Solid Cache as the production cache store and generates a cache configuration file (by default config/cache.yml).
It also creates a separate cache schema artifact depending on your schema format:
- db/cache_schema.rb for the Ruby schema format
- db/cache_structure.sql for the SQL schema format
After that, configure your database.yml to include a cache database (or connection) and run db:prepare in production to create the cache database and load its schema.
Configure the cache database (recommended)
Store cache entries in a separate database to isolate IO from your core OLTP traffic. This is the recommended setup for production - it keeps cache churn from affecting your primary database’s autovacuum and query planner behavior.
A typical production setup for Postgres might look like this:
# config/database.yml
production:
primary: &primary_production
adapter: postgresql
encoding: unicode
database: app_production
username: app
password: <%= ENV["APP_DATABASE_PASSWORD"] %>
cache:
<<: *primary_production
database: app_production_cache
migrations_paths: db/cache_migrate
Then in production Rails config:
# config/environments/production.rb
config.cache_store = :solid_cache_store
The “single database” setup (works, but understand the tradeoff)
Solid Cache can also use your primary database connection pool. In fact, if you do not specify database, databases, or connects_to, Solid Cache falls back to the ActiveRecord::Base connection pool.
This is convenient, but it comes with a very non-obvious behavior:
- cache reads and writes can participate in your application transactions.
That means in the presence of a wrapping transaction, a cache write might not behave like an independent side effect. This is not always bad, but you need to understand it.
If you want caching to be operationally independent and predictable, a separate cache database is the calmer option.
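A toy model makes the difference concrete (plain Ruby, not Solid Cache internals): when cache writes ride along inside the application's transaction, a rollback takes them down too, while writes on a dedicated connection stick regardless:

```ruby
# Toy model of cache writes inside vs outside an app transaction.
# Not Solid Cache code -- just the visibility semantics to reason about.
shared_cache   = {} # mimics cache rows written on the app's DB connection
separate_cache = {} # mimics a dedicated cache connection

def app_transaction(shared_cache)
  staged = {}                 # writes buffered inside the open transaction
  yield staged
  shared_cache.merge!(staged) # commit: staged writes become visible
rescue StandardError
  # rollback: staged cache writes vanish along with the app's own changes
end

app_transaction(shared_cache) do |staged|
  staged["user:1"] = "written on the shared connection"
  separate_cache["user:1"] = "written on its own connection"
  raise "simulated failure" # forces the transaction to roll back
end

shared_cache["user:1"]   # => nil: the cache write rolled back too
separate_cache["user:1"] # => "written on its own connection"
```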
Configuring cache limits: max_size and max_age
Set max_size and max_age before deploying to production - a cache with no limits will grow until it fills your disk. Solid Cache reads configuration from config/cache.yml (or config/solid_cache.yml), and supports:
- max_age: cap the age of the oldest entry (retention-style control)
- max_size: cap the total size of cached entries
- max_entries: cap the number of entries
- namespace: environment-based namespacing
A practical starting point looks like:
# config/cache.yml
default: &default
store_options:
namespace: <%= Rails.env %>
max_age: <%= 14.days.to_i %>
production:
database: cache
store_options:
<<: *default
max_size: <%= 10.gigabytes %>
How big should max_size be?
Start with a budget you are willing to pay for:
- If your cache DB has 50 GB of space, do not set max_size to 256 GB.
- If you are in a managed Postgres environment with expensive storage, be conservative.
- If your app is heavy on fragment caching, a larger cache can be worth it.
- If your app is heavy on fragment caching, a larger cache can be worth it.
The right answer is not theoretical. It is the cheapest number that reliably produces good hit rates.
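A quick back-of-envelope check shows what a given max_size buys you (all numbers below are hypothetical - measure your own average entry size):

```ruby
# Back-of-envelope sizing: how many entries fit in a given max_size?
# Both inputs are hypothetical; measure avg entry size from your own data.
avg_entry_bytes = 2_048          # e.g. a typical serialized fragment
max_size_bytes  = 10 * 1024**3   # a 10 GB max_size
entries = max_size_bytes / avg_entry_bytes
puts entries # => 5242880, roughly 5 million cacheable fragments
```

If five million entries is far more than your app can ever generate, max_age will be the binding limit instead, and you can budget less disk.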
Using Solid Cache day-to-day
Every Rails.cache.fetch, fragment cache, and Russian doll pattern works identically to Redis or Memcached - the API is unchanged. Switch your cache store and all existing cache code keeps working.
Low-level caching with Rails.cache.fetch
def expensive_dashboard_stats(company_id)
Rails.cache.fetch("dashboard:v1:company:#{company_id}", expires_in: 10.minutes) do
DashboardStatsQuery.new(company_id).call
end
end
A few principles that stay true regardless of cache backend:
- Put a version in your keys (v1, v2) so you can invalidate by changing code.
- Keep keys stable and explicit.
- Use expires_in for time-bounded staleness, even if you also use key-based invalidation.
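One lightweight way to enforce the first two points is a tiny key-builder module (a hypothetical convention for your own codebase, not a Rails API):

```ruby
# Hypothetical helper: builds stable, versioned cache keys in one place,
# so a shape change only needs a VERSION bump to invalidate old entries.
module CacheKeys
  VERSION = "v2" # bump to invalidate everything built with the old shape

  def self.dashboard(company_id)
    "dashboard:#{VERSION}:company:#{company_id}"
  end
end

CacheKeys.dashboard(42) # => "dashboard:v2:company:42"
```

Centralizing keys also makes it trivial to grep for every consumer of a cache entry when you change what it contains.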
Do not cache Active Record objects directly
Caching full model instances is a classic footgun. Attributes can change, records can be deleted, and serialization can surprise you.
Cache primitives:
ids = Rails.cache.fetch("super_admin_user_ids", expires_in: 12.hours) do
User.super_admins.pluck(:id)
end
User.where(id: ids).to_a
This keeps your cache resilient and easier to reason about.
Fragment caching still works the same
<% cache ["company-card", @company.cache_key_with_version] do %>
<%= render @company %>
<% end %>
If you do Russian doll caching, keep keys and dependencies deliberate. The cache backend does not save you from dependency mistakes.
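For instance, a Russian doll setup nests fragments so an inner change only rebuilds its own slice (hypothetical @company with a projects association):

```erb
<% cache ["company-card", @company.cache_key_with_version] do %>
  <h2><%= @company.name %></h2>
  <% @company.projects.each do |project| %>
    <% cache ["project-row", project.cache_key_with_version] do %>
      <%= render project %>
    <% end %>
  <% end %>
<% end %>
```

Without touch: true on the project's belongs_to :company, editing a project refreshes the inner fragment but leaves the outer company-card stale - exactly the dependency mistake no backend can fix for you.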
Expiration mechanics: threads vs jobs
Solid Cache expires entries in batches, and you control whether expiry runs in a background thread or via a background job by configuring expiry_method. Choose :job if you already run Solid Queue - it makes expiry visible and controllable through your job dashboard.
For details on setting up a robust job backend, see my Solid Queue guide - it pairs naturally with Solid Cache.
If you are allergic to adding any more job traffic, the default thread-based expiry can be fine. Just remember it still consumes resources on your app nodes.
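A sketch of what job-based expiry looks like in config/cache.yml (option names taken from the Solid Cache README - verify them against the version you have installed):

```yaml
# config/cache.yml (sketch -- confirm option names for your Solid Cache version)
production:
  database: cache
  store_options:
    max_size: <%= 10.gigabytes %>
    expiry_method: job     # run expiry via Active Job instead of a thread
    expiry_batch_size: 500 # how many candidate entries each run examines
```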
Encryption: when your cache contains sensitive data
Solid Cache supports built-in encryption via Active Record Encryption by setting encrypt: true in your cache config. This protects accidentally cached personal data in fragments - a common issue in Rails apps that Redis and Memcached do not address natively.
Example:
# config/cache.yml
production:
encrypt: true
Do not flip this switch blindly. Encryption adds CPU overhead and changes failure modes (bad keys, missing credentials, rotation issues). But for some apps it is worth it.
Production gotchas and risks
1) You are moving load to your database
Solid Cache shifts cache IO to your database. If your database is already the bottleneck, Solid Cache will make both caching and queries slower. Use a separate cache database or verify your primary database has headroom before switching.
Mitigation strategies:
- Use a separate cache database.
- Use a separate primary (or replica) for cache if your topology supports it.
- Put strict size and age limits in place.
- Measure database IO before and after.
For more on keeping your database healthy under load, see my post on PostgreSQL optimization techniques.
2) Autovacuum and table churn are real
Caches churn: writes, deletes, rewrites.
In Postgres, that means:
- bloat,
- vacuum pressure,
- IO spikes,
- sometimes surprising query planner behavior.
A dedicated cache database makes this easier to reason about. You can tune it specifically for churn without worrying about side effects on your core tables.
3) Transaction semantics can surprise you
If Solid Cache uses ActiveRecord::Base connection pool, cache reads and writes can be part of a wrapping transaction.
That can make some patterns behave differently than Redis, where cache writes are external side effects.
If you want caching to be independent of request transactions, configure a separate cache DB connection.
4) Cache key discipline matters more than the backend
A bigger cache can hide problems for a while.
But:
- unstable keys,
- keys that depend on mutable objects,
- missing versioning,
- overly granular keys,
will still create weirdness.
Solid Cache makes caching operationally easier. It does not make caching intellectually easier.
When I would choose Solid Cache
I would seriously consider Solid Cache when:
- I want a Rails 8 app that is easy to operate on a single database and a single server.
- I want to remove Redis as a dependency primarily used for caching.
- I expect fragment caching to be a big win and I want a large cache capacity.
- I want encryption support for cached values without building a custom system.
If you’re deploying a Rails 8 app and want to keep things simple, my guide on deploying with Kamal to a VPS shows how the Solid stack fits into a minimal production setup.
When I would not
I would avoid Solid Cache (or isolate it aggressively) when:
- the primary database is already the bottleneck,
- the app is extremely latency-sensitive and the cache hit path must be as close to RAM as possible,
- the cache workload is huge and spiky and could drown core OLTP traffic,
- the architecture is multi-region and relies on a shared cross-region cache for performance.
In those cases Redis (or Memcached) is still a very good tool. The goal is not ideological purity. The goal is predictable systems.
A practical adoption checklist
If you want to roll this out safely:
- Start in staging with production-like traffic replay if you can.
- Enable strict limits (max_size, max_age) before you ship to prod.
- Decide on isolation: separate cache DB vs shared pool.
- Measure DB impact: IO, latency, CPU, autovacuum activity.
- Track app metrics: cache hit rate (if available), request p95, DB time per request.
- Plan rollback: switching cache store back should be a config change, not a rewrite.
Caching is powerful precisely because it is optional. Treat it that way operationally too.
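The rollback step really is a config change - for example, swapping back to Redis (sketch; the REDIS_URL env var and bundled redis gem are assumptions):

```ruby
# config/environments/production.rb
# Rollback sketch: swap Solid Cache back out for Redis with one line.
# Assumes the redis gem is in your Gemfile and REDIS_URL is set.
config.cache_store = :redis_cache_store, { url: ENV.fetch("REDIS_URL") }
```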
Wrapping up
Solid Cache is Rails making a pragmatic bet: for many apps, the simplest operational story wins.
One less external service. One less moving part. A cache large enough to actually matter. And an API that stays familiar.
It is not magic. It shifts load. It introduces new database considerations. And it still demands discipline around keys and invalidation.
But it is a thoughtful default, and for a large category of Rails applications, it is a genuinely calmer way to get performance.
Need help with Rails caching or performance? I help teams with caching strategy, database optimization, and Rails 8 migrations. If you’re evaluating Solid Cache or tuning cache hit rates, reach out at nikita.sinenko@gmail.com.
Further Reading
- Solid Queue in Rails 8: Setup, Recurring Jobs, and Production Config
- How to Deploy Rails 8 Apps with Kamal to a VPS
- TimescaleDB with Rails: When to Use It and When to Avoid It
- Database Optimization Techniques in Rails
- Service Objects Are Not an Architecture - Data ownership and boundaries matter more than code organization
- Solid Cache GitHub Repository