Gold Lapel vs Redis
A Second Database Shouldn't Be the First Instinct
If you'll permit me a moment of candour — most teams reach for Redis to solve a problem their existing database could handle, if someone had taken the time to show them how.
The arrangement I find in most applications
Allow me to describe the architecture I encounter most often. Your application checks Redis first. Misses. Queries Postgres. Stores the result in Redis. Returns it. Every write requires manually invalidating the relevant Redis keys — a task that grows more complex with every relationship in your data model, and where a single missed invalidation produces stale data that is silent, persistent, and discovered at the worst possible moment.
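The pattern reads innocently enough in code. Here is a minimal sketch of cache-aside in Python — a plain dict stands in for the Redis client and a dict for the database, and the function and key names are mine, purely illustrative — but the shape is what you will find in most codebases:

```python
import json

cache = {}                           # stand-in for Redis (get/set/delete mirror the client API)
db = {1: ["order-17", "order-42"]}   # stand-in for a Postgres table
db_hits = 0                          # counts how often we actually reach the database

def get_user_orders(user_id):
    """Cache-aside read: check the cache first, fall back to the database."""
    global db_hits
    key = f"user:{user_id}:orders"
    cached = cache.get(key)
    if cached is not None:           # cache hit: no database round trip
        return json.loads(cached)
    db_hits += 1                     # cache miss: query the database
    rows = db[user_id]
    cache[key] = json.dumps(rows)    # store the result for next time
    return rows

def add_order(user_id, order):
    """Every write path must remember to invalidate the matching key."""
    db[user_id].append(order)
    cache.pop(f"user:{user_id}:orders", None)  # forget this line, and reads go stale

orders = get_user_orders(1)   # miss: hits the database
orders = get_user_orders(1)   # hit: served from the cache
add_order(1, "order-99")      # the write invalidates the key
orders = get_user_orders(1)   # miss again: sees the new order
```

Note that the invalidation in `add_order` is the fragile part: it must be repeated, correctly, in every write path that touches the data.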
You are maintaining two databases, writing invalidation logic by hand, debugging cache coherency issues, and paying for Redis infrastructure. All to avoid hitting a database that is perfectly capable of serving your queries quickly — if it had the right indexes and materialized views. Gold Lapel automates their creation and refresh.
This arrangement is understandable. Redis is fast, well-documented, and the tutorials all recommend it. But I would respectfully put one question to you: is Redis solving a performance problem, or compensating for one that could be addressed at the source?
What Gold Lapel provides
Gold Lapel sits between your application and Postgres as a transparent proxy. Your application sends the same queries it always has; Gold Lapel handles the caching automatically. No SET/GET calls, no manual invalidation, no Redis infrastructure to maintain.
For most developers, the quickest path is the language wrapper: pip install goldlapel, then call goldlapel.start() with your database URL. The wrapper manages the proxy process and returns a connection your application uses with any driver. If you prefer to run the proxy standalone, change your connection port from :5432 to :7932 — the result is the same.
Every query result is cached in Gold Lapel's local memory on first execution. The second time the same query arrives, it is served directly — no round-trip to Postgres. When a write touches a table, Gold Lapel automatically invalidates every cached result that references it. No application code needed. No cache keys to name, no TTLs to estimate, no invalidation calls to maintain across your codebase.
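To make the mechanism concrete, here is a toy model of table-aware invalidation — not Gold Lapel's implementation, which parses SQL properly inside the proxy, but the same idea in miniature. The table extraction here is naive word matching against a known schema, and `fake_postgres` is an illustrative stub:

```python
import re
from collections import defaultdict

KNOWN_TABLES = {"users", "orders"}   # assumed schema, for the sketch only
result_cache = {}                    # sql text -> cached result
table_readers = defaultdict(set)     # table name -> sql texts that read it

def tables_in(sql):
    # naive extraction: any known table name mentioned in the statement
    words = set(re.findall(r"[a-z_]+", sql.lower()))
    return KNOWN_TABLES & words

def execute(sql, run_on_postgres):
    if sql.lstrip().split()[0].upper() == "SELECT":
        if sql in result_cache:              # served straight from cache
            return result_cache[sql]
        result = run_on_postgres(sql)
        result_cache[sql] = result
        for table in tables_in(sql):         # remember the dependency
            table_readers[table].add(sql)
        return result
    # a write: drop every cached result that reads a touched table
    for table in tables_in(sql):
        for reader in table_readers.pop(table, set()):
            result_cache.pop(reader, None)
    return run_on_postgres(sql)

pg_calls = []
def fake_postgres(sql):                      # stub standing in for Postgres
    pg_calls.append(sql)
    return [("row",)]

q = "SELECT * FROM orders WHERE user_id = 1"
execute(q, fake_postgres)                               # miss: reaches Postgres
execute(q, fake_postgres)                               # hit: cache serves it
execute("UPDATE orders SET total = 5 WHERE id = 9", fake_postgres)  # invalidates
execute(q, fake_postgres)                               # miss: re-fetched fresh
```

The point of the sketch is where the bookkeeping lives: in the proxy, once, rather than scattered across every write path in the application.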
| Capability | Gold Lapel | Redis |
|---|---|---|
| Query caching | Automatic — every query cached on first hit | Manual SET/GET per query |
| Cache invalidation | Automatic — sees the write and invalidates instantly | Manual — you write invalidation calls after every write |
| Stale data risk | Minimal — every detected write invalidates the affected entries before the next read | Proportional to invalidation coverage — missed calls produce stale reads |
| Infrastructure | Zero — runs as a single binary alongside your app | Separate server, monitoring, failover, memory management |
| Code changes | gl = goldlapel.start(url) — then point your driver at gl.url | Every query needs cache logic (check, miss, store, invalidate) |
| Connection pooling | Built-in (session + transaction mode) | No |
| N+1 detection | Automatic detection + batch prefetch | No |
| Index creation | Automatic B-tree, trigram, expression, partial indexes | No |
| Query rewriting | Automatic materialized-view-based rewriting | No |
| Materialized views | Automatic creation and refresh | No |
The benchmark — if I may be specific
Gold Lapel applies three tiers of optimization, each building on the last:
- SQL optimizations — indexes and query rewrites that help Postgres process queries faster
- Materialized views — preprocessed results for fast data retrieval
- L1 native cache — in-process results served in microseconds, no network hop
An 8-table analytics query across 1.3 million rows, measured on the same system under identical conditions:
- Direct PostgreSQL: 12,000ms (twelve seconds)
- Redis cached: ~0.1ms (100 microseconds — one network round trip to the Redis server)
- Gold Lapel L1 cache: ~0.004ms (4 microseconds — served from in-process memory, no network hop)
I wish to be precise about why this difference exists, because it is not a question of better software. Redis is excellent at what it does. The difference is architectural. Redis is a separate service — every cache hit requires a TCP round trip across the network, however short. Gold Lapel's L1 cache lives inside your application process. The data travels from one memory address to another. No serialization, no socket, no wire. Four microseconds.
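The in-process idea itself is easy to demonstrate. Python's functools.lru_cache is a far simpler cousin of an L1 cache — no invalidation, no proxy — but it illustrates the architecture: the cached result lives in the application's own memory, so a repeat call never leaves the process:

```python
from functools import lru_cache

calls = 0

@lru_cache(maxsize=1024)
def expensive_query(user_id):
    """Stands in for a slow database round trip."""
    global calls
    calls += 1
    return f"orders for user {user_id}"

first = expensive_query(1)    # first call does the work
second = expensive_query(1)   # repeat call is a lookup in process memory
```

No serialization, no socket, no wire — which is precisely why an in-process hit is measured in microseconds rather than a network round trip.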
But the number that deserves your attention is not the cache latency. It is this: Gold Lapel also creates materialized views and indexes that make the uncached query faster. Suppose the L1 cache did not exist at all. That 12,000ms query would still drop to 3.7ms, because the materialized view has already done the work.
This is the distinction I consider most important. A cache makes a slow query invisible. An optimization makes the query genuinely fast. Both have value. But when the optimization is in place, the cache becomes a courtesy rather than a necessity — and the consequences of a cache miss drop from catastrophic to negligible.
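The materialized-view idea deserves a concrete illustration. The sketch below emulates one in sqlite3 so it runs anywhere; in Postgres you would write CREATE MATERIALIZED VIEW and REFRESH MATERIALIZED VIEW instead of a plain table, and Gold Lapel's claim is that it creates and refreshes these for you. Table and column names here are invented for the example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (user_id INTEGER, total REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(1, 10.0), (1, 5.0), (2, 7.5)])

# Postgres: CREATE MATERIALIZED VIEW order_totals AS SELECT ...
# sqlite has no materialized views, so a plain table emulates one.
conn.execute("""
    CREATE TABLE order_totals AS
    SELECT user_id, SUM(total) AS total
    FROM orders
    GROUP BY user_id
""")

# The "uncached" query now reads precomputed rows instead of scanning
# and aggregating the base table on every request.
totals = conn.execute(
    "SELECT user_id, total FROM order_totals ORDER BY user_id"
).fetchall()
```

The aggregation work happens once, at view creation or refresh time, rather than on every read — which is why the uncached query stays fast even when the cache misses.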
What about the other duties Redis performs?
A fair question, and one I am glad you raised. Redis is considerably more than a cache — it handles pub/sub, sessions, job queues, rate limiting, and more. It is worth knowing that Postgres provides native capabilities for each of these — LISTEN/NOTIFY for pub/sub, SKIP LOCKED for job queues, and more. Many teams have not had occasion to explore them, which is entirely reasonable — PostgreSQL's documentation is thorough but not, I'm afraid, what anyone would call inviting. I have prepared a strategic overview of PostgreSQL caching without Redis that covers the full picture. For working code in Python, Node.js, Go, and Ruby, I have assembled a practical guide to LISTEN/NOTIFY, SKIP LOCKED, and UNLOGGED tables.
| Use case | Redis approach | Postgres approach |
|---|---|---|
| Pub/sub | SUBSCRIBE / PUBLISH | LISTEN / NOTIFY (built into Postgres) |
| Session storage | SET with TTL | Sessions table — Gold Lapel caches reads at sub-ms |
| Job queues | Lists, Streams | SKIP LOCKED + pg_notify (battle-tested at scale) |
| Rate limiting | INCR + EXPIRE | Counter table with window functions |
| Leaderboards | Sorted sets (ZSET) | Materialized view — Gold Lapel creates and refreshes automatically |
| Full-text search | RedisSearch | pg_trgm + tsvector — Gold Lapel auto-indexes LIKE/ILIKE patterns |
| Geospatial | GEO commands | PostGIS (the industry standard) |
| Streams | XADD / XREADGROUP / XACK | Gold Lapel stream_add, stream_read, stream_ack — durable consumer groups backed by Postgres tables |
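The job-queue row in the table rests on one pattern worth seeing in code: atomically claiming the oldest queued row. A minimal sketch, using sqlite3 so it runs anywhere (it assumes SQLite 3.35+ for RETURNING); on Postgres the inner SELECT would add FOR UPDATE SKIP LOCKED so a second worker skips a locked row instead of waiting on it. The schema is invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE jobs (id INTEGER PRIMARY KEY, payload TEXT, status TEXT DEFAULT 'queued')"
)
conn.executemany("INSERT INTO jobs (payload) VALUES (?)",
                 [("send-email",), ("resize-image",)])

def claim_next_job(conn):
    """Atomically claim the oldest queued job; returns None when the queue is empty.

    On Postgres the subquery would read:
        SELECT id FROM jobs WHERE status = 'queued'
        ORDER BY id LIMIT 1 FOR UPDATE SKIP LOCKED
    so concurrent workers never block on, or double-claim, the same row.
    """
    return conn.execute(
        """UPDATE jobs SET status = 'running'
           WHERE id = (SELECT id FROM jobs
                       WHERE status = 'queued'
                       ORDER BY id LIMIT 1)
           RETURNING id, payload"""
    ).fetchone()

job1 = claim_next_job(conn)   # claims the first queued job
job2 = claim_next_job(conn)   # claims the second
job3 = claim_next_job(conn)   # queue empty: returns None
```

This single statement is the heart of the "battle-tested at scale" claim: the database's own locking does the coordination, with no separate broker to run.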
"We stand on Redis's shoulders. But your application may not need to."
— from You Don't Need Redis, Chapter 2: The Infrastructure You Were Told You Needed
When to keep Redis — and I do mean this
I should be forthright about the boundaries of this comparison, because overstating a case serves no one well.
Redis is excellent software — genuinely well-engineered and purpose-built for workloads where it excels. It is the right tool when:
- Shared ephemeral state across servers — you need sub-millisecond access to data that does not require database durability, shared across a fleet of application servers with no single source of truth. This is Redis at its finest.
- Redis Streams for event sourcing at extreme throughput — Gold Lapel now provides full Streams support (stream_add, stream_create_group, stream_read, stream_ack, and stream_claim) backed by PostgreSQL tables with consumer groups, acknowledgment, and crash recovery. For most workloads, Gold Lapel's streams are a capable replacement. If you need millions of messages per second with minimal latency, Redis Streams remain purpose-built for that scale.
- Mature Redis infrastructure — your team has deep Redis expertise, monitoring is solid, and the operational cost is genuinely manageable. Working infrastructure has earned the benefit of the doubt.
My concern is narrower than it may appear. I am not suggesting Redis has no place in your architecture. I am suggesting that its most common deployment — as a query cache alongside Postgres because the database felt slow — is addressing the symptom rather than the cause. For that particular duty, Gold Lapel is simpler and requires no additional infrastructure.
The migration
# Before: App → Redis → Postgres
REDIS_URL=redis://localhost:6379
DATABASE_URL=postgres://user:pass@localhost:5432/mydb
# Your code:
# 1. Check Redis for cached result
# 2. Cache miss → query Postgres
# 3. Store result in Redis
# 4. On write → invalidate relevant Redis keys

# After: App → Gold Lapel → Postgres
# Option 1: Language wrapper (recommended)
# pip install goldlapel
import goldlapel
gl = goldlapel.start("postgresql://user:pass@mycompany.com/mydb")
# Use gl.url with any driver — Django, SQLAlchemy, psycopg, etc.

# Option 2: Standalone proxy
DATABASE_URL=postgres://user:pass@localhost:7932/mydb

# Either way, Gold Lapel learns your query patterns and caches automatically.
# Writes invalidate the cache — no manual invalidation needed.

Remove your Redis cache logic. Change your connection port. Gold Lapel learns your query patterns and begins optimizing on the first query. No cache warming. No invalidation code to maintain.
I appreciate that "just two lines of code" sounds like the sort of claim that is never actually that simple. In this case, it is. The most involved part of the migration is deleting code — the SET/GET calls, the invalidation hooks, the TTL configuration. Removing complexity is, I have always found, the most satisfying form of engineering.
Verdict
If you added Redis to make your Postgres queries faster, Gold Lapel handles that duty entirely — with less infrastructure, less code, and faster cache performance. Your application sends the same SQL it always has. Gold Lapel attends to the rest.
If you use Redis for capabilities beyond query caching — pub/sub, streams, shared ephemeral state — it earns its keep there, and I would not suggest otherwise. But the query caching layer, the manual invalidation, the stale data at inconvenient hours? That work belongs to something that can see the writes as they happen and respond accordingly.
We stand on Redis's shoulders. But your application may not need to.
I have written at some length on this very theme. The book chapter PostgreSQL vs Redis for Caching examines the full case — when Redis earns its place, when PostgreSQL handles the load alone, and how to know which situation you are in before adding a second database to your architecture.