Chapter 19: The Case for Simplicity
I have spent eighteen chapters demonstrating techniques. Allow me to spend one chapter explaining why they matter.
The materialized view that converts a seven-second dashboard query into a seven-millisecond index scan is a meaningful performance improvement. The connection pooler that prevents your serverless functions from exhausting the database is a meaningful reliability improvement. The read replica that distributes traffic across multiple copies of your data is a meaningful scaling improvement. Each of these techniques has been demonstrated with code, benchmarked with data, and implemented in seven languages and four frameworks.
But the most important thing this book has done — more important than any individual technique — is demonstrate that these improvements did not require adding a service to your infrastructure. They required using what was already there.
The default impulse in software engineering is to add. Add a caching layer. Add a search service. Add a message queue. Add a separate analytics database. Each addition solves a problem. Each addition also introduces a service to deploy, monitor, back up, secure, upgrade, and debug when it fails at three in the morning. The question this book has asked from the beginning — and which this chapter asks directly — is whether the addition was necessary.
This book is not anti-Redis. It is not anti-Elasticsearch. It is not anti-anything. It is pro-simplicity. Use what PostgreSQL offers before adding what PostgreSQL does not require.
One Database, Many Capabilities
Consider the typical modern application stack. PostgreSQL for relational data. Redis for caching and sessions. Elasticsearch for search. RabbitMQ or Amazon SQS for job queues. A separate analytics database for aggregations. That is five services. Five deployment pipelines. Five monitoring dashboards. Five sets of credentials to rotate. Five backup strategies to maintain. Five things that can fail independently, at any hour, in any combination.
Now consider what PostgreSQL already provides.
Caching. Materialized views — the central thesis of this book — pre-compute expensive query results and serve them at index-scan speed. A dashboard that aggregates millions of rows becomes a table that returns pre-computed results in milliseconds. No Redis required. No cache invalidation protocol. No serialization layer. The cache is a table. You query it with SQL. It refreshes on a schedule you control.
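The pattern is a few lines of SQL. As a minimal sketch — the table and column names here are illustrative, not from any earlier chapter:

```sql
-- Pre-compute the expensive aggregation once, not on every request.
CREATE MATERIALIZED VIEW dashboard_stats AS
SELECT seller_id,
       count(*)         AS order_count,
       sum(total_cents) AS revenue_cents
FROM   orders
GROUP  BY seller_id;

-- A unique index allows REFRESH ... CONCURRENTLY, which rebuilds the
-- view without blocking readers.
CREATE UNIQUE INDEX ON dashboard_stats (seller_id);

-- Run on whatever schedule your data's freshness requirements dictate.
REFRESH MATERIALIZED VIEW CONCURRENTLY dashboard_stats;
```

The dashboard then queries `dashboard_stats` like any other table, at index-scan speed.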
Sessions. UNLOGGED tables provide approximately 2.9 times faster writes than regular tables by skipping write-ahead log entries. Data survives clean restarts but is lost on crash — which is precisely the durability guarantee that sessions require. No Redis required.
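A session store in this style is one `CREATE TABLE` statement — the schema below is a hypothetical sketch:

```sql
-- Writes skip the write-ahead log. Data survives a clean restart but
-- is truncated after a crash -- exactly the guarantee sessions need.
CREATE UNLOGGED TABLE sessions (
    id         text PRIMARY KEY,
    data       jsonb NOT NULL DEFAULT '{}',
    expires_at timestamptz NOT NULL
);

-- Supports a periodic sweep: DELETE FROM sessions WHERE expires_at < now();
CREATE INDEX ON sessions (expires_at);
```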
Search. This one deserves its own equation. PostgreSQL provides tsvector for lexical full-text search — tokenization, stemming, stop word removal, boolean operators, and relevance ranking via ts_rank, available since version 8.3, released in 2008. It provides pgvector for semantic similarity search — vector embeddings, cosine distance, and HNSW indexes for approximate nearest neighbor queries. It provides pg_trgm for trigram-based fuzzy matching — typo tolerance, autocomplete, and "did you mean" suggestions. It provides fuzzystrmatch for phonetic matching — Soundex, Metaphone, and Levenshtein distance for "sounds like" queries. Together: Elasticsearch is approximately equal to tsvector plus pgvector plus pg_trgm plus fuzzystrmatch. Four extensions. Zero additional services.
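The lexical half of that equation, sketched against a hypothetical articles table:

```sql
-- A generated tsvector column stays in sync with the source text
-- automatically; the GIN index makes matching fast.
ALTER TABLE articles
    ADD COLUMN search tsvector
    GENERATED ALWAYS AS (to_tsvector('english', title || ' ' || body)) STORED;

CREATE INDEX ON articles USING gin (search);

-- Boolean operators, stemming, and ranking, all in one query.
SELECT title, ts_rank(search, query) AS rank
FROM   articles, to_tsquery('english', 'postgres & scaling') AS query
WHERE  search @@ query
ORDER  BY rank DESC;
```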
Pub/sub. LISTEN/NOTIFY provides real-time event notification between database sessions. A trigger fires on INSERT, sends a notification, and a listening process receives it within milliseconds. No Redis pub/sub required. No RabbitMQ required. The message passes through the database that already knows about the data change.
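The trigger side of that arrangement is brief. A sketch, assuming a hypothetical orders table:

```sql
-- Publish the new row's id on the 'new_order' channel after each insert.
CREATE FUNCTION notify_new_order() RETURNS trigger AS $$
BEGIN
    PERFORM pg_notify('new_order', NEW.id::text);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER orders_notify
AFTER INSERT ON orders
FOR EACH ROW EXECUTE FUNCTION notify_new_order();

-- Any connected session subscribes with:
LISTEN new_order;
```

Notifications issued inside a transaction are delivered only on commit, so listeners never hear about data that was rolled back.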
Job queues. Rails 8 made this official: SolidQueue stores background jobs in PostgreSQL and ships as the framework's default. 37signals processes 20 million jobs per day with it. GoodJob provides the same capability with built-in cron scheduling and a web dashboard. Laravel's database queue driver does it without any additional package. The pattern is proven at scale, and it requires no external service.
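These libraries differ in schema and features, but each rests on the same PostgreSQL primitive: FOR UPDATE SKIP LOCKED. A minimal sketch of the claim step, against a hypothetical jobs table:

```sql
-- Claim exactly one pending job. SKIP LOCKED lets many workers poll
-- concurrently without blocking each other or double-claiming a job.
UPDATE jobs
SET    status = 'running', started_at = now()
WHERE  id = (
    SELECT id
    FROM   jobs
    WHERE  status = 'pending'
    ORDER  BY created_at
    FOR UPDATE SKIP LOCKED
    LIMIT  1
)
RETURNING id, payload;
```

Because the claim is a single statement, a worker that crashes mid-job simply leaves its row behind for a reaper query to reset — no broker, no acknowledgement protocol.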
Document storage. JSONB with GIN indexes provides native JSON document storage with indexing, querying, and aggregation. You can store, query, and index semi-structured data without MongoDB.
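A sketch of the document pattern, with an illustrative events table:

```sql
CREATE TABLE events (
    id      bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    payload jsonb NOT NULL
);

-- A GIN index accelerates containment (@>) and key-existence (?) queries.
CREATE INDEX ON events USING gin (payload);

-- Find documents matching a partial structure, MongoDB-style.
SELECT *
FROM   events
WHERE  payload @> '{"type": "signup", "plan": "pro"}';
```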
Time-series. Table partitioning by time range, with automatic partition pruning, provides time-series storage and querying without InfluxDB or TimescaleDB — though TimescaleDB, notably, is itself a PostgreSQL extension, which rather proves the point.
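Declarative partitioning is likewise a handful of statements. A sketch, with a hypothetical metrics table:

```sql
CREATE TABLE metrics (
    recorded_at timestamptz NOT NULL,
    device_id   integer NOT NULL,
    value       double precision
) PARTITION BY RANGE (recorded_at);

-- One partition per month; new partitions are created ahead of time,
-- old ones are detached or dropped to expire data instantly.
CREATE TABLE metrics_2026_01 PARTITION OF metrics
    FOR VALUES FROM ('2026-01-01') TO ('2026-02-01');

-- A time predicate lets the planner prune to the matching partitions only.
SELECT avg(value)
FROM   metrics
WHERE  recorded_at >= '2026-01-15' AND recorded_at < '2026-01-16';
```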
Analytics. Materialized views pre-compute the aggregations that would otherwise require a separate OLAP system. The same materialized view that serves your dashboard also serves your analytics — one object, two purposes, zero additional services.
PostgreSQL is not a database. It is an ecosystem that most teams use as a database. The capabilities listed above have been available — most of them for years, some for over a decade — and yet teams continue to add services to solve problems that PostgreSQL solved before they wrote their first migration.
The Compounding Cost of Complexity
Every service you add to your infrastructure incurs a cost that extends far beyond the service's monthly invoice. Each service requires deployment configuration — containers, environment variables, health checks. Each requires monitoring and alerting — dashboards, thresholds, on-call rotations. Each requires backup and restore procedures — tested, documented, and rehearsed. Each requires credential management and rotation. Each requires version upgrades and compatibility testing against every other service in the stack. Each requires networking rules, firewall configuration, and security auditing. Each requires documentation — for the current team and for the developer who inherits the system after you leave.
These costs do not amortize. They compound. A team running five services does not spend five times the operational effort of a team running one. They spend considerably more, because the interactions between services create a combinatorial explosion of failure modes, consistency edge cases, and debugging complexity.
Gloria Mark's research at UC Irvine found that it takes 23 minutes and 15 seconds to regain focus after an interruption. This applies to cognitive context switches in infrastructure work as well. Debugging a cache invalidation bug that spans PostgreSQL, Redis, and Elasticsearch requires holding mental models of three systems, three query languages, and three consistency models simultaneously. Debugging the same issue when the cache is a materialized view requires one mental model, one query language, and one consistency model. The bug is the same. The cognitive load is a fraction.
The failure surface tells a similar story. Redis goes down: your cache disappears and traffic hits PostgreSQL directly, potentially overwhelming it. Elasticsearch goes down: your search is unavailable and users see errors. The message queue goes down: background jobs stop processing and the queue backs up. When all of these capabilities live inside PostgreSQL, there is one thing that can go down — and if PostgreSQL goes down, everything was going down regardless. The failure surface contracts from five independent services to one, and that one service is the one you were already monitoring, backing up, and protecting with replicas and failover.
There is also the matter of hiring. A team that runs PostgreSQL, Redis, Elasticsearch, and RabbitMQ needs expertise in four systems. A team that runs PostgreSQL needs expertise in one. The depth of understanding you can achieve when PostgreSQL is your only database system is materially greater than the breadth you achieve when it is one of four. Depth produces mastery. Breadth produces familiarity. Mastery solves problems at three in the morning. Familiarity consults the documentation.
Simplicity is not the absence of capability. It is the discipline to use existing capability before adding new complexity. Every service in your infrastructure should be there because you tried PostgreSQL first and found it genuinely insufficient — not because a blog post recommended it, a conference talk demonstrated it, or a vendor's sales team presented it.
The Industry Agrees
The thesis of this book is not contrarian. As of 2026, it is increasingly the consensus.
The evidence begins with money. Snowflake acquired Crunchy Data — a PostgreSQL managed hosting provider — for $250 million. Databricks acquired Neon — a serverless PostgreSQL platform — for $1 billion. Supabase raised a $100 million Series E at a $5 billion valuation. The largest data infrastructure companies in the world are placing their bets on PostgreSQL as the foundation. This is not a trend driven by developer sentiment. It is a trend driven by enterprise investment at a scale that does not tolerate fashion.
The evidence continues with frameworks. Rails 8 replaced Redis with PostgreSQL-backed alternatives in its default stack. SolidQueue for background jobs, SolidCache for caching, SolidCable for WebSocket connections — all backed by PostgreSQL. The most opinionated framework in web development examined its own default architecture and decided that Redis was optional. When the framework that popularized Redis-backed background jobs concludes that PostgreSQL is sufficient, the signal is difficult to ignore.
The evidence extends to scale. OpenAI runs ChatGPT for 800 million users on a single unsharded PostgreSQL primary with approximately 50 read replicas. Millions of queries per second. Five-nines availability. Low double-digit millisecond p99 latency. Their infrastructure engineer stated it directly at PGConf.Dev 2025: PostgreSQL can scale to support massive workloads without sharding, using a single primary writer. If you believe you need to shard before reaching 800 million users, something in your architecture deserves examination.
The evidence culminates in community. "It's 2026, Just Use Postgres" reached the front page of Hacker News. PostgreSQL has been the most admired database in Stack Overflow's developer survey for multiple consecutive years. The sentiment is no longer niche, nor is it the province of PostgreSQL enthusiasts talking to each other. It is the mainstream position of the industry.
For 99% of applications, PostgreSQL handles everything you need. The remaining 1% — petabytes of logs ingested per hour, exotic real-time analytics across distributed clusters, specialized graph traversals that exceed what recursive CTEs can express — will know they are the 1% because they will have benchmarked their workload and hit a genuine wall. Until you have hit that wall yourself, adding services solves imagined problems while creating real ones.
Good Evening
We began this book with a problem. A dashboard query that took seven seconds. An infrastructure that had grown more complex than the problems it was solving. A household, if you will, that had hired too many staff for too few duties.
The solutions were not exotic. They were materialized views and connection pools and proper indexes — techniques that have existed inside PostgreSQL for years, waiting patiently for someone to use them. The performance improvements were not marginal. They were orders of magnitude. Seven seconds to seven milliseconds. A thousand concurrent connections reduced to twenty. An architecture simplified from five services to one.
These techniques were demonstrated across Python, Node.js, Ruby, Java, PHP, Go, and .NET. They were integrated with Django, Rails, Spring Boot, and Laravel. They were tested against Prisma, Drizzle, and SQLAlchemy. They were scheduled with pg_cron, SolidQueue, Celery, and time.Ticker. They were pooled through PgBouncer and PgCat. They were scaled with read replicas and partitioning. And they were validated by the architecture decisions of OpenAI, 37signals, and the broader industry.
The next time someone suggests adding a service to your infrastructure, I would ask you to consider — before approving the pull request, before provisioning the instance, before adding the monitoring dashboard — whether PostgreSQL already provides what you need. It usually does. And when it does, the simplest architecture is not merely the easiest to build. It is the easiest to understand, the easiest to debug, the easiest to secure, and the easiest to hand to the developer who comes after you.
A well-run household does not require a large staff. It requires the right staff, properly employed, attending to their duties with competence and without unnecessary complication.
Your PostgreSQL database is better staff than you have been led to believe. I trust this book has demonstrated as much.
Good evening.