You Don't Need Redis

Chapter 4: A Proper Introduction to PostgreSQL Materialized Views

The Waiter of Gold Lapel · Updated Mar 30, 2026 · Published Mar 11, 2026 · 23 min

Allow me to introduce you to the most powerful feature in your database that you have, in all likelihood, never used.

A materialized view is a query whose results are stored as a physical table. You define the query once — it may join six tables, aggregate four million rows, compute running totals with window functions, and filter by date ranges that span years of data. PostgreSQL executes that query, writes the results to disk as a proper heap table with its own storage and its own statistics, and serves those results at index-scan speed until you explicitly ask it to refresh. The query takes three seconds. The read takes two milliseconds.

This is not caching in the way Redis is caching. There is no separate service, no eviction policy, no serialization overhead, no network hop to a different process. The materialized view lives inside your database, alongside your tables and indexes, managed by the same VACUUM process, backed up by the same pg_dump, replicated by the same streaming replication. It is PostgreSQL through and through.

The concept was introduced in version 9.3, released in September 2013. The ability to refresh without blocking readers — REFRESH CONCURRENTLY — arrived in version 9.4 the following year. In the twelve years since, materialized views have remained the most consistently underused feature in the PostgreSQL ecosystem. Not because they are difficult. The syntax, as you will shortly see, is straightforward. Not because they are limited. They support every SQL feature PostgreSQL offers — window functions, CTEs, LATERAL joins, recursive queries, JSONB aggregation, full-text search vectors. They are underused because nobody introduces developers to them. Bootcamps skip them. ORM documentation ignores them. The PostgreSQL documentation itself devotes a single page to them, tucked between the rule system chapter and the query planner chapter, where few application developers venture.

This chapter is the introduction they deserve. By its end, you will know how materialized views work under the hood, how to create and manage them with complete command of the syntax, how they compare to regular views and tables, how to index them properly, what they genuinely cannot do, and what they have achieved in production systems serving millions of users.

Chapter 5 will address the question you are already forming: how do you keep them fresh?

Patience. We shall get there. First, the fundamentals.

How It Works Under the Hood

I find that developers use features more confidently when they understand the machinery. If you prefer to skip to the syntax, I shall not be offended — Section 3 awaits. But I believe what follows will repay your attention.

What happens during CREATE

When you execute CREATE MATERIALIZED VIEW, PostgreSQL performs a sequence of operations that are worth understanding because they explain the feature's behavior and limitations.

First, ExecCreateTableAs validates that the name doesn't conflict with existing objects and determines whether to populate the view with data. Then create_ctas_internal extracts column definitions from your SELECT query — names, types, constraints — and calls DefineRelation followed by heap_create_with_catalog to create an actual heap relation. This is the same code path used to create a regular table. The materialized view receives its own entry in pg_class with relkind = 'm' — distinct from tables (r), regular views (v), and indexes (i). It gets its own relfilenode under your PostgreSQL data directory, its own entry in the statistics collector, its own visibility in pg_stat_user_tables.
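You can see this catalog entry for yourself. A quick inspection query, assuming a materialized view named mv_daily_revenue (created later in this chapter) already exists:

```sql
-- relkind identifies the relation type:
-- r = table, v = regular view, i = index, m = materialized view
SELECT relname, relkind, relfilenode
FROM pg_class
WHERE relname = 'mv_daily_revenue';
```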

Next, StoreViewQuery saves your defining SELECT in the pg_rewrite system catalog. This query is consulted only during REFRESH — it plays no role during reads. When your application queries the materialized view, PostgreSQL reads from the heap table exactly as it would read from any regular table. The query planner does not know or care that the data was produced by a stored query. EXPLAIN output for a materialized view scan is indistinguishable from a table scan.
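You can confirm this with EXPLAIN. A sketch, assuming the mv_daily_revenue view defined later in this chapter:

```sql
EXPLAIN SELECT * FROM mv_daily_revenue WHERE sale_date = CURRENT_DATE;
-- The plan shows an ordinary Seq Scan (or Index Scan, if indexed) on
-- mv_daily_revenue, with none of the joins or aggregates that produced the data.
```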

Finally, unless you specified WITH NO DATA, PostgreSQL executes the stored query through its standard planner pipeline — QueryRewrite, pg_plan_query, ExecutorStart, ExecutorRun — and writes the resulting tuples into the new heap. The materialized view is now populated and scannable.

CYBERTEC, one of Europe's foremost PostgreSQL consultancies, captures the essence precisely: a materialized view is stored just like a table, so follow the same recommendations you would follow when creating a standard table. Index it. Vacuum it. Monitor it. It is a table in every practical sense, with a query attached for repopulating its contents.

What happens during standard REFRESH

This is where most explanations get it wrong. Standard REFRESH MATERIALIZED VIEW does not delete all rows and insert new ones. That would produce enormous numbers of dead tuples and bloat. Instead, it performs a heap swap — a mechanism shared with CLUSTER and VACUUM FULL.

The process: PostgreSQL acquires an AccessExclusiveLock on the materialized view, which blocks all operations — including SELECTs. It creates a brand-new heap file. It executes the stored query and writes the results into this new heap. Then it atomically swaps the old heap for the new one — updating pg_class to point to the new relfilenode and dropping the old file. Indexes are rebuilt from scratch as part of the process.

The critical insight: because the old heap is simply replaced, standard REFRESH produces zero dead tuples on the materialized view. There is nothing for VACUUM to clean up. The materialized view is as compact and efficient after refresh as a freshly created table. This makes standard REFRESH the better choice for materialized views where blocking reads for a few seconds is acceptable.
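You can observe the heap swap directly: the view's OID stays constant while its relfilenode changes. A sketch, assuming mv_daily_revenue exists:

```sql
SELECT pg_relation_filenode('mv_daily_revenue');  -- note the number
REFRESH MATERIALIZED VIEW mv_daily_revenue;
SELECT pg_relation_filenode('mv_daily_revenue');  -- a different number: a brand-new heap file
```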

The cost is that AccessExclusiveLock. While the refresh is running — which may take seconds or minutes depending on the complexity of your query and the volume of data — no other session can read from the materialized view. For a dashboard that refreshes in 2 seconds every 15 minutes, this is negligible. For a materialized view that takes 5 minutes to refresh and serves real-time API traffic, it is unacceptable.

What happens during REFRESH CONCURRENTLY

REFRESH MATERIALIZED VIEW CONCURRENTLY uses a fundamentally different algorithm called refresh_by_match_merge, designed to allow reads throughout the refresh process.

The process: PostgreSQL acquires an ExclusiveLock — which blocks writes and other refreshes but permits SELECTs to continue. It creates a temporary heap and executes the stored query into it, producing the fresh dataset. Then, instead of swapping heaps, it performs a FULL OUTER JOIN between the existing materialized view data and the new temporary data, using the columns of the UNIQUE index as the join key. The result of this join categorizes every row: rows only in the new data are INSERTed, rows only in the old data are DELETEd, rows present in both but with different values are UPDATEd, and identical rows are left untouched.

This is why CONCURRENTLY requires a UNIQUE index. Without a unique key, rows have no identity — the FULL OUTER JOIN has nothing to match on. Kevin Grittner, who wrote the original patch, explained on the PostgreSQL mailing list that the requirement exists primarily for correctness in the face of duplicate rows containing no nulls. The UNIQUE index must use only column names — no expressions, no WHERE clause — and must cover all rows.

The trade-offs are different from standard REFRESH. Reads continue uninterrupted — your application never notices the refresh happening. But the INSERT, UPDATE, and DELETE operations that apply the diff produce dead tuples, exactly as they would on a regular table. These dead tuples must be cleaned by VACUUM. And the diff computation itself adds overhead: when most rows change between refreshes, CONCURRENTLY can be significantly slower than standard REFRESH because it must compare every row rather than simply replacing the heap.
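The dead tuples are visible in the statistics views. A monitoring sketch (counts are approximate, since statistics collection is asynchronous):

```sql
SELECT relname, n_live_tup, n_dead_tup, last_autovacuum
FROM pg_stat_user_tables
WHERE relname = 'mv_daily_revenue';
```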

| Operation | Lock Level | Blocks SELECTs? | Blocks Other REFRESH? | Produces Dead Tuples? |
|---|---|---|---|---|
| REFRESH MATERIALIZED VIEW | AccessExclusiveLock | Yes — blocks everything | Yes | No — heap swap |
| REFRESH ... CONCURRENTLY | ExclusiveLock | No — reads continue | Yes | Yes — needs VACUUM |

WAL implications

Both refresh methods generate Write-Ahead Log entries, but the profiles differ. Standard REFRESH generates WAL proportional to the full dataset size — the entire new heap is WAL-logged. The old heap's deletion generates minimal WAL. Concurrent REFRESH generates WAL for each individual INSERT, UPDATE, and DELETE during the diff phase. When many rows change, concurrent refresh can generate more WAL than standard refresh. When few rows change, it generates less.

This matters for streaming replication. A large materialized view refresh can cause replication lag on replicas. If your refresh window is tight and your replicas are latency-sensitive, monitor pg_stat_replication during refresh operations.
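A minimal check, using the lag columns available since PostgreSQL 10 (replay_lag is the delay before readers on the replica see the refreshed data):

```sql
SELECT application_name, state, write_lag, flush_lag, replay_lag
FROM pg_stat_replication;
```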

Views and Materialized Views and Tables

I am asked this question with sufficient frequency that I shall address it definitively, in a format that permits no ambiguity.

A regular view is an alias for a query. It stores nothing. When you SELECT from it, PostgreSQL substitutes the view's definition into your query, optimizes the combined result, and executes it from scratch. The view is a convenience — it provides abstraction, simplifies complex queries, and can restrict access to specific columns or rows. But it offers no performance benefit. The full cost of the underlying query is paid on every read.

A materialized view stores query results as a physical heap table. Reads are fast — they access stored data, not recomputed data. But the data is a snapshot, frozen at the moment of the last REFRESH. The freshness trade-off — read speed in exchange for data staleness — is the fundamental design decision. You choose how stale the data can be by choosing how often to refresh.

A table is the source of truth. You INSERT, UPDATE, and DELETE data directly. It has no defining query. It does not refresh. It simply stores what you put in it.

| Attribute | Regular View | Materialized View | Table |
|---|---|---|---|
| Physical storage | None | Yes — heap table with relfilenode | Yes |
| Read performance | Recomputes on every read | Reads stored data — index-scan speed | Reads stored data |
| Data freshness | Always current | Stale until REFRESH | Always current (for committed data) |
| Can be indexed | No | Yes — B-tree, GIN, GiST, BRIN, hash, covering, partial, expression | Yes — all types |
| INSERT / UPDATE / DELETE | Yes (simple, updatable views) | No — read-only | Yes |
| CREATE OR REPLACE | Yes | No — must DROP and recreate | N/A |
| Triggers | INSTEAD OF triggers | Not supported | All trigger types |
| WITH CHECK OPTION | Yes | No | N/A (use CHECK constraints) |
| Storage cost | Zero | Data + indexes (same as equivalent table) | Data + indexes |
| Maintenance required | None | REFRESH + VACUUM (for CONCURRENTLY) | VACUUM + ANALYZE |
| Best suited for | Abstraction, security, simple reusable queries | Expensive aggregations, dashboards, pre-computed API responses, search indexes | Primary data storage |

The decision framework requires only two questions. First: does this query run repeatedly, cost more than 50 milliseconds, and power a feature that doesn't require data from the current second? If yes, use a materialized view. Second: is real-time freshness essential, or is the query already fast? If either is true, a regular view — or no view at all — is the appropriate choice.

The Complete Syntax Reference

I shall now present every command for working with materialized views in PostgreSQL, with production-grade examples. I intend this section to be the one you keep open in a browser tab while working.

CREATE MATERIALIZED VIEW

create_materialized_view.sql
CREATE MATERIALIZED VIEW [ IF NOT EXISTS ] view_name
    [ (column_name [, ...] ) ]
    [ USING method ]
    [ WITH ( storage_parameter [= value] [, ... ] ) ]
    [ TABLESPACE tablespace_name ]
    AS query
    [ WITH [ NO ] DATA ]

A realistic example — daily revenue by product category for the last 90 days, joining four tables:

mv_daily_revenue.sql
CREATE MATERIALIZED VIEW mv_daily_revenue AS
SELECT
    date_trunc('day', o.created_at)::date AS sale_date,
    c.id AS category_id,
    c.name AS category_name,
    COUNT(DISTINCT o.id) AS order_count,
    COUNT(oi.id) AS items_sold,
    SUM(oi.quantity * oi.unit_price)::numeric(13,2) AS revenue,
    ROUND(AVG(oi.quantity * oi.unit_price)::numeric, 2) AS avg_item_value
FROM orders o
JOIN order_items oi ON oi.order_id = o.id
JOIN products p ON p.id = oi.product_id
JOIN categories c ON c.id = p.category_id
WHERE o.status = 'confirmed'
  AND o.created_at >= CURRENT_DATE - INTERVAL '90 days'
GROUP BY 1, 2, 3
WITH DATA;

Against production tables with 2 million orders, 8 million order items, and 50,000 products, this query might take 3–5 seconds. The materialized view executes it once. Every subsequent read — SELECT * FROM mv_daily_revenue WHERE sale_date = CURRENT_DATE — completes in single-digit milliseconds.

WITH DATA (the default) executes the query immediately and populates the materialized view. It is scannable as soon as the command completes.

WITH NO DATA stores only the definition. The view exists in the catalog but contains no data and cannot be queried. Any SELECT returns an error:

error output
ERROR: materialized view "mv_daily_revenue" has not been populated
HINT: Use the REFRESH MATERIALIZED VIEW command.

Use WITH NO DATA when the underlying tables don't yet have data (common in migration scripts), when you want to create indexes before the first population, or when the initial query would be too expensive to run during a deployment window.
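A typical migration sequence using WITH NO DATA: define first, index while empty, populate once. A sketch — note that the first population cannot use CONCURRENTLY:

```sql
CREATE MATERIALIZED VIEW mv_daily_revenue AS
SELECT ...   -- the defining query from above
WITH NO DATA;

-- Index creation is instant on an empty view
CREATE UNIQUE INDEX idx_mv_revenue_pk
    ON mv_daily_revenue (sale_date, category_id);

-- First population must be a standard (non-concurrent) refresh
REFRESH MATERIALIZED VIEW mv_daily_revenue;
```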

IF NOT EXISTS silently succeeds if the view already exists. Essential for idempotent migration scripts.

USING method specifies the table access method. heap is the default and currently the only built-in option, though extensions may provide others. Future PostgreSQL versions may offer columnar storage methods, which would be particularly interesting for analytics-heavy materialized views.

WITH (storage_parameter) accepts the same storage parameters as regular tables. The most useful:

storage_parameters.sql
CREATE MATERIALIZED VIEW mv_example
WITH (fillfactor = 90, autovacuum_vacuum_scale_factor = 0.01)
AS SELECT ... WITH DATA;

fillfactor = 90 leaves 10% free space on each page for HOT (Heap-Only Tuple) updates during CONCURRENTLY refresh, reducing the need for index updates when rows change. autovacuum_vacuum_scale_factor = 0.01 triggers autovacuum when just 1% of rows are dead — essential for frequently-refreshed views.

REFRESH MATERIALIZED VIEW

refresh_syntax.sql
REFRESH MATERIALIZED VIEW [ CONCURRENTLY ] name
    [ WITH [ NO ] DATA ]

Standard refresh: replaces all data via heap swap. Blocks reads.

refresh_standard.sql
REFRESH MATERIALIZED VIEW mv_daily_revenue;

Concurrent refresh: applies diff via FULL OUTER JOIN. Reads continue.

refresh_concurrently.sql
REFRESH MATERIALIZED VIEW CONCURRENTLY mv_daily_revenue;

Prerequisites for CONCURRENTLY: the materialized view must already be populated (cannot refresh WITH NO DATA views concurrently), and at least one UNIQUE index must exist using only column names — no expressions, no WHERE clause, covering all rows.

Without the required UNIQUE index, you'll see:

error output
ERROR: cannot refresh materialized view "public.mv_daily_revenue" concurrently
HINT: Create a unique index with no WHERE clause on one or more columns
of the materialized view.

Privileges: PostgreSQL 17 introduced the MAINTAIN privilege, which allows a role to refresh a materialized view without owning it. Prior versions required ownership.

grant_maintain.sql
GRANT MAINTAIN ON mv_daily_revenue TO refresh_role;

REFRESH WITH NO DATA depopulates a materialized view, returning it to the unscannable state. Rarely useful in practice, but available.

Indexing Materialized Views

A materialized view without indexes is a table without indexes. It will perform sequential scans on every query. This is, I regret to report, the most common mistake I encounter — developers create a materialized view, marvel at the fast refresh, and then wonder why their reads are still slow.

The materialized view is a table. Treat it like one. Analyze your query patterns and add the indexes they require.

indexes.sql
-- Required for CONCURRENTLY: UNIQUE index on column names only
CREATE UNIQUE INDEX idx_mv_revenue_pk
    ON mv_daily_revenue (sale_date, category_id);

-- For filtering by category
CREATE INDEX idx_mv_revenue_category
    ON mv_daily_revenue (category_id);

-- For date range queries, descending order
CREATE INDEX idx_mv_revenue_date
    ON mv_daily_revenue (sale_date DESC);

-- Covering index — includes extra columns to enable index-only scans
CREATE INDEX idx_mv_revenue_covering
    ON mv_daily_revenue (sale_date, category_id)
    INCLUDE (revenue, order_count);

Every index type available for regular tables works on materialized views:

B-tree — the default, suitable for equality and range queries. Most materialized view indexes will be B-tree.

GIN — for JSONB columns, tsvector columns (full-text search), and array columns. If your materialized view pre-computes search vectors or JSON API responses, GIN indexes make the reads sub-millisecond.

gin_index.sql
-- On a materialized view with a pre-computed search vector
CREATE INDEX idx_mv_search_gin
    ON mv_product_search USING GIN (search_vector);

GiST — for geometric data, range types, and certain full-text search scenarios.

BRIN — for naturally ordered data like timestamps or sequential IDs. Extremely compact — suitable for very large materialized views where B-tree index size would be problematic.

brin_index.sql
-- On a time-series materialized view with millions of rows
CREATE INDEX idx_mv_metrics_brin
    ON mv_hourly_metrics USING BRIN (metric_time);

Hash — for equality-only lookups. Rarely the best choice, but useful when B-tree overhead isn't justified.

Partial indexes — index a subset of rows. Useful when most queries filter to a specific segment:

partial_index.sql
CREATE INDEX idx_mv_revenue_recent
    ON mv_daily_revenue (sale_date, category_id)
    WHERE sale_date >= '2026-01-01';

Expression indexes — index a computed expression. Note: expression indexes cannot serve as the UNIQUE index required for CONCURRENTLY.

expression_index.sql
CREATE INDEX idx_mv_revenue_month
    ON mv_daily_revenue (date_trunc('month', sale_date));

The Indexing Rules (I suggest committing these to memory):

  1. Always create at least one UNIQUE index — this enables REFRESH CONCURRENTLY, which you will almost certainly need in production.
  2. The UNIQUE index for CONCURRENTLY must use only column names. No expressions, no WHERE clause, no INCLUDE columns.
  3. Add indexes for every column combination your application filters, sorts, or joins by. The materialized view is a table. Unindexed columns mean sequential scans.
  4. For materialized views with JSONB or tsvector columns, GIN indexes are essential — without them, you lose the performance advantage entirely.
  5. After creating indexes, run ANALYZE mv_name to update statistics. The query planner needs accurate row counts and value distributions to choose good plans.

Inspecting Materialized Views

The pg_matviews system catalog provides metadata about all materialized views in the current database:

inspect_matviews.sql
SELECT schemaname, matviewname, matviewowner, ispopulated
FROM pg_matviews
WHERE schemaname = 'public'
ORDER BY matviewname;

| Column | Type | Description |
|---|---|---|
| schemaname | name | Schema containing the materialized view |
| matviewname | name | Name of the materialized view |
| matviewowner | name | Owner |
| tablespace | name | Tablespace (null if default) |
| hasindexes | boolean | Whether it has any indexes |
| ispopulated | boolean | Whether it currently contains data |
| definition | text | The reconstructed SELECT query |

For monitoring disk usage:

monitor_disk_usage.sql
SELECT
    m.matviewname,
    pg_size_pretty(pg_relation_size(c.oid)) AS data_size,
    pg_size_pretty(pg_indexes_size(c.oid)) AS index_size,
    pg_size_pretty(pg_total_relation_size(c.oid)) AS total_size,
    (SELECT n_dead_tup FROM pg_stat_user_tables
     WHERE relname = m.matviewname) AS dead_tuples
FROM pg_matviews m
JOIN pg_class c ON c.relname = m.matviewname
JOIN pg_namespace n ON n.oid = c.relnamespace AND n.nspname = m.schemaname
ORDER BY pg_total_relation_size(c.oid) DESC;

The dead_tuples column is your canary for CONCURRENTLY-refreshed views. If dead tuple counts are growing without bound, autovacuum is not keeping up — consult Chapter 5 for the remedies.
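When autovacuum cannot keep up, an explicit vacuum immediately after each concurrent refresh is a reasonable stopgap:

```sql
REFRESH MATERIALIZED VIEW CONCURRENTLY mv_daily_revenue;
VACUUM (ANALYZE) mv_daily_revenue;  -- reclaim dead tuples and refresh planner statistics
```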

ALTER MATERIALIZED VIEW

You can rename a materialized view, move it to a different schema, change its tablespace, set storage parameters, adjust column statistics targets, and change ownership. You cannot alter the defining query — for that, you must DROP and recreate.

alter_materialized_view.sql
-- Rename
ALTER MATERIALIZED VIEW mv_daily_revenue RENAME TO mv_revenue_by_day;

-- Move to a different schema
ALTER MATERIALIZED VIEW mv_daily_revenue SET SCHEMA reporting;

-- Change storage parameters
ALTER MATERIALIZED VIEW mv_daily_revenue SET (
    fillfactor = 85,
    autovacuum_vacuum_scale_factor = 0.02
);

-- Move to a faster tablespace
ALTER MATERIALIZED VIEW mv_daily_revenue SET TABLESPACE fast_nvme;

-- Change column compression (PG 14+)
ALTER MATERIALIZED VIEW mv_daily_revenue
    ALTER COLUMN category_name SET COMPRESSION lz4;

-- Transfer ownership
ALTER MATERIALIZED VIEW mv_daily_revenue OWNER TO analytics_role;

The inability to alter the defining query is the most significant operational inconvenience. When your query needs to change — a new column, a modified JOIN, an updated WHERE clause — you must:

recreate_materialized_view.sql
BEGIN;
DROP MATERIALIZED VIEW IF EXISTS mv_daily_revenue CASCADE;
CREATE MATERIALIZED VIEW mv_daily_revenue AS
    -- your updated query
WITH DATA;
CREATE UNIQUE INDEX idx_mv_revenue_pk ON mv_daily_revenue (sale_date, category_id);
-- recreate all other indexes
COMMIT;

The CASCADE in the DROP is important — it removes any dependent objects (other materialized views, regular views) that reference this one. The entire operation runs in a transaction. If any step fails, nothing changes.

Rails' Scenic gem and Flyway's repeatable migrations both handle this pattern gracefully, as we'll see in the framework chapters.

DROP MATERIALIZED VIEW

drop_syntax.sql
DROP MATERIALIZED VIEW [ IF EXISTS ] name [, ...] [ CASCADE | RESTRICT ]

CASCADE drops dependent objects. RESTRICT (the default) refuses to drop if any dependencies exist.

drop_example.sql
-- Safe for idempotent scripts
DROP MATERIALIZED VIEW IF EXISTS mv_daily_revenue CASCADE;

What PostgreSQL Does Not Provide

I should be forthcoming about the gaps, because pretending they do not exist would be a disservice to you and an embarrassment to me. PostgreSQL's materialized views are the simplest implementation among major databases. That simplicity means flexibility — you can use any SQL feature in the defining query, with no restrictions. It also means PostgreSQL asks more of you operationally.

No automatic refresh

PostgreSQL has no built-in mechanism to refresh a materialized view on a schedule or in response to data changes. Unlike Oracle's ON COMMIT REFRESH, which automatically refreshes when the underlying transaction commits, or SQL Server's indexed views, which are maintained automatically on every base table modification, PostgreSQL requires you to call REFRESH explicitly.

This is by design. PostgreSQL favors composable primitives over opinionated automation — it provides the mechanism and leaves the policy to you. The practical consequence is that you must build the refresh scheduling yourself.

Chapter 5 is devoted entirely to this topic, covering seven distinct approaches from the trivial (pg_cron: one line of SQL) to the sophisticated (pg_ivm: incremental maintenance via triggers).

No built-in freshness tracking

PostgreSQL does not record when a materialized view was last refreshed. There is no last_refreshed column in pg_matviews. You cannot query the system catalog to determine whether your materialized view contains data from five minutes ago or five days ago.

The practical workaround is a refresh log table:

mv_refresh_log.sql
CREATE TABLE mv_refresh_log (
    view_name TEXT PRIMARY KEY,
    last_refreshed TIMESTAMPTZ NOT NULL,
    duration INTERVAL,
    row_count BIGINT
);

Wrapped in a function that logs every refresh:

logged_refresh.sql
CREATE OR REPLACE FUNCTION logged_refresh(p_view TEXT)
RETURNS VOID LANGUAGE plpgsql AS $$
DECLARE
    v_start TIMESTAMPTZ := clock_timestamp();
    v_rows BIGINT;
BEGIN
    EXECUTE format('REFRESH MATERIALIZED VIEW CONCURRENTLY %I', p_view);
    EXECUTE format('SELECT count(*) FROM %I', p_view) INTO v_rows;

    INSERT INTO mv_refresh_log (view_name, last_refreshed, duration, row_count)
    VALUES (p_view, clock_timestamp(), clock_timestamp() - v_start, v_rows)
    ON CONFLICT (view_name) DO UPDATE SET
        last_refreshed = EXCLUDED.last_refreshed,
        duration = EXCLUDED.duration,
        row_count = EXCLUDED.row_count;
END $$;

Call SELECT logged_refresh('mv_daily_revenue') instead of raw REFRESH, and you have a complete audit trail.

No CREATE OR REPLACE

Unlike regular views, which support CREATE OR REPLACE VIEW for seamless definition updates, materialized views require DROP followed by CREATE when the query changes. This drops all indexes, which must be recreated. In migration frameworks, the pattern is well-established — Flyway's repeatable migrations (R__ prefix), Liquibase's runOnChange changesets, and Rails' Scenic gem all handle it — but it remains an operational inconvenience that regular views do not share.

No parameterized materialized views

A materialized view cannot accept parameters. You cannot define mv_sales_for_region(region_id) and have PostgreSQL store a separate result set per region. The query is fixed at creation time.

Three workarounds, in order of preference:

Materialize all data, filter at read time. The most common approach. Create the materialized view with all regions, index the region column, and let the application's WHERE clause do the filtering. The index scan is fast enough that the "unnecessary" data causes no practical harm.

Per-segment materialized views. When the number of segments is small and stable — 5 geographic regions, 3 product tiers — create a separate materialized view for each. Manageable at small scale; untenable at scale.

Session variable filtering. Create a regular view on top of the materialized view that filters by current_setting('app.tenant_id'). The materialized view contains all data; the regular view exposes only the current tenant's rows. Useful for multi-tenant applications where row-level security applies.
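A sketch of the session-variable approach — the view names, the tenant_id column, and the app.tenant_id setting are all illustrative assumptions:

```sql
-- The materialized view holds every tenant's rows; the regular view
-- exposes only the current tenant's slice
CREATE VIEW v_tenant_revenue AS
SELECT *
FROM mv_revenue_all_tenants
WHERE tenant_id = current_setting('app.tenant_id')::bigint;

-- The application sets the variable once per connection or transaction:
-- SET app.tenant_id = '42';
```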

How PostgreSQL compares to other databases

| Feature | PostgreSQL | Oracle | SQL Server |
|---|---|---|---|
| Auto-refresh on commit | No | Yes | Yes (always, for indexed views) |
| Incremental refresh | No (pg_ivm extension available) | Yes (FAST REFRESH with MV logs) | Yes (automatic) |
| Query rewrite by optimizer | No | Yes | Yes (Enterprise Edition) |
| Built-in staleness tracking | No | Yes (DBA_MVIEWS.STALENESS) | N/A |
| All SQL features in MV query | Yes — no restrictions | Mostly — restrictions apply for FAST REFRESH (no ROWNUM, SYSDATE, remote tables) | No — many restrictions (no OUTER JOIN, no subqueries, limited aggregates) |
| Scheduler included | No (pg_cron extension) | Yes (DBMS_SCHEDULER) | Yes (SQL Server Agent) |

PostgreSQL's implementation is the most flexible and the least automated. You can use any SQL feature in the defining query — window functions, CTEs, LATERAL joins, recursive queries — with zero restrictions. SQL Server's indexed views, by contrast, prohibit OUTER JOINs, subqueries, DISTINCT, UNION, and most aggregate functions beyond SUM and COUNT_BIG. Oracle's materialized views are the most feature-complete, with incremental refresh, query rewrite, and built-in staleness tracking — but Oracle is Oracle, and its licensing implications are a chapter unto themselves.

Real-World Patterns

Theory is necessary. Evidence is persuasive. Allow me to present five patterns I have observed in production, each with working SQL and specific performance data.

Pattern 1: Dashboard pre-computation

The most common and highest-value pattern. Your analytics dashboard joins orders, products, customers, and events — potentially scanning tens of millions of rows — and presents daily or weekly metrics. Without a materialized view, every dashboard load recomputes the aggregation. With one, the computation happens once per refresh cycle.

mv_dashboard_metrics.sql
CREATE MATERIALIZED VIEW mv_dashboard_metrics AS
SELECT
    date_trunc('day', e.created_at)::date AS day,
    COUNT(DISTINCT e.user_id) AS unique_users,
    SUM(CASE WHEN e.event_type = 'purchase'
        THEN e.amount ELSE 0 END)::numeric(13,2) AS revenue,
    COUNT(CASE WHEN e.event_type = 'purchase' THEN 1 END) AS purchases,
    COUNT(CASE WHEN e.event_type = 'signup' THEN 1 END) AS signups,
    ROUND(
        COUNT(CASE WHEN e.event_type = 'purchase' THEN 1 END)::numeric /
        NULLIF(COUNT(DISTINCT e.user_id), 0) * 100, 2
    ) AS conversion_rate_pct
FROM events e
WHERE e.created_at >= CURRENT_DATE - INTERVAL '90 days'
GROUP BY 1
WITH DATA;

CREATE UNIQUE INDEX ON mv_dashboard_metrics (day);
-- No separate descending index is needed: a single-column B-tree
-- can be scanned in either direction for ORDER BY day DESC.

Your dashboard API: SELECT * FROM mv_dashboard_metrics ORDER BY day DESC LIMIT 30.

One query. One index scan. The dashboard that took 28 seconds now loads in under 50 milliseconds.

Pattern 2: Pre-computed API responses

Pre-compute the exact JSON structure your API returns, so the endpoint performs a single indexed lookup with no aggregation or joining at request time.

mv_product_api.sql
CREATE MATERIALIZED VIEW mv_product_api AS
SELECT
    p.id,
    jsonb_build_object(
        'id', p.id,
        'name', p.name,
        'slug', p.slug,
        'price', p.price,
        'currency', p.currency,
        'category', jsonb_build_object(
            'id', c.id, 'name', c.name, 'slug', c.slug
        ),
        'rating', ROUND(COALESCE(AVG(r.rating), 0), 2),
        'review_count', COUNT(r.id),
        'in_stock', p.inventory_count > 0
    ) AS api_response
FROM products p
JOIN categories c ON c.id = p.category_id
LEFT JOIN reviews r ON r.product_id = p.id
GROUP BY p.id, p.name, p.slug, p.price, p.currency,
         p.inventory_count, c.id, c.name, c.slug
WITH DATA;

CREATE UNIQUE INDEX ON mv_product_api (id);

Your API handler: SELECT api_response FROM mv_product_api WHERE id = $1.

The response is pre-built. The JSON is pre-serialized by PostgreSQL. No ORM, no serialization library, no N+1 queries assembling the response one relationship at a time. Sub-millisecond. No Redis required.

Pattern 3: Denormalized search index

Combine data from multiple tables into a single searchable materialized view with a GIN-indexed tsvector. This replaces Elasticsearch for most search use cases, using PostgreSQL's built-in full-text search.

mv_search_index.sql
CREATE MATERIALIZED VIEW mv_search_index AS
SELECT
    p.id AS product_id,
    p.name,
    p.price,
    c.name AS category_name,
    setweight(to_tsvector('english', coalesce(p.name, '')), 'A') ||
    setweight(to_tsvector('english', coalesce(p.description, '')), 'B') ||
    setweight(to_tsvector('english', coalesce(string_agg(t.name, ' '), '')), 'C') ||
    setweight(to_tsvector('english', coalesce(c.name, '')), 'D')
    AS search_vector
FROM products p
LEFT JOIN categories c ON c.id = p.category_id
LEFT JOIN product_tags pt ON pt.product_id = p.id
LEFT JOIN tags t ON t.id = pt.tag_id
GROUP BY p.id, p.name, p.price, p.description, c.name
WITH DATA;

CREATE UNIQUE INDEX ON mv_search_index (product_id);
CREATE INDEX ON mv_search_index USING GIN (search_vector);

Search query: SELECT product_id, name, price, ts_rank(search_vector, q) AS rank FROM mv_search_index, plainto_tsquery('english', 'wireless headphones') q WHERE search_vector @@ q ORDER BY rank DESC LIMIT 20.

Weighted full-text search across product names, descriptions, tags, and categories — with relevance ranking — at GIN-index speed. No Elasticsearch instance to manage.
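If you also want highlighted snippets in the results, the same query can feed ts_headline — a sketch reusing the view above (the output column aliases are illustrative):

```sql
-- Ranked search with the matched terms highlighted in the product name.
SELECT s.product_id,
       ts_headline('english', s.name, q) AS highlighted_name,
       s.price,
       ts_rank(s.search_vector, q) AS rank
FROM mv_search_index s,
     plainto_tsquery('english', 'wireless headphones') q
WHERE s.search_vector @@ q
ORDER BY rank DESC
LIMIT 20;
```

ts_headline wraps matching lexemes in `<b>` tags by default; only columns stored in the view (here, name) can be highlighted, which is one reason to include the display columns in the view alongside the vector.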

Pattern 4: Time-based aggregation rollups

Create materialized views at multiple granularities, each refreshing at an interval appropriate to its resolution:

chained_rollups.sql
-- Daily metrics: refresh every 15 minutes
CREATE MATERIALIZED VIEW mv_metrics_daily AS
SELECT date_trunc('day', created_at)::date AS day, ...
FROM ...  -- your raw events table
GROUP BY 1;

-- Weekly rollup: refresh every hour, built FROM the daily MV
CREATE MATERIALIZED VIEW mv_metrics_weekly AS
SELECT date_trunc('week', day)::date AS week,
       SUM(revenue) AS revenue, SUM(events) AS events, ...
FROM mv_metrics_daily
GROUP BY 1;

-- Monthly rollup: refresh once daily at 2 AM
CREATE MATERIALIZED VIEW mv_metrics_monthly AS
SELECT date_trunc('month', week)::date AS month, ...
FROM mv_metrics_weekly
GROUP BY 1;

Chained materialized views. The critical rule: refresh the base views before the dependent ones. If the daily view is stale, the weekly and monthly views will also be stale, regardless of their own refresh timing.
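In script form, the rule is simply top-to-bottom ordering — a sketch assuming the three views above, and assuming each has the unique index that CONCURRENTLY requires:

```sql
-- refresh_rollups.sql — order matters: base view first, dependents after,
-- so each level reads the freshly refreshed level below it.
REFRESH MATERIALIZED VIEW CONCURRENTLY mv_metrics_daily;
REFRESH MATERIALIZED VIEW CONCURRENTLY mv_metrics_weekly;
REFRESH MATERIALIZED VIEW CONCURRENTLY mv_metrics_monthly;
```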

Pattern 5: Poor man's CQRS

Use materialized views as read models — denormalized, pre-computed, optimized for specific query patterns — while keeping your normalized tables as write models. Writes go to the normalized tables through your ORM. Reads come from materialized views through the same ORM (mapped as read-only models).

This provides approximately 80% of the benefits of Command Query Responsibility Segregation at approximately 20% of the complexity. No event sourcing, no separate read databases, no eventual consistency orchestration. Just PostgreSQL, doing what it does well.
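A minimal sketch of the split, with hypothetical table and view names: writes go to the normalized tables as usual, while each read model is a materialized view shaped for one screen.

```sql
-- Write model: normalized tables, touched by the ORM as usual.
INSERT INTO orders (customer_id, total) VALUES (42, 99.00);

-- Read model: one materialized view per query pattern — here, a
-- hypothetical customer-summary screen.
CREATE MATERIALIZED VIEW mv_customer_summary AS
SELECT c.id,
       c.name,
       COUNT(o.id)               AS order_count,
       COALESCE(SUM(o.total), 0) AS lifetime_value
FROM customers c
LEFT JOIN orders o ON o.customer_id = c.id
GROUP BY c.id, c.name
WITH DATA;

CREATE UNIQUE INDEX ON mv_customer_summary (id);
```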

By the Numbers: Materialized View Performance in Production

| Scenario | Before (Raw Query) | After (Materialized View) | Improvement | Source |
| --- | --- | --- | --- | --- |
| Rails daily sales (100K users, 1M orders, 5M activities) | 7,100 ms | 7 ms | 1,000x | sngeth.com, 2025 |
| Rails category revenue (same dataset) | 3,400 ms | 0.368 ms | 9,252x | sngeth.com, 2025 |
| Enterprise reporting dashboard (production) | 28,000 ms | 180 ms | 155x | Stormatics |
| Spring Boot application (500M orders) | 7,200 ms | 6 ms | 1,200x | vinsguru |
| PostgreSQL docs spell-check example (480K rows) | 188 ms | sub-1 ms | ~200x | PostgreSQL documentation |

Every one of these improvements required a single CREATE MATERIALIZED VIEW statement, a CREATE UNIQUE INDEX, and a scheduled REFRESH. No new services. No new infrastructure. No application code changes beyond mapping a model to the view.

Questions the Thorough Reader Will Ask

How large can a materialized view practically be?

PostgreSQL's hard limit for any relation is 32 terabytes. In practice, the limit is your refresh time and your disk budget, not PostgreSQL's architecture. Materialized views with millions of rows work excellently — refreshing in seconds to low single-digit minutes. Hundreds of millions of rows: refresh becomes the bottleneck (minutes to tens of minutes), but read performance remains fast because reads are index scans against stored data, unaffected by data volume beyond what the index must traverse.

One team at Applaudience scaled to 1.2 billion records per month using materialized views before the refresh times — which had grown from seconds to hours — prompted a move to custom stored procedures.

What happens if the refresh fails partway through?

REFRESH MATERIALIZED VIEW is fully transactional. If the underlying query fails — division by zero, out of memory, query timeout, any error — the entire operation rolls back. The old data remains intact and unchanged. No partial state is ever visible to readers. This is true for both standard and concurrent refresh.

This is, I should note, one of the most important properties of materialized views. You can schedule aggressive refresh intervals with confidence that a failure will not corrupt or partially update the view. The worst case is that the view remains at its previous refresh state until the next successful refresh.
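A contrived demonstration of the rollback behavior (the table and view names are hypothetical):

```sql
CREATE TABLE readings (value int);
INSERT INTO readings VALUES (10);

-- Initial population succeeds: the view holds inverse = 10.
CREATE MATERIALIZED VIEW mv_inverse AS
SELECT 100 / value AS inverse FROM readings
WITH DATA;

-- A poison row makes the defining query fail on refresh.
INSERT INTO readings VALUES (0);
REFRESH MATERIALIZED VIEW mv_inverse;
-- ERROR: division by zero
-- The refresh rolls back: SELECT * FROM mv_inverse still returns 10.
```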

How much disk space does a materialized view require?

The same as a table containing the same data. Use pg_total_relation_size('mv_daily_revenue') for the complete answer, including all indexes. For materialized views refreshed with CONCURRENTLY, the actual space usage may temporarily reach up to twice the data size between VACUUM cycles, because dead tuples from the diff algorithm accumulate until VACUUM reclaims them.

Budget accordingly. If your materialized view contains 5 GB of data with 2 GB of indexes, the steady-state size is 7 GB. Between VACUUM cycles after a concurrent refresh, it may briefly approach 12 GB.
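The sizing queries, as a sketch (the view name echoes an earlier example):

```sql
-- Heap only, versus heap plus every index on the view.
SELECT pg_size_pretty(pg_relation_size('mv_daily_revenue'))       AS heap_only,
       pg_size_pretty(pg_total_relation_size('mv_daily_revenue')) AS with_indexes;
```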

Can I build a materialized view on top of another materialized view?

Yes. Chained materialized views are a valid and useful pattern for progressive aggregation — raw data feeding daily summaries, daily summaries feeding weekly rollups, weekly rollups feeding monthly reports. Each level compresses the data further and refreshes less frequently.

The critical operational rule: always refresh base materialized views before their dependents. If the daily view hasn't been refreshed, the weekly view built on top of it will compute stale results regardless of its own refresh timing. Chapter 5 addresses dependency ordering in refresh schedules.

Should I use a materialized view or Redis for this?

If the data involves complex aggregation, multi-table joins, or pre-computed summaries that change on a schedule of minutes or hours — materialized view. The data stays inside your database, the consistency is guaranteed by REFRESH, and your ORM reads from it like any table.

If the data is a simple key-value lookup requiring sub-millisecond latency at tens of thousands of concurrent requests per second — Redis. This book's thesis is not that Redis is unnecessary in all cases. It is that Redis is unnecessary in most cases, and materialized views handle the cases most commonly assigned to Redis.

Chapter 6 explores this boundary in detail.

That is the foundation. You know what materialized views are, how they work under the hood, how to create and index them, what they cannot do, and what they have achieved in production.

But I have been carefully avoiding the question I know you are asking. You have been patient, and I appreciate it.

How do you keep them fresh? How do you schedule the refresh? How do you monitor whether it's working? How do you handle failures? How do you prevent two processes from refreshing simultaneously? How do you manage the dead tuples that CONCURRENTLY produces? What if the data needs to be fresher than a cron schedule allows?

These are excellent questions. Every one of them has a well-understood answer. Chapter 5 provides all of them.

If you'll follow me — I believe you'll find it is the most practically valuable chapter in this book.