
Gold Lapel vs Elasticsearch

Thirteen Methods, Zero Additional Infrastructure

The Waiter of Gold Lapel · Updated Apr 6, 2026 · Published Apr 5, 2026 · 14 min read
Elasticsearch is a fine search engine, and I do not say that lightly. But when the search index lives in a separate building from the data it indexes, one begins to wonder whether the commute is strictly necessary.

If you'll permit me a moment of candour — PostgreSQL's search capabilities run deeper than most teams have had occasion to discover. Full-text search with language-aware stemming has been part of the core database since 2008. Fuzzy matching, phonetic search, and vector similarity are each a single extension away. These are not workarounds or approximations. They are mature, well-indexed subsystems that have been serving production search for years.

Most teams were never told. The tutorials recommend Elasticsearch. The bootcamps teach Elasticsearch. This is understandable — Elasticsearch is genuinely excellent software that earned its reputation. The question worth asking is not "why did we add it?" but "would we still, knowing what PostgreSQL provides?"

Gold Lapel was built to close the gap entirely. Thirteen search methods across seven language wrappers, each mapped to a specific Elasticsearch capability. Full-text search, fuzzy matching, phonetic search, vector similarity, autocomplete, aggregations, reverse search, custom analyzers, and relevance debugging — all backed by PostgreSQL. The proxy observes your search queries and auto-creates the optimal index type. No configuration. No search cluster. No data synchronization pipeline.

The arrangement I find in most applications

Allow me to describe the architecture I encounter most often. Your application writes to PostgreSQL — it is the source of truth. A CDC pipeline (Debezium, perhaps, or Kafka Connect) streams changes to Elasticsearch. Or your application writes to both systems, maintaining consistency through careful coordination. Elasticsearch holds its own copy of your data in its own format with its own index mappings.

Your team now operates two stateful systems. Two deployment pipelines. Two monitoring stacks. Two backup strategies. A JVM-based cluster that requires heap sizing, shard management, and careful attention during upgrades. And a synchronization pipeline between them that must be monitored for lag, errors, and consistency drift.

This arrangement is entirely reasonable — Elasticsearch is genuinely capable, and the search experience it provides is often excellent. But it is worth taking a quiet inventory of which capabilities your application actually relies on. Most teams, when they look closely, find they use three to five of Elasticsearch's features: full-text search, perhaps fuzzy matching, perhaps autocomplete, perhaps faceted counts. For precisely those features, PostgreSQL provides mature, well-indexed equivalents — and Gold Lapel wraps them in an API that feels familiar without the infrastructure.

What Gold Lapel provides — thirteen methods

Gold Lapel's search API provides 13 methods across all 7 language wrappers (Python, JavaScript, Ruby, Go, Java, PHP, .NET). Each method maps to a specific Elasticsearch capability. The wrapper generates SQL; the proxy detects the query pattern and auto-creates the optimal index.

| Elasticsearch feature | Gold Lapel method | Notes |
| --- | --- | --- |
| Match query | search() | Full-text with stemming, stop words, 30+ languages |
| Multi-match query | search(conn, t, ["col1", "col2"], q) | Multi-column search with field weighting |
| Fuzzy query | search_fuzzy() | Trigram similarity — handles typos naturally |
| Phonetic plugin | search_phonetic() | Soundex + Double Metaphone (built into PostgreSQL) |
| kNN search | similar() | pgvector HNSW — same algorithm as ES dense vectors |
| Completion suggester | suggest() | ILIKE prefix + trigram ranking |
| Terms aggregation | facets() | Value counts with optional search filter |
| Metric aggregations | aggregate() | count, sum, avg, min, max with grouping |
| Percolator | percolate_add() / percolate() / percolate_delete() | Store tsquery, match tsvector against them |
| Custom analyzer | create_search_config() | PostgreSQL text search configuration |
| _analyze API | analyze() | Full tokenization pipeline via ts_debug() |
| _explain API | explain_score() | Match/score/headline breakdown for a specific document |
| Highlighting | search(..., highlight=true) | ts_headline with configurable markers |

Every method returns all table columns plus a _score field for relevance ranking. All identifiers are validated against injection. Extensions are created lazily on first use — the developer never thinks about CREATE EXTENSION.
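To make that contract concrete — the rows below are invented for illustration, not output from any real table — a caller can rank results client-side by the _score field:

```python
# Hypothetical rows: every search method returns the table's columns plus _score.
results = [
    {"id": 1, "title": "Index basics", "_score": 0.42},
    {"id": 3, "title": "Tuning PostgreSQL", "_score": 0.87},
]

# Highest relevance first.
ranked = sorted(results, key=lambda row: row["_score"], reverse=True)
print(ranked[0]["title"])  # → Tuning PostgreSQL
```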

The proxy layer — automatic indexing

The search methods generate SQL patterns that the proxy recognizes. When the proxy sees a pattern repeated, it creates the optimal index:

  • tsvector @@ in WHERE → GIN index on to_tsvector('english', column)
  • LIKE / ILIKE in WHERE → GIN trigram index (gin_trgm_ops)
  • soundex() / dmetaphone() equality → B-tree expression index
  • vector <=> or <-> in ORDER BY → HNSW index (cosine or L2)

No index management. No PUT /articles/_mapping. No shard sizing. The proxy handles it.
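The mapping above can be sketched as a lookup from detected pattern to index DDL. This is a simplified illustration of the behaviour described in this section, not Gold Lapel's actual proxy code — the function name and pattern labels are invented:

```python
def index_for_pattern(table: str, column: str, pattern: str) -> str:
    """Return the CREATE INDEX statement for a detected query pattern (sketch)."""
    if pattern == "tsvector_match":   # tsvector @@ in WHERE
        return (f"CREATE INDEX ON {table} "
                f"USING gin (to_tsvector('english', {column}))")
    if pattern == "ilike":            # LIKE / ILIKE in WHERE
        return f"CREATE INDEX ON {table} USING gin ({column} gin_trgm_ops)"
    if pattern == "phonetic":         # soundex() / dmetaphone() equality
        return f"CREATE INDEX ON {table} (soundex({column}))"
    if pattern == "vector_knn":       # <=> or <-> in ORDER BY
        return f"CREATE INDEX ON {table} USING hnsw ({column} vector_cosine_ops)"
    raise ValueError(f"unrecognized pattern: {pattern}")
```

In the real proxy, detection happens by observing repeated query shapes, and identifiers are validated before they reach any DDL.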

Feature comparison

Both tools have genuine strengths. Elasticsearch's distributed architecture and analyzer ecosystem are formidable. Gold Lapel's advantage lies in simplicity — the same search capabilities without a second system to operate. I have tried to represent both fairly.

| Capability | Gold Lapel | Elasticsearch |
| --- | --- | --- |
| Full-text search | search() — tsvector + ts_rank + GIN | Match query + inverted index (Lucene) |
| Fuzzy matching | search_fuzzy() — pg_trgm trigram similarity | Fuzzy query (Levenshtein edit distance) |
| Phonetic search | search_phonetic() — soundex + dmetaphone | Phonetic analysis plugin |
| Vector similarity (semantic) | similar() — pgvector HNSW | kNN dense vector search |
| Autocomplete | suggest() — ILIKE prefix + trigram ranking | Completion suggester (FST) — purpose-built for sub-ms typeahead |
| Faceted search | facets() — GROUP BY + COUNT with search filter | Terms aggregation — excels at multi-level nested facets |
| Metric aggregations | aggregate() — count, sum, avg, min, max | Metric aggregations (avg, sum, min, max, cardinality) |
| Reverse search (percolator) | percolate_add() + percolate() + percolate_delete() | Percolator API |
| Custom analyzers | create_search_config() — PG text search configurations | Deeply composable analyzer pipelines (char filters, tokenizers, token filters) |
| Tokenization debugging | analyze() — ts_debug() pipeline inspection | _analyze API |
| Relevance debugging | explain_score() — match, rank, headline breakdown | _explain API |
| Highlighting | search(..., highlight=true) — ts_headline with tags | Multiple highlighter strategies (unified, plain, FVH) |
| Distributed scaling | Single node; Citus for horizontal sharding | Native sharding + replication — purpose-built for large corpora |
| Data consistency | Search hits live data — always transactionally consistent | Near real-time (~1s refresh); sync pipeline required |
| Auto-indexing | Automatic — proxy detects patterns, creates optimal index | Manual index mapping and configuration |
| Row-level security | Search results respect per-user access control | Requires separate authorization layer |
| Infrastructure | Single binary alongside your app | JVM cluster with dedicated monitoring and operations |

The benchmark — if I may be specific

The numbers that follow are from a controlled benchmark. I present them not to diminish Elasticsearch — which is, as I have noted, excellent software — but to demonstrate what PostgreSQL achieves when properly attended to.

Methodology

A corpus of 10,000 documents. PostgreSQL 16.13 and Elasticsearch 8.17.0, running on the same Linux x86_64 machine. 100 iterations per method, 10 warmup iterations discarded. Gold Lapel measured with a warm cache — the condition your application experiences after the first request. Every number below is a median.
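For readers who want to reproduce the arithmetic, the warmup-and-median procedure reduces to a few lines. The helper below is my own sketch of that methodology, not the benchmark harness itself:

```python
import statistics

def summarize(latencies_ms, warmup=10):
    """Drop the warmup iterations, then report the median of the rest."""
    steady = latencies_ms[warmup:]
    return statistics.median(steady)

# Ten warm-up samples discarded, median taken over the remainder.
print(summarize([100.0] * 10 + [5.0, 7.0, 6.0]))  # → 6.0
```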

Read performance — eleven cacheable methods

Gold Lapel's search operates through three tiers of optimization: auto-created indexes (GIN, HNSW, trigram, expression), materialized views for complex queries, and an L1 native cache that serves repeated queries in microseconds from in-process memory — no network hop, no serialization, no round trip to a search cluster.

| Method | PostgreSQL | Elasticsearch | Gold Lapel | GL vs ES |
| --- | --- | --- | --- | --- |
| Full-text search | 26.91 ms | 26.18 ms | 4.68 ms | 5.6x faster |
| Fuzzy search | 65.82 ms | 10.55 ms | 7.95 ms | 1.3x faster |
| Phonetic search | 2.35 ms | 6.57 ms | 4.09 ms | 1.6x faster |
| Autocomplete | 1.14 ms | 2.93 ms | 1.95 ms | 1.5x faster |
| Vector kNN | 0.66 ms | 4.21 ms | 2.11 ms | 2.0x faster |
| Facets | 5.35 ms | 1.89 ms | 0.48 ms | 3.9x faster |
| Facets (filtered) | 0.69 ms | 3.13 ms | 2.06 ms | 1.5x faster |
| Aggregate | 5.68 ms | 2.39 ms | 0.47 ms | 5.1x faster |
| Aggregate (grouped) | 5.73 ms | 1.78 ms | 0.50 ms | 3.6x faster |
| Analyze | 0.57 ms | 1.32 ms | 1.12 ms | 1.2x faster |
| Explain score | 0.58 ms | 2.20 ms | 0.83 ms | 2.6x faster |

Eleven cacheable methods. Gold Lapel is faster on all eleven. Elasticsearch won zero.

The numbers worth pausing on

Full-text search: 4.68 ms vs 26.18 ms. 5.6 times faster. This is the query most teams add Elasticsearch to answer, and it is the one where the margin is widest. I trust that figure speaks for itself.

Aggregations: 0.47 ms vs 2.39 ms. 5.1 times faster for metric aggregations. 3.6 times faster for grouped aggregations. Elasticsearch's aggregation framework is among its most celebrated features — terms buckets, metric pipelines, the full apparatus. Gold Lapel serves the same results from cache in under half a millisecond.

Facets: 0.48 ms vs 1.89 ms. 3.9 times faster. Faceted navigation is a category Elasticsearch was purpose-built for — the terms aggregation is one of its signature capabilities. The L1 cache changes the arithmetic considerably.

Vector kNN: 2.11 ms vs 4.21 ms. Twice as fast. Both use HNSW indexes for approximate nearest neighbor search — the same algorithm, different runtimes. pgvector's implementation holds up well, and the cache removes the remaining gap.
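For the curious, the quantity both engines rank by here is plain cosine distance — pgvector exposes it as the <=> operator. A pure-Python rendering of that arithmetic, illustrative only (production queries use the HNSW index, not a loop):

```python
import math

def cosine_distance(a, b):
    """1 - cosine similarity: the quantity pgvector's <=> operator orders by."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / (norm_a * norm_b)

# Identical vectors are distance 0; orthogonal vectors are distance 1.
print(cosine_distance([1.0, 0.0], [1.0, 0.0]))  # → 0.0
```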

Where Elasticsearch wins — and it does

I should be forthright about what this table does not show. Elasticsearch outperforms raw PostgreSQL on several methods — fuzzy search (10.55 ms vs 65.82 ms), facets (1.89 ms vs 5.35 ms), and both aggregation types. Lucene's inverted index and purpose-built data structures are genuinely fast. Gold Lapel's advantage comes from the L1 cache layer — it is the cache, not the raw query engine, that produces these numbers.

This is not a sleight of hand. It is the architecture working as designed. Your application sends a search query; Gold Lapel's proxy layer auto-indexes the underlying PostgreSQL query, and the L1 cache serves repeated results from in-process memory in microseconds — no network hop to a search cluster, no Lucene segment reads, no JVM garbage collection pauses. The query is genuinely fast on a miss (thanks to auto-indexing), and faster still on a hit.
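The effect of an in-process cache is easy to demonstrate. The sketch below uses functools.lru_cache as a stand-in for Gold Lapel's L1 cache — the real cache's invalidation and sizing are not shown, and run_query is a dummy standing in for the database round trip:

```python
from functools import lru_cache

def run_query(query: str) -> tuple:
    """Stand-in for the round trip to PostgreSQL."""
    run_query.calls += 1
    return ("row-1", "row-2")

run_query.calls = 0

@lru_cache(maxsize=1024)
def cached_search(query: str) -> tuple:
    # A miss pays for the query once; every repeat is served from process memory.
    return run_query(query)

cached_search("SELECT ... WHERE tsv @@ to_tsquery('database & performance')")
cached_search("SELECT ... WHERE tsv @@ to_tsquery('database & performance')")
print(run_query.calls)  # → 1: the second call never left the process
```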

Write operations

Write methods — index, update, delete, bulk operations — are uncacheable by nature and pass through Gold Lapel unchanged. PostgreSQL handles these directly, and the results are instructive: 3.9 to 101 times faster than Elasticsearch for write operations. This is not Gold Lapel's doing. It is PostgreSQL doing what PostgreSQL does — transactional writes to a B-tree are simply faster than HTTP-based document indexing with segment merges. Your data stays consistent at every moment, without the near-real-time refresh delay that Elasticsearch requires.

What about distributed search at scale?

I should be forthright about where the comparison changes character.

PostgreSQL is a single-node database by default. For search corpora under 10 million documents — which is most applications — a single well-indexed PostgreSQL node provides search latency comparable to Elasticsearch. The performance gap at moderate scale is far smaller than most teams expect.

Beyond that scale, Elasticsearch's native sharding distributes the search workload across multiple nodes. This is a genuine architectural advantage for very large corpora — multi-tenant search platforms, massive product catalogs, centralized document repositories.

PostgreSQL has answers here, and they are worth knowing:

  • Table partitioning distributes data across partitions that PostgreSQL searches in parallel.
  • Read replicas distribute query load. Gold Lapel supports read replica routing with read-after-write protection.
  • Citus provides horizontal sharding for PostgreSQL. Gold Lapel auto-detects Citus at startup — auto-created indexes propagate to all shards automatically. No code changes.
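The read-after-write protection mentioned above can be approximated with a grace window: after a session writes, its reads pin to the primary until the replicas have had time to catch up. A minimal sketch — my illustration, not Gold Lapel's actual routing code:

```python
import time

class ReplicaRouter:
    """Route reads to a replica unless the session wrote very recently."""

    def __init__(self, grace_seconds: float = 1.0):
        self.grace = grace_seconds
        self.last_write: dict[str, float] = {}   # session id -> write timestamp

    def record_write(self, session: str) -> None:
        self.last_write[session] = time.monotonic()

    def target(self, session: str) -> str:
        wrote_at = self.last_write.get(session)
        if wrote_at is not None and time.monotonic() - wrote_at < self.grace:
            return "primary"   # read-after-write: don't risk a stale replica
        return "replica"
```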

The scaling path exists and is well-trodden. But I would not suggest it is as seamless as Elasticsearch's native sharding for search-specific workloads at truly massive scale. At 50 million documents and above, if search is your primary workload, Elasticsearch's distributed architecture is purpose-built for that problem.

When to keep Elasticsearch — and I do mean this

Elasticsearch is excellent software — genuinely well-engineered and purpose-built for workloads where it excels. I should be direct about the scenarios where it earns its infrastructure cost.

Log and event analytics. The ELK stack (Elasticsearch, Logstash, Kibana) exists because centralized logging requires append-only ingestion at massive throughput with time-series aggregations and dashboards. This is infrastructure observability — a different problem category from application search. Gold Lapel addresses application search. It does not compete with the ELK stack.

Search at truly massive scale. If your search corpus exceeds 50 million documents and is growing, and search latency is your primary concern, Elasticsearch's distributed architecture was designed for this. PostgreSQL with Citus can handle large-scale search, but Elasticsearch's sharding is purpose-built.

Complex text analysis pipelines. Elasticsearch's analyzer framework is deeply composable — character filters, tokenizers, and token filters can be chained into custom analysis pipelines. Edge-ngram tokenization for search-as-you-type, compound word decomposition for German, language detection. PostgreSQL has text search configurations and dictionaries, and Gold Lapel's create_search_config() wraps the common cases, but Elasticsearch's analyzer ecosystem is broader.

Mature Elasticsearch infrastructure. If your team has deep Elasticsearch expertise, monitoring is solid, and the operational cost is genuinely manageable — working infrastructure has earned the benefit of the doubt.

My observation is narrower than it may appear. Elasticsearch excels at the workloads described above. For application search — the use case that brings most teams to Elasticsearch in the first place — PostgreSQL's native capabilities, wrapped in Gold Lapel's API, provide a simpler path to the same result. The infrastructure stays small. The data stays in one place. The search stays fast.

The migration

Before — Elasticsearch
```python
# Your app → Elasticsearch → PostgreSQL

from elasticsearch import Elasticsearch
es = Elasticsearch("http://localhost:9200")

# Index a document (synced from PostgreSQL)
es.index(index="articles", id=doc_id, body={
    "title": article.title,
    "body": article.body,
    "category": article.category,
})

# Search
results = es.search(index="articles", body={
    "query": {
        "multi_match": {
            "query": "database performance",
            "fields": ["title^2", "body"],
            "fuzziness": "AUTO"
        }
    },
    "highlight": {"fields": {"body": {}}},
    "aggs": {"categories": {"terms": {"field": "category"}}}
})

# Elasticsearch also requires: a sync pipeline (CDC, dual writes,
# or batch ETL) to keep the search index current with PostgreSQL.
```
After — Gold Lapel
```python
# Your app → Gold Lapel → PostgreSQL
# One database. Search hits live data.

import goldlapel
gl = goldlapel.start("postgresql://user:pass@localhost:5432/mydb")

# Full-text search with highlighting
results = gl.search("articles", ["title", "body"],
    "database performance", highlight=True)

# Fuzzy search (typo-tolerant)
results = gl.search_fuzzy("articles", "title", "performnce")

# Faceted search — category counts for matching results
facets = gl.facets("articles", "category",
    query="database", query_column=["title", "body"])

# Semantic search with vector similarity
results = gl.similar("articles", "embedding", query_vector)

# No sync pipeline. No JVM. No shard management.
# Search results are transactionally consistent — always.
```

The migration path is gentler than most teams expect, because your data never left PostgreSQL. Elasticsearch held a search-optimized projection — and the source of truth was always your database. The sync pipeline, the index mappings, the JVM cluster — these become optional when search queries can be answered where the data already lives.

Gold Lapel learns your query patterns on the first query. No index mapping to configure. No analyzer definitions to write. No warm-up period. The proxy observes, indexes, and caches — the developer writes application code.

For teams migrating gradually, the path is familiar: dual-read (serve search from both systems, compare results), then cut over, then decommission the ES cluster. Your application code changes from Elasticsearch Query DSL to Gold Lapel method calls — one method per feature, one line per search.
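The dual-read step is worth sketching, since it is where confidence is built. The helper below is illustrative — the function and parameter names are mine, not part of either API:

```python
def dual_read(query, es_search, gl_search, log_mismatch):
    """Serve from Elasticsearch, shadow-read Gold Lapel, log any divergence."""
    es_ids = [hit["id"] for hit in es_search(query)]
    gl_ids = [row["id"] for row in gl_search(query)]
    if es_ids != gl_ids:
        log_mismatch(query, es_ids, gl_ids)
    return es_ids   # keep serving the incumbent until the logs go quiet
```

Once the mismatch log stays empty over a representative traffic window, cut over by returning the Gold Lapel results instead, then decommission the cluster.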

Verdict

Elasticsearch is remarkable software that has served teams well for over a decade. For infrastructure observability, for search at genuinely massive scale, for complex text analysis pipelines — it earns its place, and I would not suggest otherwise.

For application search — the use case that brings most teams to Elasticsearch — Gold Lapel provides the same capabilities without the second system. Thirteen methods across seven languages. Full-text search, fuzzy matching, autocomplete, faceted navigation, vector similarity, reverse search, and relevance debugging — all backed by the database your application already trusts.

The search engine most applications need is the database they already have. Gold Lapel ensures it performs like one.


I have written at some length on this very theme. The closing chapter of You Don't Need Redis introduces the equation that drives the forthcoming volume You Don't Need Elasticsearch — which examines each of the thirteen methods in full detail, the PostgreSQL mechanisms beneath them, and the migration path from one to the other. In the meantime, I have prepared framework-specific guides for Django full-text search and Laravel Scout with tsvector that put these patterns into practice.