
Python

One import, one connection. Microsecond reads from L1 cache.

Install

pip install goldlapel

# You also need a Postgres driver — any of these works:
pip install psycopg2-binary   # most common
pip install psycopg            # psycopg3 (newer)
pip install asyncpg            # async Python apps

Gold Lapel is driver-agnostic — install whichever Postgres driver you prefer and point it at gl.url.

Quick Start

Sync

import goldlapel
import psycopg2

# Spawn the proxy in front of your upstream DB — returns a GoldLapel instance
gl = goldlapel.start("postgresql://user:pass@localhost:5432/mydb")

# Use gl.url with any Postgres driver for raw SQL
conn = psycopg2.connect(gl.url)
cur = conn.cursor()
cur.execute("SELECT * FROM users WHERE id = %s", (42,))

# Or use Gold Lapel's wrapper methods directly — no conn arg needed
hits = gl.search("articles", "body", "postgres tuning")
gl.doc_insert("events", {"type": "signup", "user": "steve"})

# Clean up (happens automatically on process exit too)
gl.stop()

Repeated reads serve in microseconds from the built-in L1 cache.

Context manager

with goldlapel.start("postgresql://...") as gl:
    results = gl.search("articles", "body", "query")
# proxy stopped automatically on exit

Async

from goldlapel.asyncio import start
import asyncpg

gl = await start("postgresql://user:pass@localhost:5432/mydb")

conn = await asyncpg.connect(gl.url)
rows = await conn.fetch("SELECT * FROM users WHERE id = $1", 42)

hits = await gl.search("articles", "body", "postgres tuning")
await gl.doc_insert("events", {"type": "signup"})

await gl.stop()

Async context manager

from goldlapel.asyncio import start

async with start("postgresql://...") as gl:
    hits = await gl.search("articles", "body", "query")

Gold Lapel prints the proxy and dashboard URLs on startup to stderr. Access the dashboard programmatically:

gl.dashboard_url  # "http://127.0.0.1:7933" (or None if disabled / not running)

The banner goes to stderr so it never pollutes stdout-piped application output. Pass silent=True to suppress it entirely, which is useful for daemons and structured-log pipelines that also capture stderr.

Transactional coordination

When you want wrapper methods to run inside your own transaction, pass your connection via gl.using(conn) (scoped) or the conn= kwarg (per call):

import psycopg2
gl = goldlapel.start("postgresql://...")
conn = psycopg2.connect(gl.url)
conn.autocommit = False
cur = conn.cursor()

# Scoped: all wrapper methods inside this block use `conn`
with gl.using(conn):
    cur.execute("INSERT INTO orders (total) VALUES (%s)", (99,))
    gl.doc_insert("events", {"type": "order.created"})
    cur.execute("UPDATE inventory SET qty = qty - 1")
    conn.commit()

# Or per-call
gl.doc_insert("events", {"type": "x"}, conn=conn)

Async has the same shape with async with gl.using(conn): ... and the conn= kwarg.

API

gl = goldlapel.start(upstream, *, proxy_port=None, dashboard_port=None, invalidation_port=None, log_level=None, mode=None, license=None, client=None, config_file=None, config=None, extra_args=None, silent=False, mesh=False, mesh_tag=None)

Factory that spawns the proxy and returns a GoldLapel instance. Eagerly opens the wrapper's internal driver connection so wrapper methods are fast from the first call.

  • upstream — your Postgres connection string (e.g. postgresql://user:pass@localhost:5432/mydb)
  • proxy_port — proxy port (default: 7932)
  • dashboard_port — dashboard port (default: proxy_port + 1; set to 0 to disable)
  • invalidation_port — cache-invalidation port (default: proxy_port + 2)
  • log_level — one of trace | debug | info | warn | error
  • mode — operating mode (waiter, bellhop)
  • license — path to the signed license file
  • client — client identifier (for telemetry tagging; defaults to "python")
  • config_file — path to a TOML config file read by the Rust binary
  • config — dict of structured tuning knobs (see config_keys())
  • extra_args — additional raw CLI flags (e.g. ["--threshold-impact", "5000"])
  • silent — suppress the startup banner
  • mesh — opt into the mesh at startup (HQ enforces the license; denial is non-fatal)
  • mesh_tag — optional mesh tag; instances with the same tag cluster together

Promoted top-level kwargs (proxy_port, dashboard_port, log_level, mode, etc.) are not valid keys inside config — passing them there raises ValueError.

goldlapel.asyncio.start(upstream, ...)

Async factory. Returns a GoldLapel instance whose wrapper methods are awaitable. Auto-detects asyncpg (preferred) or psycopg3 async.

gl.url

Proxy connection string — pass to any Postgres driver.

gl.dashboard_url

Dashboard URL (e.g. http://127.0.0.1:7933), or None if not running or the dashboard is disabled.

gl.using(conn)

Context manager that scopes every wrapper call inside the block to conn. Works in both sync and async code. All 54+ wrapper methods also accept a conn= kwarg for one-off overrides.

gl.stop()

Stops the proxy and closes the internal connection. Idempotent. Also runs automatically on process exit.

goldlapel.config_keys()

Returns the set of all valid config key names.

import goldlapel
print(goldlapel.config_keys())

Multiple instances

goldlapel.start() is a factory — each call spawns its own proxy subprocess and returns a fresh instance. Use different ports to run several side by side.

# Each start() call returns a fresh instance — bring as many as you like
gl_primary = goldlapel.start("postgresql://primary/mydb", proxy_port=7932)
gl_replica = goldlapel.start("postgresql://replica/mydb", proxy_port=7942)

gl_primary.search("articles", "body", "query")  # hits primary
gl_replica.search("articles", "body", "query")  # hits replica

gl_primary.stop()
gl_replica.stop()

Configuration

Pass a config dict to start(). Keys use snake_case and map directly to CLI flags (e.g. pool_size becomes --pool-size). The top-level log_level kwarg accepts string levels (trace / debug / info / warn / error) and translates to the binary's verbose flags internally.
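The key-to-flag convention described above can be sketched in a few lines (illustrative only; the real translation happens inside the wrapper):

```python
def config_key_to_flag(key: str) -> str:
    # snake_case config key -> CLI flag, per the documented convention
    return "--" + key.replace("_", "-")

config_key_to_flag("pool_size")         # "--pool-size"
config_key_to_flag("disable_matviews")  # "--disable-matviews"
```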

import goldlapel

gl = goldlapel.start(
    "postgresql://user:pass@localhost/mydb",
    mode="waiter",           # top-level: waiter | bellhop
    log_level="info",        # top-level: trace | debug | info | warn | error
    mesh=True,               # top-level: opt into the mesh at startup
    mesh_tag="prod-east",    # top-level: optional tag; instances with
                             #            the same tag cluster together
    config={                 # structured tuning knobs only
        "pool_size": 50,
        "disable_matviews": True,
        "replica": ["postgresql://user:pass@replica1/mydb"],
    },
)

Unknown keys raise ValueError immediately. See the configuration reference for full coverage of every key.

Environment variables

The binary also reads GOLDLAPEL_PROXY_PORT, GOLDLAPEL_UPSTREAM, and all other GOLDLAPEL_* env vars automatically. Set GOLDLAPEL_BINARY to override the binary location.
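Since the spawned binary reads these at startup, set them before calling goldlapel.start(). A minimal sketch (the variable names come from the docs above; the values and path are examples):

```python
import os

# Must be set before goldlapel.start() so the spawned binary sees them.
os.environ["GOLDLAPEL_PROXY_PORT"] = "7950"
os.environ["GOLDLAPEL_BINARY"] = "/opt/goldlapel/bin/goldlapel"  # example path

# import goldlapel
# gl = goldlapel.start("postgresql://...")  # reads GOLDLAPEL_* at spawn time
```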

Framework & ORM integrations

Gold Lapel is driver-agnostic, so any framework that speaks to Postgres just works against gl.url.

  • Django — integration code ships inside the goldlapel package. Install Django separately: pip install django. See the Django guide.
  • SQLAlchemy — integration code ships inside the goldlapel package. Install separately: pip install sqlalchemy. See the SQLAlchemy guide.
  • FastAPI / Starlette / async apps — use goldlapel.asyncio.start() with any async driver (asyncpg, psycopg3 async).

The goldlapel[django] and goldlapel[sqlalchemy] install extras are gone in v0.2 — install those packages with regular pip. The integration code is always included.
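For async frameworks, a typical place to start and stop the proxy is a lifespan hook. A minimal FastAPI/Starlette sketch follows; the wiring is hypothetical, based on the goldlapel.asyncio API shown earlier, and the goldlapel calls are commented out so the shape is visible without the package installed:

```python
from contextlib import asynccontextmanager

@asynccontextmanager
async def lifespan(app):
    # from goldlapel.asyncio import start
    # gl = await start("postgresql://user:pass@localhost:5432/mydb")
    # app.state.gl = gl
    try:
        yield
    finally:
        # await gl.stop()
        pass

# In a real app: app = FastAPI(lifespan=lifespan)
```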

Upgrading from v0.1

v0.2 is a breaking redesign. The class-based goldlapel.GoldLapel(url).start() shape is replaced by a goldlapel.start(url) factory that returns a GoldLapel instance. Bring your own driver and point it at gl.url.

# v0.1.x (old)
gl = goldlapel.GoldLapel("postgresql://...")
conn = gl.start()
conn.execute("SELECT 1")
gl.stop()

# v0.2 (new) — factory returns an instance, bring your own driver
gl = goldlapel.start("postgresql://...")
conn = psycopg2.connect(gl.url)
conn.cursor().execute("SELECT 1")
gl.stop()

# v0.1 async
gl = goldlapel.GoldLapel("postgresql://...")
await gl.start_async()

# v0.2 async — separate submodule
from goldlapel.asyncio import start
gl = await start("postgresql://...")

  • goldlapel.start(url) returns a GoldLapel instance (previously returned a wrapped connection).
  • goldlapel.start_async moved to goldlapel.asyncio.start.
  • Optional install extras [django] and [sqlalchemy] removed — install those packages directly.
  • All wrapper methods now accept an optional conn= kwarg, and gl.using(conn) provides scoped override.
  • Multiple instances are first-class — each start() call spawns its own proxy.

Utilities

Convenience methods backed by PostgreSQL — 54+ in total, all hanging directly off the gl instance. Tables are auto-created on first use. Every method accepts an optional conn= kwarg for transactional coordination.

Pub/Sub

Backed by PostgreSQL LISTEN/NOTIFY.

# Publish a message to a channel
gl.publish("orders", "new order received")

# Subscribe to a channel — callback fires on each message
gl.subscribe("orders", lambda ch, msg: print(msg))

Queues

Backed by FOR UPDATE SKIP LOCKED.

# Enqueue a job (dict is serialized to JSONB)
gl.enqueue("jobs", {"task": "send_email", "to": "user@example.com"})

# Dequeue the next job — returns dict or None
job = gl.dequeue("jobs")
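The claim-and-skip pattern behind dequeue can be sketched in plain SQL. The table and column names below are assumptions for illustration, not Gold Lapel's actual schema:

```python
# Illustrative SQL for a SKIP LOCKED dequeue: each worker atomically claims
# one unclaimed row, skipping rows other transactions currently hold.
DEQUEUE_SQL = """
UPDATE queue_jobs
SET claimed_at = now()
WHERE id = (
    SELECT id
    FROM queue_jobs
    WHERE claimed_at IS NULL
    ORDER BY id
    FOR UPDATE SKIP LOCKED
    LIMIT 1
)
RETURNING id, payload;
"""
```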

Counters

Backed by INSERT ON CONFLICT (upsert).

# Increment a named counter
gl.incr("page_views", "home")

# Read the current value
count = gl.get_counter("page_views", "home")
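The upsert behind incr can be sketched as follows; the table and column names are assumptions for illustration, not Gold Lapel's actual schema:

```python
# Illustrative SQL for an atomic counter increment via upsert:
# insert the row if missing, otherwise bump the existing value.
INCR_SQL = """
INSERT INTO counters (namespace, name, value)
VALUES (%s, %s, 1)
ON CONFLICT (namespace, name)
DO UPDATE SET value = counters.value + 1
RETURNING value;
"""
```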

Hash Maps

Backed by JSONB columns with atomic upsert.

# Set a field on a hash key
gl.hset("users", "user:1", "name", "Stephen")
gl.hset("users", "user:1", "email", "s@example.com")

# Get a single field
name = gl.hget("users", "user:1", "name")       # "Stephen"

# Get all fields
all_fields = gl.hgetall("users", "user:1")       # {"name": "Stephen", "email": ...}

# Delete a field
gl.hdel("users", "user:1", "email")              # True

Sorted Sets

Backed by ORDER BY and window functions.

# Add a member with a score
gl.zadd("leaderboard", "player1", 100)

# Increment a member's score
gl.zincrby("leaderboard", "player1", 10)

# Top 10 by score — returns [(member, score), ...]
top10 = gl.zrange("leaderboard", 0, 10)

# Rank and score lookups
rank = gl.zrank("leaderboard", "player1")   # 0-based
score = gl.zscore("leaderboard", "player1")

# Remove a member
gl.zrem("leaderboard", "player1")

Document Store

doc_find, doc_insert, doc_update, doc_delete and friends operate on JSONB-backed collections. Tables are auto-created on first use.

Filter operators

doc_find supports the MongoDB filter operators you'd reach for — $elemMatch, $text, $gt, $in, and more.

# $elemMatch — scope multi-condition filters to a single array element
orders = gl.doc_find("orders", {
    "items": {"$elemMatch": {"sku": "ABC-123", "qty": {"$gte": 2}}}
})
# $text — full-text search, document-wide or field-scoped
hits = gl.doc_find("articles", {
    "$text": {"$search": "postgres tuning"}
})

See Appendix D: Filter Operator Reference for the full list, Postgres translations, and index notes.
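The general technique is compiling operator filters into JSONB predicates. A simplified sketch of the idea for flat comparison operators (the operator table and the "doc" column name are assumptions for illustration, not Gold Lapel's actual query compiler):

```python
# Map Mongo-style comparison operators to SQL comparison operators.
OPS = {"$eq": "=", "$gt": ">", "$gte": ">=", "$lt": "<", "$lte": "<="}

def compile_predicate(field: str, op: str, value: float) -> str:
    # Extract the field as text from the JSONB column, cast, compare.
    return f"(doc->>'{field}')::numeric {OPS[op]} {value}"

compile_predicate("qty", "$gte", 2)  # "(doc->>'qty')::numeric >= 2"
```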

Search

Full-text search utilities backed by PostgreSQL tsvector/tsquery. No extensions required.
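A search of this kind typically reduces to a tsvector/tsquery match plus ranking. The SQL below is an illustrative sketch of that pattern; the column and text-search config names are assumptions, not what gl.search actually issues:

```python
# Illustrative tsvector/tsquery ranking query of the kind a full-text
# search method compiles to.
SEARCH_SQL = """
SELECT id,
       ts_rank(to_tsvector('english', body),
               websearch_to_tsquery('english', %(q)s)) AS score
FROM articles
WHERE to_tsvector('english', body) @@ websearch_to_tsquery('english', %(q)s)
ORDER BY score DESC;
"""
```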

Facets

Value counts, optionally filtered by a search query. Like an Elasticsearch terms aggregation.

# Top categories across all articles
results = gl.facets("articles", "category")
# [{"value": "technology", "count": 842}, ...]

# Filtered by a search query
results = gl.facets("articles", "category", query="machine learning", query_column="body")

Aggregations

Metric aggregations (count, sum, avg, min, max) with optional grouping.

# Average order total grouped by region
results = gl.aggregate("orders", "total", "avg", group_by="region")
# [{"region": "us-east", "value": 89.50}, ...]

Custom Search Config

Create a custom text search configuration to use with search methods.

# Create a custom text search config
gl.create_search_config("my_english", copy_from="english")

Percolator

Reverse search — store queries, then match documents against them. Like the Elasticsearch percolate API.

# Store queries for reverse matching
gl.percolate_add("alerts", "breaking-news",
    "breaking news earthquake", metadata={"notify": "slack"})

# Match a document against stored queries
matches = gl.percolate("alerts",
    "A 6.2 magnitude earthquake struck the coast, breaking news.")
# [{"query_id": "breaking-news", "query_text": "...", "_score": 0.12}]

# Remove a stored query
gl.percolate_delete("alerts", "breaking-news")

Analyze

Show the tokenization pipeline for debugging search behavior.

# Show how text is tokenized
tokens = gl.analyze("The quick brown foxes jumped")
# [{"alias": "english_stem", "token": "foxes", "lexemes": ["fox"]}, ...]

Explain Score

Score breakdown for a specific document — why it matched and how strongly.

# Score breakdown for a specific document
result = gl.explain_score("articles", "body",
    "machine learning", id_column="id", id_value=42)
# {"matches": True, "score": 0.0607, "headline": "...to **machine** **learning**..."}

Geospatial

Backed by PostGIS ST_DWithin. Requires the PostGIS extension — see the database setup guide for installation.

# Add a location (requires PostGIS)
gl.geoadd("restaurants", "name", "location", "Pizza Place", -122.4, 37.8)

# Find locations within 5km of a point
nearby = gl.georadius("restaurants", "location", -122.4, 37.8, 5000)

# Distance between two named entries
dist = gl.geodist("restaurants", "location", "name", "Pizza Place", "Burger Joint")