Query Builder
All queries go through views, never tables. pgbo automatically converts camelCase column names to snake_case in SQL, and snake_case results back to camelCase. Result types are fully inferred from the view definition — no manual casts.
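The naming round-trip can be sketched with two small helpers. These are hypothetical, shown only to illustrate the convention — pgbo's internal implementation is not part of these docs and may differ.

```typescript
// Illustrative sketch of the camelCase ↔ snake_case convention.
// camelToSnake maps query-side identifiers to SQL column names;
// snakeToCamel maps result-row keys back.
export function camelToSnake(key: string): string {
  // 'warehouseSlug' → 'warehouse_slug'
  return key.replace(/([a-z0-9])([A-Z])/g, '$1_$2').toLowerCase()
}

export function snakeToCamel(key: string): string {
  // 'tenant_id' → 'tenantId'
  return key.replace(/_([a-z0-9])/g, (_m, c: string) => c.toUpperCase())
}
```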
Connection
import { createDatabase } from '@pgbo/core'
const db = createDatabase({
connectionString: 'postgresql://user@localhost:5432/mydb',
pool: {
min: 2,
max: 10,
idleTimeoutMs: 30000,
connectionTimeoutMs: 5000,
},
})
// When done:
await db.close()

Caching
pgbo ships a pluggable cache layer. Configure a provider on the database, and:
- Queries can opt into read-through caching via `.cached({ tags, ttl? })`
- BO writes (`create`/`update`/`delete`) auto-invalidate by `bo.cacheTags`
import { createDatabase, memoryCache } from '@pgbo/core'
const db = createDatabase({
connectionString: 'postgresql://localhost/mydb',
cache: memoryCache({ maxEntries: 2000, defaultTtl: 300 }), // 5-min default TTL
})
// Read-through — first call hits DB, second returns cached
const rows = await db.from(tileView)
.where({ slug, action })
.cached({ tags: ['tiles', `tenant:${tenantId}`], ttl: 300 })
.execute()
// Auto-invalidation on BO writes
const tileBO = defineBO(tileTable, {
cacheTags: ['tiles'],
actions: { create: {}, update: {}, delete: {} },
})
await tileBO.update(db, ctx, { id: 1, slug: 'changed' })
// → pgbo calls cache.invalidateByTags(['tiles']) automatically

The cache key is derived from the view name, the full SQL, and the bound parameter values. Pass `key: 'custom'` to override it.
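The derivation can be sketched as follows. This is a hypothetical stand-in — the real key function is internal to pgbo and may hash its inputs — but it shows the property that matters: any change to the SQL text or to a bound value must produce a new key.

```typescript
// Hypothetical cache-key sketch: serialize all three inputs together
// so distinct queries never collide on the same key.
export function cacheKey(viewName: string, sqlText: string, params: unknown[]): string {
  return `${viewName}|${sqlText}|${JSON.stringify(params)}`
}
```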
Custom providers
Implement the CacheProvider interface for Redis / Memcached / anything else:
import type { CacheProvider } from '@pgbo/core'
export function redisCache(client: RedisClient): CacheProvider {
return {
async get(key) { /* ... */ },
async set(key, value, tags, ttl) { /* ... */ },
async invalidateByTags(tags) { /* ... */ },
async invalidateByKey(key) { /* ... */ },
}
}

Without a configured cache, .cached() becomes a silent no-op — every call hits the database. Useful for development vs. production parity without code changes.
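To make the provider contract concrete, here is a minimal in-memory implementation with tag-based invalidation and TTL. It is a sketch similar in spirit to the built-in memoryCache (whose eviction and TTL details may differ), with the CacheProvider interface restated locally so the example is self-contained.

```typescript
// Local restatement of the CacheProvider shape for a self-contained example.
interface CacheProvider {
  get(key: string): Promise<unknown | undefined>
  set(key: string, value: unknown, tags: string[], ttl?: number): Promise<void>
  invalidateByTags(tags: string[]): Promise<void>
  invalidateByKey(key: string): Promise<void>
}

export function simpleMemoryCache(): CacheProvider {
  const entries = new Map<string, { value: unknown; tags: string[]; expiresAt?: number }>()
  return {
    async get(key) {
      const e = entries.get(key)
      if (!e) return undefined
      // Lazy expiry: drop the entry on read once its TTL has passed.
      if (e.expiresAt !== undefined && Date.now() > e.expiresAt) {
        entries.delete(key)
        return undefined
      }
      return e.value
    },
    async set(key, value, tags, ttl) {
      entries.set(key, {
        value,
        tags,
        expiresAt: ttl !== undefined ? Date.now() + ttl * 1000 : undefined,
      })
    },
    async invalidateByTags(tags) {
      // Drop every entry that shares at least one tag.
      for (const [key, e] of entries) {
        if (e.tags.some((t) => tags.includes(t))) entries.delete(key)
      }
    },
    async invalidateByKey(key) {
      entries.delete(key)
    },
  }
}
```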
SELECT
// All rows — rows are typed as InferViewRow<typeof warehouseView>
const rows = await db.from(warehouseView).execute()
// With conditions
const active = await db.from(warehouseView)
.where({ status: 'ACTIVE' })
.orderBy('name', 'asc')
.limit(25)
.offset(0)
.execute()
// Multi-column ORDER BY — chain or pass an array
await db.from(docView)
.orderBy('postedAt', 'desc')
.orderBy('documentNumber', 'asc')
.execute()
// → ORDER BY posted_at DESC, document_number ASC
await db.from(docView)
.orderBy([
{ column: 'postedAt', direction: 'desc' },
{ column: 'documentNumber', direction: 'asc' },
])
.execute()
// Count
const total = await db.from(warehouseView).count()
const filtered = await db.from(warehouseView).where({ status: 'ACTIVE' }).count()
// COUNT(DISTINCT col) — for queries where JOINs or translations duplicate parent rows
const distinctSlugs = await db.from(locationView).count({ distinct: 'slug' })
const distinctPairs = await db.from(locationView).count({ distinct: ['slug', 'warehouseSlug'] })

The distinct key is type-checked against the view's row type.
Runtime JOINs
SelectBuilder supports ad-hoc JOINs when a predefined PG view isn't enough:
import { view } from '@pgbo/core/schema'
const tileView = view('tile_view').from(tileTable)
const rows = await db.from(tileView)
.join(appTable, { appId: 'id' }) // tile.app_id = app.id
.select({
tileSlug: 'tile_view.slug',
appSlug: 'app.slug',
appName: 'app.name',
})
.where({ action: 'nav' })
  .execute()

- `.join(table, { localCol: foreignCol })` — INNER JOIN
- `.leftJoin(table, { localCol: foreignCol })` — LEFT JOIN
- `.select({ outputKey: 'table.column' })` — explicit column list with table qualification, output aliased in snake_case
- JOINs are also included in `.count()`
For static JOINs that should live in the schema, define a proper PG view with .join() in the view builder instead.
WHERE
Operators
Plain values are shorthand for equality. For other operators, pass an object:
.where({ name: 'Alice' }) // name = 'Alice'
.where({ name: { eq: 'Alice' } }) // name = 'Alice'
.where({ name: { like: '%ali%' } }) // name LIKE '%ali%'
.where({ name: { ilike: '%ali%' } }) // name ILIKE '%ali%' (case-insensitive)
.where({ id: { in: [1, 2, 3] } }) // id IN (1, 2, 3)
.where({ slug: { any: allowedSlugs } }) // slug = ANY($1) — single bound param
.where({ slug: { notAny: seededSlugs } }) // slug != ALL($1) — single bound param
.where({ age: { gt: 18 } }) // age > 18
.where({ age: { lt: 65 } }) // age < 65
.where({ age: { gte: 18 } }) // age >= 18
.where({ age: { lte: 65 } }) // age <= 65
.where({ age: { between: [18, 65] } }) // age BETWEEN 18 AND 65
.where({ deletedAt: { isNull: true } }) // deleted_at IS NULL
.where({ email: { isNotNull: true } }) // email IS NOT NULL

Multiple keys in a single .where() are joined with AND:
.where({ status: 'ACTIVE', tenantId: 1 })
// WHERE status = $1 AND tenant_id = $2

in vs any / notAny
- Use `in` for static, small, inline lists (e.g. enum values): `{ status: { in: ['ACTIVE', 'PENDING'] } }` compiles to `col IN ($1, $2)`.
- Use `any`/`notAny` for dynamic, runtime-computed arrays: they bind a single parameter (`col = ANY($1)`) regardless of array length. This keeps the prepared-statement cache hot, avoids PG's ~65k parameter cap, and has correct NULL semantics for `!= ALL`.
Empty array behavior:
- `any: []` → `FALSE` (no matches)
- `notAny: []` → `TRUE` (all rows match)
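The placeholder-count difference is easiest to see in code. The two helpers below are illustrative sketches (not pgbo internals) that compile each form, including the documented empty-array behavior:

```typescript
// Sketch: `in` expands to one placeholder per element.
export function compileIn(column: string, values: unknown[], startParam = 1): { text: string; values: unknown[] } {
  if (values.length === 0) return { text: 'FALSE', values: [] }
  const placeholders = values.map((_, i) => `$${startParam + i}`).join(', ')
  return { text: `${column} IN (${placeholders})`, values }
}

// Sketch: `any` binds the whole array as a single parameter.
export function compileAny(column: string, values: unknown[], startParam = 1): { text: string; values: unknown[] } {
  if (values.length === 0) return { text: 'FALSE', values: [] }
  return { text: `${column} = ANY($${startParam})`, values: [values] }
}
```

However long the runtime array grows, compileAny produces the same SQL text, which is why the prepared-statement cache stays hot.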
Logical Operators: OR / AND / NOT
// Tenant scoping with global records
.where({
OR: [
{ tenantId: { isNull: true } },
{ tenantId: currentTenant },
],
})
// WHERE (tenant_id IS NULL OR tenant_id = $1)
// Combined with regular conditions (AND between top-level keys)
.where({
status: 'ACTIVE',
OR: [
{ tenantId: { isNull: true } },
{ tenantId: currentTenant },
],
})
// WHERE status = $1 AND (tenant_id IS NULL OR tenant_id = $2)
// Nested logical operators
.where({
AND: [
{ status: 'ACTIVE' },
{
OR: [
{ slug: { ilike: '%test%' } },
{ name: { ilike: '%test%' } },
],
},
],
})
// NOT
.where({ NOT: { status: 'ARCHIVED' } })
// WHERE NOT (status = $1)

INSERT
// Single row
await db.into(warehouseView)
.values({ slug: 'main', name: 'Main Warehouse' })
.execute()
// Batch insert
await db.into(warehouseView)
.values([
{ slug: 'main', name: 'Main' },
{ slug: 'returns', name: 'Returns' },
])
.execute()
// With RETURNING
const [created] = await db.into(warehouseView)
.values({ slug: 'main', name: 'Main' })
.returning('*')
  .execute()

Upsert (ON CONFLICT)
// ON CONFLICT DO NOTHING
await db.into(userMenuGroupView)
.values({ userId, menuGroupId })
.onConflict(['userId', 'menuGroupId']).doNothing()
.execute()
// ON CONFLICT DO UPDATE with literals
await db.into(countryView)
.values({ code: 'DE', alpha3: 'DEU', numericCode: '276' })
.onConflict(['code']).doUpdate({
alpha3: 'DEU',
numericCode: '276',
})
.execute()
// Use incoming values from EXCLUDED row
await db.into(translationView)
.values({ parentId, locale, name })
.onConflict(['parentId', 'locale']).doUpdate({
name: { excluded: true }, // → name = EXCLUDED.name
})
.execute()
// Increment existing value (e.g. stock quantity)
const [updated] = await tx.into(stockLevelView)
.values({ tenantId, wku, warehouseSlug, quantity: baseQty })
.onConflict(['tenantId', 'wku', 'warehouseSlug']).doUpdate({
quantity: { increment: baseQty }, // → quantity = stock_level.quantity + $n
})
.returning('*')
.execute()
// Decrement
.onConflict(['id']).doUpdate({
balance: { decrement: amount }, // → balance = table.balance - $n
})

Assignment types for .doUpdate():
| Form | SQL |
|---|---|
| `col: value` | `col = $n` (plain literal) |
| `col: { excluded: true }` | `col = EXCLUDED.col` |
| `col: { increment: N }` | `col = table.col + $n` |
| `col: { decrement: N }` | `col = table.col - $n` |
Alternative conflict target: .onConflict({ constraint: 'my_unique' }) to target a named unique constraint.
The assignments are typed — col must be a valid key of the view's row type, and value must match the column's type.
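The table above can be sketched as a single dispatch function. The name compileAssignment and the return shape are hypothetical, chosen only to make the four forms concrete:

```typescript
type Assignment =
  | { excluded: true }
  | { increment: number }
  | { decrement: number }
  | unknown // plain literal

// Sketch: map one .doUpdate() assignment to its SET fragment.
export function compileAssignment(
  table: string,
  column: string,
  assignment: Assignment,
  paramIndex: number,
): { text: string; value?: unknown } {
  if (assignment !== null && typeof assignment === 'object') {
    const a = assignment as Record<string, unknown>
    if (a.excluded === true) return { text: `${column} = EXCLUDED.${column}` }
    if ('increment' in a) return { text: `${column} = ${table}.${column} + $${paramIndex}`, value: a.increment }
    if ('decrement' in a) return { text: `${column} = ${table}.${column} - $${paramIndex}`, value: a.decrement }
  }
  // Plain literal: bind the value directly.
  return { text: `${column} = $${paramIndex}`, value: assignment }
}
```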
UPDATE
await db.update(warehouseView)
.set({ name: 'Updated Name' })
.where({ slug: 'main' })
.execute()
// With RETURNING
const [updated] = await db.update(warehouseView)
.set({ name: 'New Name' })
.where({ slug: 'main' })
.returning('*')
  .execute()

Only the keys provided to .set() are included in the SET clause.
DELETE
await db.deleteFrom(warehouseView)
.where({ slug: 'old' })
.execute()
// With RETURNING
const [deleted] = await db.deleteFrom(warehouseView)
.where({ slug: 'old' })
.returning('*')
.execute()
// Stale cleanup using notAny
await db.deleteFrom(areaView)
.where({ slug: { notAny: keptSlugs } })
  .execute()

Safety guard: calling .execute() without .where() throws an error. To delete all rows, explicitly call .all():
await db.deleteFrom(warehouseView).all().execute()

Request-scoped context (db.withContext)
Postgres has current_setting('app.key', true) for session-level config values. pgbo lets you bind those from a request ctx and use them directly in views — no code-level filtering needed.
const db = createDatabase({
connectionString,
sessionParams: {
'app.locale': (ctx) => ctx.locale,
'app.tenant_id': (ctx) => ctx.tenantId ?? '',
'app.user_id': (ctx) => ctx.userId ?? '',
},
})
// Any view in the DB can now filter by these:
// CREATE VIEW area_localized AS
// SELECT a.id, a.slug, t.name
// FROM area a
// LEFT JOIN area_translation t
// ON t.area_id = a.id
// AND t.locale = current_setting('app.locale', true);
// In the request handler:
const rows = await db.withContext(req.user, async tx => {
return tx.from(areaLocalizedView).execute()
})
// → every query inside this scope sees app.locale / app.tenant_id / app.user_id
// exactly as resolved from req.user

withContext opens a dedicated connection, wraps everything in a transaction, emits SET LOCAL for each resolved param, runs fn, commits (or rolls back on error), and releases the connection. Zero per-query overhead — one transaction for the whole request.
The scoped client — tx inside the callback is a TransactionClient with the same API as db (from, into, update, deleteFrom, transaction, savepoint, query, raw).
Safety — a resolver that returns undefined or null skips its SET LOCAL. Parameter names are validated against /^[A-Za-z][A-Za-z0-9_.]*$/ to prevent SQL injection via config, and values containing single quotes are properly escaped.
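The validation and escaping just described can be sketched as one helper. This is a hypothetical stand-in for pgbo's internal logic, assuming standard SQL quote-doubling:

```typescript
const PARAM_NAME = /^[A-Za-z][A-Za-z0-9_.]*$/

// Sketch: build a SET LOCAL statement for one resolved session param.
export function setLocalStatement(name: string, value: string): string {
  if (!PARAM_NAME.test(name)) throw new Error(`invalid session param name: ${name}`)
  // Double any single quotes — standard SQL string-literal escaping.
  const escaped = value.replace(/'/g, "''")
  return `SET LOCAL ${name} = '${escaped}'`
}
```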
Transactions
const result = await db.transaction(async (tx) => {
await tx.into(warehouseView).values({ slug: 'new', name: 'New' }).execute()
await tx.update(warehouseView).set({ name: 'Renamed' }).where({ slug: 'new' }).execute()
const [row] = await tx.from(warehouseView).where({ slug: 'new' }).execute()
return row
})
// result is the returned row

tx has the same API as db (from, into, update, deleteFrom, transaction, plus savepoint). All operations in the callback run on a single connection.
Transactions auto-rollback on error:
try {
await db.transaction(async (tx) => {
await tx.into(warehouseView).values({ slug: 'a', name: 'A' }).execute()
throw new Error('abort') // rolls back automatically
})
} catch (e) {
// row 'a' was NOT inserted
}

Savepoints
await db.transaction(async (tx) => {
await tx.into(warehouseView).values({ slug: 'a', name: 'A' }).execute()
try {
await tx.savepoint('sp1', async (sp) => {
await sp.into(warehouseView).values({ slug: 'b', name: 'B' }).execute()
throw new Error('rollback savepoint only')
})
} catch {
// 'b' rolled back, 'a' still pending
}
// Outer transaction commits with just 'a'
})

Raw SQL
For queries that don't fit the builder pattern:
const stats = await db.raw<{ status: string; total: number }>`
SELECT status, COUNT(*) AS total
FROM warehouse
WHERE tenant_id = ${tenantId}
GROUP BY status
`

Values are automatically parameterized ($1, $2, etc.) — no SQL injection risk.
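The mechanics of a tagged template producing { text, values } can be shown in a few lines. This simplified sketch is a stand-in for db.raw, not its actual implementation:

```typescript
// Sketch: a tagged template interleaves the literal chunks with
// numbered placeholders and collects the interpolated values.
export function sql(strings: TemplateStringsArray, ...values: unknown[]): { text: string; values: unknown[] } {
  let text = strings[0]
  values.forEach((_, i) => {
    text += `$${i + 1}` + strings[i + 1]
  })
  return { text, values }
}
```

Because interpolated values only ever become bound parameters, they never enter the SQL text itself.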
SQL Inspection
Use .toQuery() on any builder to get the generated SQL without executing:
const { text, values } = db.from(warehouseView)
.where({ status: 'ACTIVE' })
.orderBy('name', 'asc')
.toQuery()
// text: 'SELECT * FROM warehouse_view WHERE status = $1 ORDER BY name ASC'
// values: ['ACTIVE']

Type Inference
Query results are typed via InferViewRow<typeof view>:
- Simple view (`view('x').from(table)`) — row type matches the source table's columns
- View with `.columns({...})` — row type is computed from the columns map:
  - Unqualified `col('name')` resolves against the view's source table
  - Qualified `col('slug', appTable)` infers type from the joined table's column
  - `translated('name')` → `string | null`
- Custom override via `.as<T>()` — escape hatch for complex cases
const rows = await db.from(tileDetailView).execute()
rows[0].appSlug // ✅ string — no cast needed
rows[0].typo // ❌ compile error — property doesn't exist