Direct PostgreSQL access with node-postgres (pg) -- connection pools, parameterized queries, transactions, streaming, LISTEN/NOTIFY, error handling
```shell
T=$(mktemp -d) && git clone --depth=1 https://github.com/NeverSight/learn-skills.dev "$T" && mkdir -p ~/.claude/skills && cp -r "$T/data/skills-md/agents-inc/skills/api-database-postgresql" ~/.claude/skills/neversight-learn-skills-dev-api-database-postgresql && rm -rf "$T"
```
PostgreSQL Patterns (node-postgres)
Quick Guide: Use the `pg` package (v8.x) for direct PostgreSQL access. Always use `Pool` -- never create individual `Client` instances in application code. Use parameterized queries (`$1`, `$2`) for ALL user input -- never interpolate strings into SQL. For transactions, check out a dedicated client with `pool.connect()` and use `BEGIN`/`COMMIT`/`ROLLBACK` in a `try`/`catch` that always calls `client.release()` in `finally`. Handle the pool `error` event to prevent process crashes from idle client errors. Use `pg-query-stream` for large result sets to avoid loading everything into memory.
<critical_requirements>
CRITICAL: Before Using This Skill
- All code must follow project conventions in CLAUDE.md (kebab-case, named exports, import ordering, `import type`, named constants)
- You MUST use parameterized queries (`$1`, `$2`, ...) for ALL values -- NEVER concatenate or interpolate user input into SQL strings
- You MUST use `Pool` for all database access -- NEVER create standalone `Client` instances in application code
- You MUST release clients back to the pool in a `finally` block after `pool.connect()` -- leaked clients exhaust the pool and hang the application
- You MUST handle the `error` event on Pool instances -- unhandled idle client errors crash the Node.js process
</critical_requirements>
Examples
- Core Patterns -- Pool setup, parameterized queries, type-safe results, error handling
- Transactions -- BEGIN/COMMIT/ROLLBACK, savepoints, retry logic, advisory locks
- Streaming -- Cursors, pg-query-stream, LISTEN/NOTIFY for real-time
- Advanced -- SSL/TLS, prepared statements, migrations, testing patterns
Additional resources:
- reference.md -- Pool options, error codes, QueryResult properties, production checklist
Auto-detection: PostgreSQL, pg, node-postgres, Pool, Client, pool.query, pool.connect, client.query, $1, parameterized query, BEGIN, COMMIT, ROLLBACK, LISTEN, NOTIFY, pg_notify, pg-query-stream, pg-cursor, Cursor, QueryResult, QueryResultRow, connectionString, PGHOST, PGDATABASE, unique_violation, 23505, deadlock, 40P01, advisory lock
When to use:
- Direct SQL queries against PostgreSQL (not behind an ORM)
- Connection pool management for Node.js/PostgreSQL applications
- Transactions spanning multiple queries that must be atomic
- Streaming large result sets without loading everything into memory
- Real-time change notifications via LISTEN/NOTIFY
- Integration testing with transaction rollback isolation
Key patterns covered:
- Pool configuration and lifecycle (creation, error handling, graceful shutdown)
- Parameterized queries with `$1`-style placeholders (SQL injection prevention)
- Type-safe query results using TypeScript generics
- Transaction management with dedicated clients
- Streaming with pg-cursor and pg-query-stream
- LISTEN/NOTIFY for real-time PostgreSQL event handling
- PostgreSQL error code handling (constraint violations, deadlocks, serialization failures)
- SSL/TLS connection configuration
- Testing with transaction rollback isolation
When NOT to use:
- You need an ORM or query builder -- use your ORM/query builder skill instead
- You need in-memory caching -- use a caching solution
- You need document storage without relational constraints -- use a document database
- Simple key-value lookups at sub-millisecond latency -- use an in-memory data store
<philosophy>
Philosophy
pg (node-postgres) is a low-level PostgreSQL client that gives you full control over SQL, connections, and transactions. The core principle: write SQL directly, let PostgreSQL do the heavy lifting.
Core principles:
- Pool, never Client -- Application code should always use `Pool`. The pool manages connections, handles reconnection, and prevents connection exhaustion. Use `pool.query()` for single queries, `pool.connect()` when you need a dedicated client (transactions).
- Parameterized everything -- Never build SQL by string concatenation. Use `$1`, `$2` placeholders. This prevents SQL injection AND enables PostgreSQL query plan caching.
- Release in finally -- Any client obtained via `pool.connect()` must be released in a `finally` block. A leaked client sits checked out forever, and once `max` clients leak, the pool deadlocks.
- Fail loudly -- Handle the pool's `error` event. Handle query errors with specific PostgreSQL error codes. Never swallow errors silently.
- Stream large results -- Don't `SELECT *` a million rows into memory. Use `pg-cursor` or `pg-query-stream` for large result sets.
<patterns>
Core Patterns
Pattern 1: Pool Setup
Create a single pool per database at application startup. See examples/core.md for full configuration examples.
```typescript
// ✅ Good Example - Pool with error handling
import pg from "pg";

const POOL_MAX_CLIENTS = 20;
const IDLE_TIMEOUT_MS = 30_000;
const CONNECTION_TIMEOUT_MS = 5_000;

function createPool(): pg.Pool {
  const pool = new pg.Pool({
    connectionString: process.env.DATABASE_URL,
    max: POOL_MAX_CLIENTS,
    idleTimeoutMillis: IDLE_TIMEOUT_MS,
    connectionTimeoutMillis: CONNECTION_TIMEOUT_MS,
  });

  pool.on("error", (err) => {
    console.error("Unexpected idle client error:", err.message);
  });

  return pool;
}

export { createPool };
```
Why good: Named constants for pool config, environment variable for connection string, error handler prevents process crash from idle client errors
```typescript
// ❌ Bad Example - No pool, standalone client
import pg from "pg";

const client = new pg.Client("postgres://localhost/mydb");
await client.connect();
// One connection for entire app -- no pooling, no reconnection,
// no concurrency. If client disconnects, app crashes.
```
Why bad: Standalone Client has no connection pooling, no automatic reconnection, no concurrency -- every query blocks on a single connection
Pattern 2: Parameterized Queries
Always use `$1`-style placeholders. See examples/core.md for typed query helpers.
```typescript
// ✅ Good Example - Parameterized query with typed result
interface UserRow {
  id: number;
  name: string;
  email: string;
}

const result = await pool.query<UserRow>(
  "SELECT id, name, email FROM users WHERE id = $1",
  [userId],
);
const user = result.rows[0]; // UserRow | undefined
```
Why good: `$1` placeholder prevents SQL injection, generic `<UserRow>` types the rows array, result is properly typed
```typescript
// ❌ Bad Example - String interpolation (SQL INJECTION!)
const result = await pool.query(
  `SELECT * FROM users WHERE name = '${userName}'`,
);
// userName = "'; DROP TABLE users; --" -> catastrophic
```
Why bad: String interpolation allows SQL injection, no type safety on result rows, `SELECT *` returns untyped columns
Pattern 3: Transactions
Use `pool.connect()` to get a dedicated client for the transaction. See examples/transactions.md for savepoints, retries, and advisory locks.
```typescript
// ✅ Good Example - Transaction with proper cleanup
async function transferFunds(
  pool: pg.Pool,
  fromId: number,
  toId: number,
  amount: number,
): Promise<void> {
  const client = await pool.connect();
  try {
    await client.query("BEGIN");
    await client.query(
      "UPDATE accounts SET balance = balance - $1 WHERE id = $2",
      [amount, fromId],
    );
    await client.query(
      "UPDATE accounts SET balance = balance + $1 WHERE id = $2",
      [amount, toId],
    );
    await client.query("COMMIT");
  } catch (err) {
    await client.query("ROLLBACK");
    throw err;
  } finally {
    client.release();
  }
}
```
Why good: Dedicated client via `pool.connect()`, `ROLLBACK` on error, `client.release()` in `finally` guarantees the client returns to the pool
```typescript
// ❌ Bad Example - Transaction with pool.query()
await pool.query("BEGIN");
await pool.query("UPDATE accounts SET balance = balance - 100 WHERE id = 1");
await pool.query("UPDATE accounts SET balance = balance + 100 WHERE id = 2");
await pool.query("COMMIT");
// Each pool.query() may use a DIFFERENT client -- the BEGIN/COMMIT
// execute on different connections, so there is no transaction at all
```
Why bad: `pool.query()` checks out a random client each time -- BEGIN, UPDATEs, and COMMIT may run on different connections, so there is no actual transaction
Pattern 4: Error Handling with PostgreSQL Error Codes
PostgreSQL errors include a `code` field with the SQLSTATE error code. See reference.md for the full error code table.
```typescript
// ✅ Good Example - Handling specific PostgreSQL errors
const PG_UNIQUE_VIOLATION = "23505";
const PG_FOREIGN_KEY_VIOLATION = "23503";
const PG_DEADLOCK_DETECTED = "40P01";
const PG_SERIALIZATION_FAILURE = "40001";

interface PgError extends Error {
  code: string;
  constraint?: string;
  detail?: string;
  table?: string;
  column?: string;
}

function isPgError(err: unknown): err is PgError {
  return err instanceof Error && "code" in err;
}

try {
  await pool.query("INSERT INTO users (email) VALUES ($1)", [email]);
} catch (err) {
  if (isPgError(err) && err.code === PG_UNIQUE_VIOLATION) {
    // ConflictError: application-specific error class
    throw new ConflictError(`Email already exists: ${err.constraint}`);
  }
  if (isPgError(err) && err.code === PG_DEADLOCK_DETECTED) {
    // Retry the operation
  }
  throw err;
}
```
Why good: Named constants for error codes (no magic strings), type guard for safe property access, specific handling per error type, re-throws unknown errors
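The deadlock branch above only says "Retry the operation". A minimal sketch of what that retry could look like -- the helper name `withRetry`, the attempt count, and the backoff constants are illustrative, not part of this skill's API:

```typescript
// Sketch: retry wrapper for transient PostgreSQL errors
// (deadlock_detected 40P01, serialization_failure 40001).
const RETRYABLE_CODES = new Set(["40P01", "40001"]);
const MAX_ATTEMPTS = 3;
const BASE_DELAY_MS = 50;

function sleep(ms: number): Promise<void> {
  return new Promise((resolve) => setTimeout(resolve, ms));
}

function isRetryable(err: unknown): boolean {
  return (
    err instanceof Error &&
    "code" in err &&
    RETRYABLE_CODES.has((err as { code: string }).code)
  );
}

async function withRetry<T>(operation: () => Promise<T>): Promise<T> {
  for (let attempt = 1; attempt <= MAX_ATTEMPTS; attempt++) {
    try {
      return await operation();
    } catch (err) {
      if (!isRetryable(err) || attempt === MAX_ATTEMPTS) throw err;
      // Exponential backoff: 50ms, 100ms, 200ms, ...
      await sleep(BASE_DELAY_MS * 2 ** (attempt - 1));
    }
  }
  throw new Error("unreachable"); // satisfies the compiler
}

export { withRetry };
```

Usage would be along the lines of `await withRetry(() => transferFunds(pool, fromId, toId, amount))` -- each attempt must re-run the whole transaction, since the failed one was rolled back.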
Pattern 5: Streaming Large Result Sets
Use `pg-cursor` or `pg-query-stream` for queries returning many rows. See examples/streaming.md for full streaming patterns.
```typescript
// ✅ Good Example - Cursor for batch processing
import Cursor from "pg-cursor";

const BATCH_SIZE = 100;

async function processAllOrders(pool: pg.Pool): Promise<void> {
  const client = await pool.connect();
  try {
    const cursor = client.query(
      new Cursor("SELECT * FROM orders WHERE status = $1", ["pending"]),
    );
    let rows = await cursor.read(BATCH_SIZE);
    while (rows.length > 0) {
      await processBatch(rows);
      rows = await cursor.read(BATCH_SIZE);
    }
    await cursor.close();
  } finally {
    client.release();
  }
}
```
Why good: Processes rows in fixed-size batches without loading the entire result set into memory, proper client release in `finally`
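The stream returned by `pg-query-stream` is a Node.js Readable, which is async-iterable, so batch consumption can be factored out. A sketch of such a consumer -- `consumeInBatches` and `STREAM_BATCH_SIZE` are hypothetical names, and the row source is injected so the helper itself has no pg dependency:

```typescript
// Sketch: consume an async-iterable row source (such as the stream
// returned by client.query(new QueryStream(...))) in fixed-size batches.
const STREAM_BATCH_SIZE = 500;

async function consumeInBatches<Row>(
  rows: AsyncIterable<Row>,
  handleBatch: (batch: Row[]) => Promise<void>,
): Promise<number> {
  let batch: Row[] = [];
  let total = 0;
  for await (const row of rows) {
    batch.push(row);
    total++;
    if (batch.length >= STREAM_BATCH_SIZE) {
      await handleBatch(batch);
      batch = [];
    }
  }
  if (batch.length > 0) await handleBatch(batch); // flush the remainder
  return total;
}

export { consumeInBatches };
```

With pg-query-stream this would be called as `await consumeInBatches(client.query(new QueryStream("SELECT ...", [])), processBatch)`, still releasing the client in `finally`.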
Pattern 6: LISTEN/NOTIFY
PostgreSQL can push real-time notifications to connected clients. See examples/streaming.md for full examples.
```typescript
// ✅ Good Example - LISTEN/NOTIFY with dedicated client
const CHANNEL = "order_updates";

async function listenForUpdates(pool: pg.Pool): Promise<pg.PoolClient> {
  const client = await pool.connect();

  client.on("notification", (msg) => {
    if (msg.channel === CHANNEL && msg.payload) {
      const data = JSON.parse(msg.payload);
      handleOrderUpdate(data);
    }
  });

  await client.query(`LISTEN ${CHANNEL}`);
  return client; // Caller is responsible for release on shutdown
}

// Publishing from another connection
await pool.query("SELECT pg_notify($1, $2)", [CHANNEL, JSON.stringify(data)]);
```
Why good: Dedicated client stays checked out for the lifetime of the listener, `pg_notify()` with parameterized channel/payload prevents injection, JSON payload for structured data
When to use: Real-time notifications where sub-second latency matters and the volume is low-to-moderate (hundreds per second). For high-throughput streaming, use a dedicated message broker.
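"Testing with transaction rollback isolation" is listed among the covered patterns but not shown above. A minimal sketch of the idea, using a structural `Queryable` type so the helper is compatible with `pg.PoolClient` without depending on pg (the name `withRollback` is illustrative):

```typescript
// Sketch: run a test body inside a transaction that is always rolled
// back, so the database is untouched afterwards.
interface Queryable {
  query: (sql: string, values?: unknown[]) => Promise<unknown>;
}

async function withRollback<T>(
  client: Queryable,
  body: (tx: Queryable) => Promise<T>,
): Promise<T> {
  await client.query("BEGIN");
  try {
    return await body(client); // all writes happen inside the transaction
  } finally {
    await client.query("ROLLBACK"); // always undo, even on success
  }
}

export { withRollback };
```

In a test this would be used as: check out a client with `pool.connect()`, run inserts and assertions inside `withRollback(client, ...)`, then `client.release()` in `finally` -- the test's writes never become visible to other tests.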
</patterns><decision_framework>
Decision Framework
pool.query() vs pool.connect()
```
Do I need a dedicated client?
├─ Single query, no transaction? -> pool.query() (auto-releases)
├─ Multiple queries in a transaction? -> pool.connect() + BEGIN/COMMIT/ROLLBACK
├─ LISTEN for notifications? -> pool.connect() (keep client for lifetime of listener)
├─ Cursor/streaming? -> pool.connect() (cursor binds to a connection)
└─ Prepared statements across queries? -> pool.connect() (plan caches per connection)
```
Error Handling Strategy
```
What kind of PostgreSQL error?
├─ 23505 (unique_violation)? -> Map to 409 Conflict, include constraint name
├─ 23503 (foreign_key_violation)? -> Map to 400 Bad Request, entity not found
├─ 23502 (not_null_violation)? -> Map to 400 Bad Request, missing required field
├─ 23514 (check_violation)? -> Map to 400 Bad Request, validation failed
├─ 40P01 (deadlock_detected)? -> Retry with backoff (safe to retry)
├─ 40001 (serialization_failure)? -> Retry with backoff (safe to retry)
├─ 57014 (query_canceled)? -> Timeout, consider increasing statement_timeout
├─ 08xxx (connection_exception)? -> Pool handles reconnection, log and retry
└─ Other? -> Log full error, return 500
```
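The non-retryable branches of this tree can be captured in a small lookup function. A sketch -- the function name and exact status choices are illustrative, and the retryable codes (40P01, 40001) are assumed to be handled by retry logic before this mapping runs:

```typescript
// Sketch: map SQLSTATE codes to HTTP status codes per the tree above.
const PG_CODE_TO_HTTP: Record<string, number> = {
  "23505": 409, // unique_violation -> Conflict
  "23503": 400, // foreign_key_violation -> Bad Request
  "23502": 400, // not_null_violation -> Bad Request
  "23514": 400, // check_violation -> Bad Request
};

function httpStatusForPgError(code: string): number {
  if (code in PG_CODE_TO_HTTP) return PG_CODE_TO_HTTP[code];
  if (code.startsWith("08")) return 503; // connection_exception class
  return 500; // unknown -> log the full error, internal error
}

export { httpStatusForPgError };
```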
Streaming Decision
```
How many rows will the query return?
├─ < 1,000 rows? -> pool.query() is fine (result fits in memory)
├─ 1,000 - 100,000 rows? -> pg-cursor with batch processing
├─ 100,000+ rows? -> pg-query-stream piped to a writable stream
└─ Need to export to file? -> pg-query-stream piped to file write stream
```
</decision_framework>
<red_flags>
RED FLAGS
High Priority Issues:
- Using string interpolation/concatenation for SQL values -- this is SQL injection, the most dangerous vulnerability in database code
- Using `pool.query()` for transactions -- each call may use a different connection, so BEGIN/COMMIT have no effect
- Not releasing clients after `pool.connect()` -- leaked clients exhaust the pool; once `max` clients leak, the app deadlocks on `pool.connect()`
- Missing `pool.on("error")` handler -- idle client errors are emitted on the pool; unhandled, they crash the Node.js process
- Using standalone `Client` in application code -- no pooling, no reconnection, no concurrency
Medium Priority Issues:
- `SELECT *` in production queries -- returns unnecessary columns, breaks when schema changes, prevents index-only scans
- Loading millions of rows with `pool.query()` instead of streaming -- causes memory exhaustion and GC pressure
- Hardcoded connection strings -- prevents environment-specific configuration, risks credential leaks in version control
- Not handling specific PostgreSQL error codes -- generic error handling loses valuable information (which constraint, which column)
- Using `LISTEN` with `pool.query()` -- notifications bind to a specific connection; `pool.query()` releases the connection immediately
Common Mistakes:
- Forgetting that `result.rows[0]` can be `undefined` when no rows match -- always check before accessing
- Relying on `result.rowCount` for `SELECT` emptiness checks -- use `result.rows.length` instead; `rowCount` is `null` for some commands (e.g., `LOCK`) and `rows.length` is universally reliable
- Using `$1` inside string literals in SQL -- `'$1'` is a literal string, not a parameter; use `$1` outside quotes
- Forgetting that PostgreSQL arrays in parameters are automatically converted -- `[1, 2, 3]` becomes `{1,2,3}`, which works for `= ANY($1)` but not for `IN ($1)` (use `= ANY($1::int[])` instead of `IN`)
- Calling `client.release(true)` routinely -- passing `true` destroys the client instead of returning it to the pool; only use it after unrecoverable errors
Gotchas & Edge Cases:
- Pool `error` event vs query errors: Pool `error` fires for idle client backend disconnections (e.g., server restart). Query errors are thrown/rejected from the query call itself. You need both handlers.
- `connectionTimeoutMillis: 0` (the default) means no timeout -- connections wait forever if the pool is exhausted. Always set a timeout in production.
- `idleTimeoutMillis` only affects clients that have been returned to the pool -- a checked-out client that is never released will never be cleaned up.
- PostgreSQL `numeric`/`decimal` types are returned as strings by default (to avoid JavaScript floating-point precision loss). Parse them explicitly if you need numbers.
- `LISTEN` is transactional -- it takes effect at `COMMIT`. If you `BEGIN`, `LISTEN channel`, `ROLLBACK`, the subscription is discarded and the listener is never registered.
- `pool.end()` waits for all checked-out clients to be released. If a client is leaked (never released), `pool.end()` hangs forever.
- SSL connections: if the connection string contains any SSL parameters (`sslmode`, `sslcert`, `sslkey`, `sslrootcert`), the entire `ssl` config object is replaced -- use one or the other, not both.
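Because `pool.end()` can hang forever on a leaked client, graceful shutdown is worth guarding with a deadline. A sketch, assuming a structural `Endable` type compatible with `pg.Pool` -- the helper name and timeout value are illustrative:

```typescript
// Sketch: shut down without hanging forever on a leaked client by
// racing pool.end() against a deadline.
const SHUTDOWN_TIMEOUT_MS = 5_000;

interface Endable {
  end: () => Promise<void>; // structurally compatible with pg.Pool
}

async function shutdownPool(pool: Endable): Promise<"closed" | "timed-out"> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const timedOut = new Promise<"timed-out">((resolve) => {
    timer = setTimeout(() => resolve("timed-out"), SHUTDOWN_TIMEOUT_MS);
  });
  try {
    return await Promise.race([
      pool.end().then(() => "closed" as const),
      timedOut,
    ]);
  } finally {
    if (timer !== undefined) clearTimeout(timer); // don't leave the timer running
  }
}

export { shutdownPool };
```

On SIGTERM, a caller could log a `"timed-out"` result (it indicates a leaked client) and exit with a non-zero code rather than waiting indefinitely.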
</red_flags>
<critical_reminders>
CRITICAL REMINDERS
- All code must follow project conventions in CLAUDE.md (kebab-case, named exports, import ordering, `import type`, named constants)
- You MUST use parameterized queries (`$1`, `$2`, ...) for ALL values -- NEVER concatenate or interpolate user input into SQL strings
- You MUST use `Pool` for all database access -- NEVER create standalone `Client` instances in application code
- You MUST release clients back to the pool in a `finally` block after `pool.connect()` -- leaked clients exhaust the pool and hang the application
- You MUST handle the `error` event on Pool instances -- unhandled idle client errors crash the Node.js process
Failure to follow these rules will cause SQL injection vulnerabilities, connection pool exhaustion, application hangs, and process crashes.
</critical_reminders>