Awesome-omni-skills postgresql
PostgreSQL Table Design workflow skill. Use this skill when the user needs to design a PostgreSQL-specific schema. Covers best practices, data types, indexing, constraints, performance patterns, and advanced features. The operator should preserve the upstream workflow, copied support files, and provenance before merging or handing off.
git clone https://github.com/diegosouzapw/awesome-omni-skills
T=$(mktemp -d) && git clone --depth=1 https://github.com/diegosouzapw/awesome-omni-skills "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/postgresql" ~/.claude/skills/diegosouzapw-awesome-omni-skills-postgresql && rm -rf "$T"
skills/postgresql/SKILL.md: PostgreSQL Table Design
Overview
This public intake copy packages plugins/antigravity-awesome-skills-claude/skills/postgresql from https://github.com/sickn33/antigravity-awesome-skills into the native Omni Skills editorial shape without hiding its origin.
Use it when the operator needs the upstream workflow, support files, and repository context to stay intact while the public validator and private enhancer continue their normal downstream flow.
This intake keeps the copied upstream files intact and uses metadata.json plus ORIGIN.md as the provenance anchor for review.
PostgreSQL Table Design
Imported source sections that did not map cleanly to the public headings are still preserved below or in the support files. Notable imported sections: Safety, PostgreSQL “Gotchas”, Data Types, Table Types, Row-Level Security, Constraints.
When to Use This Skill
Use this section as the trigger filter. It should make the activation boundary explicit before the operator loads files, runs commands, or opens a pull request.
Use this skill when:
- Designing a schema for PostgreSQL
- Selecting data types and constraints
- Planning indexes, partitions, or RLS policies
- Reviewing tables for scale and maintainability
Do not use this skill when:
- You are targeting a non-PostgreSQL database
- You only need query tuning without schema changes
Operating Table
| Situation | Start here | Why it matters |
|---|---|---|
| First-time use | | Confirms repository, branch, commit, and imported path before touching the copied workflow |
| Provenance review | | Gives reviewers a plain-language audit trail for the imported source |
| Workflow execution | | Starts with the smallest copied file that materially changes execution |
| Supporting context | | Adds the next most relevant copied source file without loading the entire package |
| Handoff decision | | Helps the operator switch to a stronger native skill when the task drifts |
Workflow
This workflow is intentionally editorial and operational at the same time. It keeps the imported source useful to the operator while still satisfying the public intake standards that feed the downstream enhancer flow.
- Confirm the user goal, the scope of the imported workflow, and whether this skill is still the right router for the task.
- Read the overview and provenance files before loading any copied upstream support files.
- Capture entities, access patterns, and scale targets (rows, QPS, retention).
- Choose data types and constraints that enforce invariants.
- Add indexes for real query paths and validate with EXPLAIN.
- Plan partitioning or RLS where required by scale or access control.
- Review migration impact and apply changes safely.
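The EXPLAIN validation step can be sketched as follows; the table and index names here are illustrative, not part of the upstream workflow.

```sql
-- Hypothetical check: confirm an index actually serves the query path.
CREATE INDEX IF NOT EXISTS orders_user_id_idx ON orders (user_id);

EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM orders WHERE user_id = 42;
```

In the plan output, look for an Index Scan on orders_user_id_idx rather than a Seq Scan before treating the index as justified.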
Imported Workflow Notes
Imported: Instructions
- Capture entities, access patterns, and scale targets (rows, QPS, retention).
- Choose data types and constraints that enforce invariants.
- Add indexes for real query paths and validate with `EXPLAIN`.
- Plan partitioning or RLS where required by scale or access control.
- Review migration impact and apply changes safely.
Imported: Safety
- Avoid destructive DDL on production without backups and a rollback plan.
- Use migrations and staging validation before applying schema changes.
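One way to honor both safety rules is PostgreSQL's transactional DDL; this is a minimal sketch, with the table name assumed.

```sql
-- Dry-run a schema change in staging: everything rolls back atomically.
BEGIN;
ALTER TABLE orders ADD COLUMN note TEXT;
-- run smoke queries here, inspect the result ...
ROLLBACK;  -- replace with COMMIT once the migration is validated
```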
Examples
Example 1: Ask for the upstream workflow directly
Use @postgresql to handle <task>. Start from the copied upstream workflow, load only the files that change the outcome, and keep provenance visible in the answer.
Explanation: This is the safest starting point when the operator needs the imported workflow, but not the entire repository.
Example 2: Ask for a provenance-grounded review
Review @postgresql against metadata.json and ORIGIN.md, then explain which copied upstream files you would load first and why.
Explanation: Use this before review or troubleshooting when you need a precise, auditable explanation of origin and file selection.
Example 3: Narrow the copied support files before execution
Use @postgresql for <task>. Load only the copied references, examples, or scripts that change the outcome, and name the files explicitly before proceeding.
Explanation: This keeps the skill aligned with progressive disclosure instead of loading the whole copied package by default.
Example 4: Build a reviewer packet
Review @postgresql using the copied upstream files plus provenance, then summarize any gaps before merge.
Explanation: This is useful when the PR is waiting for human review and you want a repeatable audit packet.
Imported Usage Notes
Imported: Examples
Users
```sql
CREATE TABLE users (
    user_id    BIGINT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    email      TEXT NOT NULL UNIQUE,
    name       TEXT NOT NULL,
    created_at TIMESTAMPTZ NOT NULL DEFAULT now()
);
CREATE UNIQUE INDEX ON users (LOWER(email));
CREATE INDEX ON users (created_at);
```
Orders
```sql
CREATE TABLE orders (
    order_id   BIGINT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    user_id    BIGINT NOT NULL REFERENCES users(user_id),
    status     TEXT NOT NULL DEFAULT 'PENDING'
               CHECK (status IN ('PENDING','PAID','CANCELED')),
    total      NUMERIC(10,2) NOT NULL CHECK (total > 0),
    created_at TIMESTAMPTZ NOT NULL DEFAULT now()
);
CREATE INDEX ON orders (user_id);
CREATE INDEX ON orders (created_at);
```
JSONB
```sql
CREATE TABLE profiles (
    user_id BIGINT PRIMARY KEY REFERENCES users(user_id),
    attrs   JSONB NOT NULL DEFAULT '{}',
    theme   TEXT GENERATED ALWAYS AS (attrs->>'theme') STORED
);
CREATE INDEX profiles_attrs_gin ON profiles USING GIN (attrs);
```
Best Practices
Treat the generated public skill as a reviewable packaging layer around the upstream repository. The goal is to keep provenance explicit and load only the copied source material that materially improves execution.
- Define a PRIMARY KEY for reference tables (users, orders, etc.). Not always needed for time-series/event/log data. When used, prefer BIGINT GENERATED ALWAYS AS IDENTITY; use UUID only when global uniqueness/opacity is needed.
- Normalize first (to 3NF) to eliminate data redundancy and update anomalies; denormalize only for measured, high-ROI reads where join performance is proven problematic. Premature denormalization creates maintenance burden.
- Add NOT NULL everywhere it’s semantically required; use DEFAULTs for common values.
- Create indexes for access paths you actually query: PK/unique (auto), FK columns (manual!), frequent filters/sorts, and join keys.
- Prefer TIMESTAMPTZ for event time; NUMERIC for money; TEXT for strings; BIGINT for integer values, DOUBLE PRECISION for floats (or NUMERIC for exact decimal arithmetic).
- Keep the imported skill grounded in the upstream repository; do not invent steps that the source material cannot support.
- Prefer the smallest useful set of support files so the workflow stays auditable and fast to review.
Imported Operating Notes
Imported: Core Rules
- Define a PRIMARY KEY for reference tables (users, orders, etc.). Not always needed for time-series/event/log data. When used, prefer `BIGINT GENERATED ALWAYS AS IDENTITY`; use `UUID` only when global uniqueness/opacity is needed.
- Normalize first (to 3NF) to eliminate data redundancy and update anomalies; denormalize only for measured, high-ROI reads where join performance is proven problematic. Premature denormalization creates maintenance burden.
- Add NOT NULL everywhere it’s semantically required; use DEFAULTs for common values.
- Create indexes for access paths you actually query: PK/unique (auto), FK columns (manual!), frequent filters/sorts, and join keys.
- Prefer TIMESTAMPTZ for event time; NUMERIC for money; TEXT for strings; BIGINT for integer values, DOUBLE PRECISION for floats (or `NUMERIC` for exact decimal arithmetic).
Troubleshooting
Problem: The operator skipped the imported context and answered too generically
Symptoms: The result ignores the upstream workflow in plugins/antigravity-awesome-skills-claude/skills/postgresql, fails to mention provenance, or does not use any copied source files at all.
Solution: Re-open metadata.json, ORIGIN.md, and the most relevant copied upstream files. Load only the files that materially change the answer, then restate the provenance before continuing.
Problem: The imported workflow feels incomplete during review
Symptoms: Reviewers can see the generated SKILL.md, but they cannot quickly tell which references, examples, or scripts matter for the current task.
Solution: Point at the exact copied references, examples, scripts, or assets that justify the path you took. If the gap is still real, record it in the PR instead of hiding it.
Problem: The task drifted into a different specialization
Symptoms: The imported skill starts in the right place, but the work turns into debugging, architecture, design, security, or release orchestration that a native skill handles better.
Solution: Use the related skills section to hand off deliberately. Keep the imported provenance visible so the next skill inherits the right context instead of starting blind.
Related Skills
- @00-andruia-consultant-v2: Use when the work is better handled by that native specialization after this imported skill establishes context.
- @10-andruia-skill-smith-v2: Use when the work is better handled by that native specialization after this imported skill establishes context.
- @20-andruia-niche-intelligence-v2: Use when the work is better handled by that native specialization after this imported skill establishes context.
- @2d-games: Use when the work is better handled by that native specialization after this imported skill establishes context.
Additional Resources
Use this support matrix and the linked files below as the operator packet for this imported skill. They should reflect real copied source material, not generic scaffolding.
| Resource family | What it gives the reviewer | Example path |
|---|---|---|
| References | copied reference notes, guides, or background material from upstream | |
| Examples | worked examples or reusable prompts copied from upstream | |
| Scripts | upstream helper scripts that change execution or validation | |
| Agents | routing or delegation notes that are genuinely part of the imported package | |
| Assets | supporting assets or schemas copied from the source package | |
Imported Reference Notes
Imported: Indexing
- B-tree: default for equality/range queries (`=`, `<`, `>`, `BETWEEN`, `ORDER BY`)
- Composite: order matters—index used if equality on leftmost prefix (`WHERE a = ? AND b > ?` uses index on `(a,b)`, but `WHERE b = ?` does not). Put most selective/frequently filtered columns first.
- Covering: `CREATE INDEX ON tbl (id) INCLUDE (name, email)` includes non-key columns for index-only scans without visiting table.
- Partial: for hot subsets (`WHERE status = 'active'` → `CREATE INDEX ON tbl (user_id) WHERE status = 'active'`). Any query with `status = 'active'` can use this index.
- Expression: for computed search keys (`CREATE INDEX ON tbl (LOWER(email))`). Expression must match exactly in WHERE clause: `WHERE LOWER(email) = 'user@example.com'`.
- GIN: JSONB containment/existence, arrays (`@>`, `?`), full-text search (`@@`)
- GiST: ranges, geometry, exclusion constraints
- BRIN: very large, naturally ordered data (time-series)—minimal storage overhead. Effective when row order on disk correlates with indexed column (insertion order or after `CLUSTER`).
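As one concrete sketch of the BRIN case, assuming an append-only events table (all names here are illustrative):

```sql
CREATE TABLE events (
    event_id    BIGINT GENERATED ALWAYS AS IDENTITY,
    occurred_at TIMESTAMPTZ NOT NULL DEFAULT now(),
    payload     JSONB NOT NULL DEFAULT '{}'
);
-- Tiny index: stores only min/max per block range, ideal when
-- insertion order correlates with occurred_at.
CREATE INDEX events_occurred_brin ON events USING BRIN (occurred_at);
```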
Imported: PostgreSQL “Gotchas”
- Identifiers: unquoted → lowercased. Avoid quoted/mixed-case names. Convention: use `snake_case` for table/column names.
- Unique + NULLs: UNIQUE allows multiple NULLs. Use `UNIQUE (...) NULLS NOT DISTINCT` (PG15+) to restrict to one NULL.
- FK indexes: PostgreSQL does not auto-index FK columns. Add them.
- No silent coercions: length/precision overflows error out (no truncation). Example: inserting 999 into `NUMERIC(2,0)` fails with error, unlike some databases that silently truncate or round.
- Sequences/identity have gaps (normal; don't "fix"). Rollbacks, crashes, and concurrent transactions create gaps in ID sequences (1, 2, 5, 6...). This is expected behavior—don't try to make IDs consecutive.
- Heap storage: no clustered PK by default (unlike SQL Server/MySQL InnoDB); `CLUSTER` is one-off reorganization, not maintained on subsequent inserts. Row order on disk is insertion order unless explicitly clustered.
- MVCC: updates/deletes leave dead tuples; vacuum handles them—design to avoid hot wide-row churn.
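The unique-plus-NULLs gotcha can be demonstrated with a small sketch (PG15+, hypothetical table):

```sql
-- Without NULLS NOT DISTINCT, unlimited rows with (NULL, 'theme') would pass.
CREATE TABLE settings (
    tenant_id BIGINT,            -- NULL means "global default"
    key       TEXT NOT NULL,
    value     TEXT NOT NULL,
    UNIQUE NULLS NOT DISTINCT (tenant_id, key)
);
```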
Imported: Data Types
- IDs: `BIGINT GENERATED ALWAYS AS IDENTITY` preferred (`GENERATED BY DEFAULT` also fine); `UUID` when merging/federating/used in a distributed system or for opaque IDs. Generate with `uuidv7()` (preferred if using PG18+) or `gen_random_uuid()` (if using an older PG version).
- Integers: prefer `BIGINT` unless storage space is critical; `INTEGER` for smaller ranges; avoid `SMALLINT` unless constrained.
- Floats: prefer `DOUBLE PRECISION` over `REAL` unless storage space is critical. Use `NUMERIC` for exact decimal arithmetic.
- Strings: prefer `TEXT`; if length limits needed, use `CHECK (LENGTH(col) <= n)` instead of `VARCHAR(n)`; avoid `CHAR(n)`. Use `BYTEA` for binary data. Large strings/binary (>2KB default threshold) automatically stored in TOAST with compression. TOAST storage: `PLAIN` (no TOAST), `EXTENDED` (compress + out-of-line), `EXTERNAL` (out-of-line, no compress), `MAIN` (compress, keep in-line if possible). Default `EXTENDED` usually optimal. Control with `ALTER TABLE tbl ALTER COLUMN col SET STORAGE strategy` and `ALTER TABLE tbl SET (toast_tuple_target = 4096)` for threshold. Case-insensitive: for locale/accent handling use non-deterministic collations; for plain ASCII use expression indexes on `LOWER(col)` (preferred unless column needs case-insensitive PK/FK/UNIQUE) or `CITEXT`.
- Money: `NUMERIC(p,s)` (never float).
- Time: `TIMESTAMPTZ` for timestamps; `DATE` for date-only; `INTERVAL` for durations. Avoid `TIMESTAMP` (without timezone). Use `now()` for transaction start time, `clock_timestamp()` for current wall-clock time.
- Booleans: `BOOLEAN` with `NOT NULL` constraint unless tri-state values are required.
- Enums: `CREATE TYPE ... AS ENUM` for small, stable sets (e.g. US states, days of week). For business-logic-driven and evolving values (e.g. order statuses) → use TEXT (or INT) + CHECK or lookup table.
- Arrays: `TEXT[]`, `INTEGER[]`, etc. Use for ordered lists where you query elements. Index with GIN for containment (`@>`, `<@`) and overlap (`&&`) queries. Access: `arr[1]` (1-indexed), `arr[1:3]` (slicing). Good for tags, categories; avoid for relations—use junction tables instead. Literal syntax: `'{val1,val2}'` or `ARRAY[val1,val2]`.
- Range types: `daterange`, `numrange`, `tstzrange` for intervals. Support overlap (`&&`) and containment (`@>`) operators. Index with GiST. Good for scheduling, versioning, numeric ranges. Pick a bounds scheme and use it consistently; prefer `[)` (inclusive/exclusive) by default.
- Network types: `INET` for IP addresses, `CIDR` for network ranges, `MACADDR` for MAC addresses. Support network operators (`<<`, `>>`, `&&`).
- Geometric types: `POINT`, `LINE`, `POLYGON`, `CIRCLE` for 2D spatial data. Index with GiST. Consider PostGIS for advanced spatial features.
- Text search: `TSVECTOR` for full-text search documents, `TSQUERY` for search queries. Index `tsvector` with GIN. Always specify language: `to_tsvector('english', col)` and `to_tsquery('english', 'query')`. Never use single-argument versions. This applies to both index expressions and queries.
- Domain types: `CREATE DOMAIN email AS TEXT CHECK (VALUE ~ '^[^@]+@[^@]+$')` for reusable custom types with validation. Enforces constraints across tables.
- Composite types: `CREATE TYPE address AS (street TEXT, city TEXT, zip TEXT)` for structured data within columns. Access with `(col).field` syntax.
- JSONB: preferred over JSON; index with GIN. Use only for optional/semi-structured attrs. ONLY use JSON if the original ordering of the contents MUST be preserved.
- Vector types: `vector` type from `pgvector` for vector similarity search for embeddings.
Do not use the following data types
- DO NOT use `timestamp` (without time zone); DO use `timestamptz` instead.
- DO NOT use `char(n)` or `varchar(n)`; DO use `text` instead.
- DO NOT use `money` type; DO use `numeric` instead.
- DO NOT use `timetz` type; DO use `timestamptz` instead.
- DO NOT use `timestamptz(0)` or any other precision specification; DO use `timestamptz` instead.
- DO NOT use `serial` type; DO use `generated always as identity` instead.
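A table written only with the preferred types might look like this sketch (column names are illustrative):

```sql
CREATE TABLE products (
    product_id BIGINT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,  -- not serial
    name       TEXT NOT NULL CHECK (LENGTH(name) <= 200),        -- not varchar(200)
    price      NUMERIC(10,2) NOT NULL CHECK (price > 0),         -- not money
    created_at TIMESTAMPTZ NOT NULL DEFAULT now()                -- not timestamp
);
```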
Imported: Table Types
- Regular: default; fully durable, logged.
- TEMPORARY: session-scoped, auto-dropped, not logged. Faster for scratch work.
- UNLOGGED: persistent but not crash-safe. Faster writes; good for caches/staging.
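A quick sketch of the non-default table types (names assumed):

```sql
CREATE TEMPORARY TABLE scratch_ids (id BIGINT);    -- session-scoped, auto-dropped
CREATE UNLOGGED TABLE import_staging (raw JSONB);  -- fast writes, truncated on crash
```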
Imported: Row-Level Security
Enable with `ALTER TABLE tbl ENABLE ROW LEVEL SECURITY`. Create policies: `CREATE POLICY user_access ON orders FOR SELECT TO app_users USING (user_id = current_user_id())`. Built-in user-based access control at the row level.
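Expanded into a runnable sketch: current_user_id() in the imported text is an application-defined helper, replaced here with a session setting as one possible implementation; the app_users role is also assumed.

```sql
ALTER TABLE orders ENABLE ROW LEVEL SECURITY;

CREATE POLICY user_access ON orders
    FOR SELECT TO app_users
    USING (user_id = current_setting('app.user_id')::BIGINT);

-- The application sets the current user per session or transaction:
-- SET app.user_id = '42';
```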
Imported: Constraints
- PK: implicit UNIQUE + NOT NULL; creates a B-tree index.
- FK: specify `ON DELETE/UPDATE` action (`CASCADE`, `RESTRICT`, `SET NULL`, `SET DEFAULT`). Add explicit index on referencing column—speeds up joins and prevents locking issues on parent deletes/updates. Use `DEFERRABLE INITIALLY DEFERRED` for circular FK dependencies checked at transaction end.
- UNIQUE: creates a B-tree index; allows multiple NULLs unless `NULLS NOT DISTINCT` (PG15+). Standard behavior: `(1, NULL)` and `(1, NULL)` are allowed. With `NULLS NOT DISTINCT`: only one `(1, NULL)` allowed. Prefer `NULLS NOT DISTINCT` unless you specifically need duplicate NULLs.
- CHECK: row-local constraints; NULL values pass the check (three-valued logic). Example: `CHECK (price > 0)` allows NULL prices. Combine with `NOT NULL` to enforce: `price NUMERIC NOT NULL CHECK (price > 0)`.
- EXCLUDE: prevents overlapping values using operators. `EXCLUDE USING gist (room_id WITH =, booking_period WITH &&)` prevents double-booking rooms. Requires appropriate index type (often GiST).
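The EXCLUDE example expands to roughly this sketch; btree_gist is needed so the plain-equality column can participate in a GiST index:

```sql
CREATE EXTENSION IF NOT EXISTS btree_gist;

CREATE TABLE bookings (
    room_id        BIGINT NOT NULL,
    booking_period TSTZRANGE NOT NULL,
    EXCLUDE USING gist (room_id WITH =, booking_period WITH &&)
);
-- Overlapping periods for the same room now fail with a constraint error.
```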
Imported: Partitioning
- Use for very large tables (>100M rows) where queries consistently filter on partition key (often time/date).
- Alternate use: tables whose data maintenance dictates it, e.g. data pruned or bulk replaced periodically.
- RANGE: common for time-series (`PARTITION BY RANGE (created_at)`). Create partitions: `CREATE TABLE logs_2024_01 PARTITION OF logs FOR VALUES FROM ('2024-01-01') TO ('2024-02-01')`. TimescaleDB automates time-based or ID-based partitioning with retention policies and compression.
- LIST: for discrete values (`PARTITION BY LIST (region)`). Example: `FOR VALUES IN ('us-east', 'us-west')`.
- HASH: for even distribution when no natural key (`PARTITION BY HASH (user_id)`). Creates N partitions with modulus.
- Constraint exclusion: requires `CHECK` constraints on partitions for query planner to prune. Auto-created for declarative partitioning (PG10+).
- Prefer declarative partitioning or hypertables. Do NOT use table inheritance.
- Limitations: no global UNIQUE constraints—include partition key in PK/UNIQUE. FKs from partitioned tables not supported; use triggers.
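Putting the RANGE pattern and the PK limitation together, a declarative time-series partition might be sketched as (names illustrative):

```sql
CREATE TABLE logs (
    log_id     BIGINT GENERATED ALWAYS AS IDENTITY,
    created_at TIMESTAMPTZ NOT NULL,
    message    TEXT NOT NULL,
    PRIMARY KEY (log_id, created_at)  -- partition key must be part of the PK
) PARTITION BY RANGE (created_at);

CREATE TABLE logs_2024_01 PARTITION OF logs
    FOR VALUES FROM ('2024-01-01') TO ('2024-02-01');
```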
Imported: Special Considerations
Update-Heavy Tables
- Separate hot/cold columns—put frequently updated columns in separate table to minimize bloat.
- Use `fillfactor=90` to leave space for HOT updates that avoid index maintenance.
- Partition by update patterns—separate frequently updated rows in a different partition from stable data.
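The fillfactor advice translates to a one-line storage parameter; the table here is a hypothetical hot-counter case:

```sql
CREATE TABLE page_counters (
    page_id BIGINT PRIMARY KEY,
    hits    BIGINT NOT NULL DEFAULT 0
) WITH (fillfactor = 90);  -- leave 10% free per page for HOT updates

-- Existing tables can be adjusted (affects newly written pages/rewrites):
ALTER TABLE page_counters SET (fillfactor = 90);
```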
Insert-Heavy Workloads
- Minimize indexes—only create what you query; every index slows inserts.
- Use `COPY` or multi-row `INSERT` instead of single-row inserts.
- UNLOGGED tables for rebuildable staging data—much faster writes.
- Defer index creation for bulk loads—drop indexes, load data, recreate indexes.
- Partition by time/hash to distribute load. TimescaleDB automates partitioning and compression of insert-heavy data.
- Use a natural key for the primary key, such as (timestamp, device_id), if enforcing global uniqueness is important; many insert-heavy tables don't need a primary key at all.
- If you do need a surrogate key, prefer `BIGINT GENERATED ALWAYS AS IDENTITY` over `UUID`.
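The bulk-load bullets can be sketched as follows; the file path and columns are illustrative, and server-side COPY needs file-read privileges on the server (psql's \copy is the client-side alternative):

```sql
-- Bulk load into an UNLOGGED staging table.
CREATE UNLOGGED TABLE events_staging (occurred_at TIMESTAMPTZ, payload JSONB);

COPY events_staging (occurred_at, payload)
    FROM '/tmp/events.csv' WITH (FORMAT csv);

-- Or multi-row INSERT instead of many single-row statements:
INSERT INTO events_staging (occurred_at, payload) VALUES
    (now(), '{}'),
    (now(), '{}');
```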
Upsert-Friendly Design
- Requires UNIQUE index on conflict target columns—`ON CONFLICT (col1, col2)` needs exact matching unique index (partial indexes don't work).
- Use `EXCLUDED.column` to reference would-be-inserted values; only update columns that actually changed to reduce write overhead.
- `DO NOTHING` faster than `DO UPDATE` when no actual update needed.
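A minimal upsert sketch combining these points, reusing the earlier profiles example (assumed to have its primary key on user_id):

```sql
INSERT INTO profiles (user_id, attrs)
VALUES (42, '{"theme":"dark"}')
ON CONFLICT (user_id) DO UPDATE
    SET attrs = EXCLUDED.attrs
    -- Skip the write entirely when nothing actually changed:
    WHERE profiles.attrs IS DISTINCT FROM EXCLUDED.attrs;
```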
Safe Schema Evolution
- Transactional DDL: most DDL operations can run in transactions and be rolled back—`BEGIN; ALTER TABLE ...; ROLLBACK;` for safe testing.
- Concurrent index creation: `CREATE INDEX CONCURRENTLY` avoids blocking writes but can't run in transactions.
- Volatile defaults cause rewrites: adding `NOT NULL` columns with volatile defaults (e.g., `clock_timestamp()`, `gen_random_uuid()`) rewrites entire table. Non-volatile defaults are fast.
- Drop constraints before columns: `ALTER TABLE DROP CONSTRAINT` then `DROP COLUMN` to avoid dependency issues.
- Function signature changes: `CREATE OR REPLACE` with different arguments creates overloads, not replacements. DROP old version if no overload desired.
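For example, the concurrent-index rule in practice (index and table names assumed):

```sql
-- Must run outside a transaction block:
CREATE INDEX CONCURRENTLY orders_status_idx ON orders (status);

-- If the build fails it leaves an INVALID index behind; clean up with:
-- DROP INDEX CONCURRENTLY orders_status_idx;
```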
Imported: Generated Columns
`... GENERATED ALWAYS AS (<expr>) STORED` for computed, indexable fields. PG18+ adds `VIRTUAL` columns (computed on read, not stored).
Imported: Extensions
- `pgcrypto`: `crypt()` for password hashing.
- `uuid-ossp`: alternative UUID functions; prefer `pgcrypto` for new projects.
- `pg_trgm`: fuzzy text search with `%` operator, `similarity()` function. Index with GIN for `LIKE '%pattern%'` acceleration.
- `citext`: case-insensitive text type. Prefer expression indexes on `LOWER(col)` unless you need case-insensitive constraints.
- `btree_gin` / `btree_gist`: enable mixed-type indexes (e.g., GIN index on both JSONB and text columns).
- `hstore`: key-value pairs; mostly superseded by JSONB but useful for simple string mappings.
- `timescaledb`: essential for time-series—automated partitioning, retention, compression, continuous aggregates.
- `postgis`: comprehensive geospatial support beyond basic geometric types—essential for location-based applications.
- `pgvector`: vector similarity search for embeddings.
- `pgaudit`: audit logging for all database activity.
Imported: JSONB Guidance
- Prefer `JSONB` with GIN index.
- Default: `CREATE INDEX ON tbl USING GIN (jsonb_col);` accelerates:
  - Containment: `jsonb_col @> '{"k":"v"}'`
  - Key existence: `jsonb_col ? 'k'`, any/all keys: `?|` / `?&`
  - Path containment on nested docs
  - Disjunction: `jsonb_col @> ANY(ARRAY['{"status":"active"}', '{"status":"pending"}'])`
- Heavy `@>` workloads: consider opclass `jsonb_path_ops` for smaller/faster containment-only indexes: `CREATE INDEX ON tbl USING GIN (jsonb_col jsonb_path_ops);`
  - Trade-off: loses support for key existence (`?`, `?|`, `?&`) queries—only supports containment (`@>`)
- Equality/range on a specific scalar field: extract and index with B-tree (generated column or expression): `ALTER TABLE tbl ADD COLUMN price INT GENERATED ALWAYS AS ((jsonb_col->>'price')::INT) STORED; CREATE INDEX ON tbl (price);`
  - Prefer queries like `WHERE price BETWEEN 100 AND 500` (uses B-tree) over `WHERE (jsonb_col->>'price')::INT BETWEEN 100 AND 500` without index.
- Arrays inside JSONB: use GIN + `@>` for containment (e.g., tags). Consider `jsonb_path_ops` if only doing containment.
- Keep core relations in tables; use JSONB for optional/variable attributes.
- Use constraints to limit allowed JSONB values in a column, e.g. `config JSONB NOT NULL CHECK (jsonb_typeof(config) = 'object')`
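Pulling the JSONB guidance together, one possible sketch (table and field names illustrative):

```sql
CREATE TABLE docs (
    doc_id BIGINT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    body   JSONB NOT NULL DEFAULT '{}'
           CHECK (jsonb_typeof(body) = 'object')
);

-- Containment-only workload: jsonb_path_ops keeps the GIN index small.
CREATE INDEX docs_body_gin ON docs USING GIN (body jsonb_path_ops);

-- Served by the index:
SELECT doc_id FROM docs WHERE body @> '{"status":"active"}';
```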