Claude-skill-registry context-database-schema
```shell
git clone https://github.com/majiayu000/claude-skill-registry
T=$(mktemp -d) && git clone --depth=1 https://github.com/majiayu000/claude-skill-registry "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/data/context-database-schema" ~/.claude/skills/majiayu000-claude-skill-registry-context-database-schema && rm -rf "$T"
```
skills/data/context-database-schema/SKILL.md

Database Schema
Navigate Supabase database schema, 86+ migrations, and type definitions.
Overview
PostgreSQL schema via Supabase with RLS policies. See
docs/database/SCHEMA.md.
Database Access
⚠️ CRITICAL: Must be in Railway shell for all database operations
```shell
# Verify environment first
echo $RAILWAY_ENVIRONMENT   # Must be non-empty

# If empty, enter Railway shell:
railway shell
```
Common Queries:
```shell
# Quick introspection
psql "$DATABASE_URL" -c "\dt"           # List all tables
psql "$DATABASE_URL" -c "\d users"      # Describe users table
psql "$DATABASE_URL" -c "\d+ accounts"  # Detailed table info with indexes

# Core tables
psql "$DATABASE_URL" -c "SELECT * FROM users LIMIT 5;"
psql "$DATABASE_URL" -c "SELECT * FROM accounts WHERE user_id = 'user_xxx';"
psql "$DATABASE_URL" -c "SELECT * FROM holdings WHERE account_id = 'acc_xxx';"
```
Migrations:
```shell
# Inside railway shell + supabase directory
cd supabase
supabase db push                        # Apply local migrations (see health check below)
supabase db diff -f new_migration_name  # Generate migration from changes
```
See the AGENTS.md "Database Access" section for the complete guide.
Migrations
- Location: `supabase/migrations/` - all 86+ migrations (sequential)
- Format: `YYYYMMDD_description.sql`
- Key prefixes: `*clerk*`, `*plaid*`, `*eodhd*`
End-to-end Checklist (schema changes)
Any time you change schema (tables, columns, indexes, RLS) or add Supabase migrations:
1. Work in Railway dev shell
   - Confirm: `echo $RAILWAY_ENVIRONMENT` is non-empty.
   - Use the dev `DATABASE_URL` and `SUPABASE_PROJECT_ID`.
2. Apply migrations to dev
   ```shell
   cd supabase
   supabase db push
   ```
   - If this fails on old migrations, follow the health check flow below (registry repair or idempotent DDL) before proceeding.
3. Regenerate types and manifest
   ```shell
   # From repo root, still in Railway shell
   export SUPABASE_PROJECT_ID=klrrntdswlvjdqusahdk  # or injected value
   make schema:generate
   ```
   - This must update:
     - `supabase/types/database.types.ts`
     - `supabase/generated/schema_manifest.json`
     - `backend/schemas/generated/**`
4. Verify schema parity
   ```shell
   cd backend
   PYTHONPATH=. poetry run python ../scripts/verify_generated_schemas.py
   ```
   - This is the same check Tier 2 Auth Stub uses in CI.
   - Fix any reported mismatches (missing columns, wrong nullability) before committing.
5. Commit everything together
   - In a feature branch, include:
     - `supabase/migrations/**` changes
     - `supabase/schemas/**` changes
     - Regenerated types and manifests
     - Updated `backend/schemas/generated/**`
   - Do not split schema SQL and generated artifacts into separate feature branches; they must land atomically.
Migration Health Check (bd-k1c learnings)
Before adding or merging new migrations:
1. Verify registry vs schema
   - Run in Railway shell:
     ```shell
     cd supabase
     supabase db push
     ```
   - If it fails on old migrations (tables/triggers already exist), it means the schema was initialized by `golden_schema.sql` / `all_migrations.sql` / manual SQL and the migration registry is behind.
2. If `db push` fails on old migrations
   - Do not hack the schema via ad-hoc SQL.
   - Instead, repair the registry or make old migrations idempotent:
     - Option A (registry repair): mark older versions as applied in `supabase_migrations.schema_migrations` (see `supabase/scripts/fix_migration_registry_bd_k1c.sql` for the bd-k1c repair).
     - Option B (idempotent migrations): wrap non-idempotent DDL (e.g. `CREATE TRIGGER`) in `IF NOT EXISTS` guards so replaying them is safe.
   - Re-run `supabase db push` after repair; only then add/merge new migrations.
3. New migrations (forward-only rule)
   - Prefer `CREATE TABLE IF NOT EXISTS`, `ADD COLUMN IF NOT EXISTS`, and `CREATE INDEX IF NOT EXISTS` when possible to make replays safe.
   - For new migrations you add from now on, use a unique timestamp prefix per file (e.g. `20251206152000_...`).
   - RPC functions: ALWAYS provide default values for arguments (e.g. `filter jsonb DEFAULT '{}'`) to avoid signature mismatches with standard backends.
   - Standard scrape/doc schema: `raw_scrapes` MUST have `storage_uri` (text); `documents` MUST have `source` (text).
4. Dev/test-data migrations hygiene
   - Dev/test-data migrations must live under `supabase/dev_migrations/`, NOT `supabase/migrations/`.
   - Schema migrations (DDL, RLS, indexes, FKs) go in `supabase/migrations/`.
   - Test-data seeding goes in `scripts/db-commands/` or `supabase/dev_migrations/`.
   - Do not use `supabase db push --include-all` on historical migrations; treat it as a debugging tool only.
   - See `supabase/dev_migrations/README.md` for usage.
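To illustrate Option B and the forward-only rule, a replay-safe migration might look like the sketch below. It uses the documented `raw_scrapes` / `storage_uri` convention, but the trigger name and the `set_updated_at()` function are hypothetical, not taken from the actual schema:

```sql
-- Sketch of an idempotent migration: every statement is a no-op on re-run.
CREATE TABLE IF NOT EXISTS raw_scrapes (
    id          uuid PRIMARY KEY DEFAULT gen_random_uuid(),
    storage_uri text NOT NULL,
    created_at  timestamptz NOT NULL DEFAULT now()
);

ALTER TABLE raw_scrapes ADD COLUMN IF NOT EXISTS source text;

CREATE INDEX IF NOT EXISTS idx_raw_scrapes_created_at
    ON raw_scrapes (created_at DESC);

-- CREATE TRIGGER has no IF NOT EXISTS clause, so guard it manually
-- (trigger and function names here are illustrative):
DO $$
BEGIN
    IF NOT EXISTS (
        SELECT 1 FROM pg_trigger WHERE tgname = 'trg_raw_scrapes_updated_at'
    ) THEN
        CREATE TRIGGER trg_raw_scrapes_updated_at
            BEFORE UPDATE ON raw_scrapes
            FOR EACH ROW EXECUTE FUNCTION set_updated_at();
    END IF;
END $$;
```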
Migration Registry Repair (When CLI Fails)
If `supabase db push` fails with "Remote migration versions not found" (drift), do NOT run manual SQL in the Dashboard. This creates a vicious cycle.

Fix:
- Ensure `DATABASE_URL` is set in Railway (required for the CLI).
- Run repair to sync the registry with local files:
  ```shell
  # In Railway shell
  supabase migration repair --status applied <version_id>
  # Or for a batch:
  supabase migration repair --status applied 20251129... 20251204...
  ```
- Then run `supabase db push` for new migrations.
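Conceptually, marking a version as applied just records it in the CLI's registry table so the CLI stops trying to replay it. A repair along the lines of `supabase/scripts/fix_migration_registry_bd_k1c.sql` amounts to something like the sketch below (version strings are illustrative, and the CLI's `migration repair` command is the supported way to do this):

```sql
-- Conceptual sketch only: record migration versions as already applied.
INSERT INTO supabase_migrations.schema_migrations (version)
VALUES ('20251129000000'), ('20251204000000')
ON CONFLICT (version) DO NOTHING;
```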
Schema Definitions
- `supabase/schemas/public/` - Table definitions
- `supabase/schemas/public/tables/` - Individual table files
Type Generation
- `supabase/types/database.types.ts` - Generated TypeScript types
- Generate via: `supabase gen types typescript`
Backend Types
- `backend/schemas/generated/` - Generated Python types (if any)
Scripts
- `scripts/db-commands/` - Database utilities
- `backend/migrations/versions/` - Alembic migrations (if used)
Key Tables and Recent Changes
Holdings Table (`public.holdings`)

Core columns: `id`, `account_id`, `security_id`, `quantity`, `cost_basis`, `created_at`, `updated_at`, `closed_at`
Active vs Closed Holdings (bd-k1c.4):
- Active holdings: `closed_at IS NULL` - current portfolio positions
- Closed positions: `closed_at IS NOT NULL`, `quantity` conventionally `0`
- Plaid pipeline soft-closes positions that disappear from broker snapshots (doesn't delete)
- Manual holdings are NOT auto-closed by provider sync

Index:
- Partial index `idx_holdings_closed_at ON holdings(closed_at) WHERE closed_at IS NULL` for efficient active-holdings queries

Current portfolio views: filter with `WHERE closed_at IS NULL`.
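As a sketch of these conventions (the `'acc_xxx'` / `'hld_xxx'` placeholders follow the doc's query examples; the actual soft-close logic lives in the Plaid sync pipeline, not in ad-hoc SQL):

```sql
-- Active positions for one account; the partial index on closed_at
-- (WHERE closed_at IS NULL) can serve this filter efficiently.
SELECT id, security_id, quantity, cost_basis
FROM holdings
WHERE account_id = 'acc_xxx'
  AND closed_at IS NULL;

-- Soft-close (rather than delete) a position that left the broker snapshot:
UPDATE holdings
SET quantity = 0, closed_at = now()
WHERE id = 'hld_xxx'
  AND closed_at IS NULL;
```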
Holdings Snapshots Table (`public.holdings_snapshots`)

Purpose: Append-only time-series snapshots for historical portfolio analytics (bd-k1c.6)

Core columns:
- `snapshot_at` (TIMESTAMPTZ) - snapshot time, typically daily at market close
- `user_id`, `account_id`, `security_id`
- `quantity`, `cost_basis`, `market_value`, `price_source`

Key constraints:
- `UNIQUE (snapshot_at, account_id, security_id)` for idempotency
- Indexes on `(user_id, snapshot_at DESC)`, `(account_id, snapshot_at DESC)`, `(security_id, snapshot_at DESC)`

Relationship to holdings:
- Snapshots derived from active holdings + price data
- Snapshot job: `backend/scripts/create_holdings_snapshot.py`
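The unique constraint is what makes the snapshot job safely re-runnable. The sketch below shows the idea only; it is not the actual query in `create_holdings_snapshot.py`, the `accounts.id`/`accounts.user_id` join is assumed from the query examples above, and the price value and `'eodhd'` source are placeholders:

```sql
-- Re-running the job for the same snapshot_at is a no-op thanks to
-- UNIQUE (snapshot_at, account_id, security_id).
INSERT INTO holdings_snapshots
    (snapshot_at, user_id, account_id, security_id,
     quantity, cost_basis, market_value, price_source)
SELECT date_trunc('day', now()), a.user_id, h.account_id, h.security_id,
       h.quantity, h.cost_basis,
       h.quantity * 123.45,   -- price lookup elided; real job joins price data
       'eodhd'                -- illustrative price_source value
FROM holdings h
JOIN accounts a ON a.id = h.account_id
WHERE h.closed_at IS NULL     -- snapshots derive from active holdings only
ON CONFLICT (snapshot_at, account_id, security_id) DO NOTHING;
```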
Provider Security Mappings (`public.provider_security_mappings`)

Purpose: Map provider security IDs to canonical securities

Natural key (bd-k1c.3):
- `UNIQUE (brokerage_connection_id, provider_security_id)`
- `provider_security_id VARCHAR(255)` - stable provider-side identifier (e.g., Plaid `security_id`)
- `provider_payload JSONB` - retained for audit, NOT in uniqueness constraint

Index:
- `idx_provider_security_mappings_provider_security_id` for fast lookups

Used by:
- `RawDataService.get_existing_security_mapping`
- `SecurityResolver._link_provider_mapping` (upserts on natural key)
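An upsert on the natural key might look like the sketch below. The `canonical_security_id` column name and the placeholder IDs are assumptions for illustration; only the natural-key columns and `provider_payload` are documented above:

```sql
-- Upsert keyed on (brokerage_connection_id, provider_security_id);
-- provider_payload is refreshed for audit but never part of uniqueness.
INSERT INTO provider_security_mappings
    (brokerage_connection_id, provider_security_id,
     canonical_security_id, provider_payload)
VALUES ('conn_xxx', 'plaid_sec_xxx', 'sec_xxx', '{"ticker": "AAPL"}'::jsonb)
ON CONFLICT (brokerage_connection_id, provider_security_id)
DO UPDATE SET
    canonical_security_id = EXCLUDED.canonical_security_id,
    provider_payload      = EXCLUDED.provider_payload;
```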
Recent bd-k1c Changes
The bd-k1c epic (Plaid portfolio pipeline hardening) introduced several schema enhancements:
- Holdings soft-close semantics (`closed_at` column) - distinguishes active vs closed positions
- Time-series snapshots (`holdings_snapshots` table) - enables historical analytics
- Provider mapping refinement (`provider_security_id` natural key) - more robust brokerage integrations

See docs/bd-k1c/EPIC_OVERVIEW.md for full context and child features.
Documentation
- Internal: docs/database/SCHEMA.md
Related Areas
- See `context-clerk-integration` for RLS patterns
- See `context-plaid-integration` for the `plaid_prices` table and provider mappings
- See `context-symbol-resolution` for the `securities` table
- See `context-portfolio` for holdings views and analytics