# Awesome-omni-skill db-migration

Supabase migration patterns, RLS audit, schema validation. Guides safe DDL operations.

## Install

Source — clone the upstream repo:

```shell
git clone https://github.com/diegosouzapw/awesome-omni-skill
```

Claude Code — install into `~/.claude/skills/`:

```shell
T=$(mktemp -d) && git clone --depth=1 https://github.com/diegosouzapw/awesome-omni-skill "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/testing-security/db-migration" ~/.claude/skills/diegosouzapw-awesome-omni-skill-db-migration && rm -rf "$T"
```

Manifest: `skills/testing-security/db-migration/SKILL.md`
# Database Migration Skill

Supabase migration patterns, RLS audit, schema validation. Guides safe DDL operations against the Supabase database.

## When to Propose This Skill

Propose this skill when any of the following contexts is detected:

- User mentions migration, DDL, ALTER TABLE
- Supabase schema or RLS modification
- Adding/removing a column or index
- New module with its own tables
## 4-Phase Workflow (MANDATORY)

### Phase 1 — Analysis

- Identify the operation: CREATE TABLE, ALTER TABLE (ADD/DROP/RENAME COLUMN), CREATE INDEX, DROP TABLE, RLS policy
- Assess the impact:
  - Target table size: `SELECT pg_size_pretty(pg_total_relation_size('table_name'));`
  - Row count: `SELECT count(*) FROM table_name;`
  - Dependencies: FK, triggers, RPC functions, views
- Classify the risk:
| Operation | Risk | Lock | Potential downtime |
|---|---|---|---|
| ADD COLUMN (nullable) | Low | ACCESS EXCLUSIVE (instant) | 0 |
| ADD COLUMN (DEFAULT) | Medium | ACCESS EXCLUSIVE (full table rewrite before PG 11) | Minutes on a large table |
| DROP COLUMN | High | ACCESS EXCLUSIVE | Irreversible data loss |
| CREATE INDEX | Medium | SHARE lock (blocks writes) | Minutes on a large table |
| CREATE INDEX CONCURRENTLY | Low | No blocking lock | 0 (but slower) |
| DROP TABLE | Critical | ACCESS EXCLUSIVE | Complete loss |
| ALTER TYPE | High | ACCESS EXCLUSIVE (rewrite) | Minutes |
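When the PG 11 fast path does not apply (older Postgres, or a volatile default expression), adding a column with a DEFAULT on a large table can be split into lock-friendly steps. A sketch — the `orders` table and `status` column are hypothetical:

```sql
-- Step 1: add the column without a default (instant, no table rewrite)
ALTER TABLE orders ADD COLUMN status TEXT;

-- Step 2: backfill existing rows in batches (see Backfill Strategy below)
-- UPDATE orders SET status = 'pending' WHERE status IS NULL ...;

-- Step 3: attach the default for future rows only (no rewrite)
ALTER TABLE orders ALTER COLUMN status SET DEFAULT 'pending';

-- Step 4 (optional): enforce NOT NULL once the backfill is complete
ALTER TABLE orders ALTER COLUMN status SET NOT NULL;
```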
### Phase 2 — Plan

- Write the migration SQL in `backend/supabase/migrations/YYYYMMDD_description.sql`
- Write the rollback SQL (see the Rollback Templates section below)
- Prepare the before/after validations (see the Data Validation Queries section)
- Check the safe patterns (IF NOT EXISTS, BEGIN/COMMIT, etc.)
### Phase 3 — Execute

- Run the BEFORE validation queries
- Apply the migration via `mcp__supabase__apply_migration`
- Confirm success in `mcp__supabase__list_migrations`
- Run the AFTER validation queries
### Phase 4 — Verify

- Verify the schema → `mcp__supabase__list_tables`
- Verify RLS → `mcp__supabase__get_advisors(type: "security")`
- Verify indexes → `mcp__supabase__get_advisors(type: "performance")`
- Test the impacted application queries
## Rollback Templates

Prepare the rollback BEFORE executing the migration:

```sql
-- Rollback: ADD COLUMN
ALTER TABLE my_table DROP COLUMN IF EXISTS new_column;

-- Rollback: DROP COLUMN (IMPOSSIBLE without a backup)
-- ⚠️ Save the data FIRST:
-- CREATE TABLE _backup_my_table_col AS SELECT id, dropped_col FROM my_table;

-- Rollback: CREATE TABLE
DROP TABLE IF EXISTS my_table;

-- Rollback: CREATE INDEX
DROP INDEX IF EXISTS idx_name;

-- Rollback: ALTER TYPE
ALTER TABLE my_table ALTER COLUMN col TYPE old_type USING col::old_type;

-- Rollback: RLS policy
DROP POLICY IF EXISTS policy_name ON my_table;
```

Rule: if the rollback is impossible (DROP COLUMN, DROP TABLE), require user confirmation before executing.
## Backfill Strategy (large tables)

For tables > 100K rows, use batching:

```sql
-- Backfill in batches of 10K rows (assumes an `id` primary key)
DO $$
DECLARE
  batch_size INT := 10000;
  total_updated INT := 0;
  rows_affected INT;
BEGIN
  LOOP
    -- UPDATE has no LIMIT clause in PostgreSQL; batch via a keyed subquery
    UPDATE my_table
    SET new_column = compute_value(old_column)
    WHERE id IN (
      SELECT id FROM my_table
      WHERE new_column IS NULL
      LIMIT batch_size
    );
    GET DIAGNOSTICS rows_affected = ROW_COUNT;
    total_updated := total_updated + rows_affected;
    RAISE NOTICE 'Updated % rows (total: %)', rows_affected, total_updated;
    EXIT WHEN rows_affected = 0;
    PERFORM pg_sleep(0.1);  -- Pause to avoid saturating the database
  END LOOP;
END $$;
```

Indexes on large tables: always use `CONCURRENTLY`:

```sql
-- Note: CONCURRENTLY cannot run inside a transaction block
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_name ON my_table (column);
```
## Data Validation Queries

Run BEFORE and AFTER the migration:

```sql
-- Row counts
SELECT count(*) AS row_count FROM my_table;

-- Null check on the new column
SELECT
  count(*) FILTER (WHERE new_col IS NULL) AS nulls,
  count(*) FILTER (WHERE new_col IS NOT NULL) AS filled
FROM my_table;

-- Constraint check
SELECT conname, contype FROM pg_constraint WHERE conrelid = 'my_table'::regclass;

-- Index check
SELECT indexname, indexdef FROM pg_indexes WHERE tablename = 'my_table';

-- RLS check
SELECT polname, polcmd, polroles FROM pg_policy WHERE polrelid = 'my_table'::regclass;
```
## Migration File Location

`backend/supabase/migrations/YYYYMMDD_description.sql`
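A migration file following this convention might look like the sketch below; the file name, table, and column are hypothetical:

```sql
-- 20250115_add_status_to_orders.sql
-- Purpose: add a nullable status column to orders
-- Rollback: ALTER TABLE orders DROP COLUMN IF EXISTS status;

BEGIN;

ALTER TABLE orders ADD COLUMN IF NOT EXISTS status TEXT;

COMMENT ON COLUMN orders.status IS 'Order lifecycle status';

COMMIT;
```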
## Pre-Migration Checklist

- Test the SQL in the dev environment first (use `mcp__supabase__execute_sql`)
- Verify no breaking changes to existing RPC functions
- Check the RLS impact on existing queries
- Verify key access patterns — service_role (backend) vs anon (frontend)
- Run advisors after migration — `mcp__supabase__get_advisors` for security + performance
## Safe Patterns

```sql
-- Always use IF NOT EXISTS / IF EXISTS
CREATE TABLE IF NOT EXISTS my_table (...);
CREATE INDEX IF NOT EXISTS idx_name ON my_table (column);
DROP TABLE IF EXISTS old_table;

-- Wrap multi-statement migrations
BEGIN;
ALTER TABLE my_table ADD COLUMN new_col TEXT;
CREATE INDEX idx_new ON my_table (new_col);
COMMIT;

-- Add comments explaining purpose
COMMENT ON TABLE my_table IS 'Description of table purpose';
```
## RLS Audit Patterns

- Every new table MUST have RLS enabled: `ALTER TABLE my_table ENABLE ROW LEVEL SECURITY;`
- Verify policies cover: SELECT, INSERT, UPDATE, DELETE
- Backend uses the `service_role` key (bypasses RLS)
- Frontend uses the `anon` key (subject to RLS)
- After adding RLS, test with both keys
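As a sketch of policies covering all four commands — the table and its `user_id` column are hypothetical, and `auth.uid()` is Supabase's helper for the authenticated user's id:

```sql
ALTER TABLE my_table ENABLE ROW LEVEL SECURITY;

-- Users can read only their own rows
CREATE POLICY my_table_select ON my_table
  FOR SELECT USING (auth.uid() = user_id);

-- Users can insert rows only for themselves
CREATE POLICY my_table_insert ON my_table
  FOR INSERT WITH CHECK (auth.uid() = user_id);

-- Users can update only their own rows, and cannot reassign them
CREATE POLICY my_table_update ON my_table
  FOR UPDATE USING (auth.uid() = user_id) WITH CHECK (auth.uid() = user_id);

-- Users can delete only their own rows
CREATE POLICY my_table_delete ON my_table
  FOR DELETE USING (auth.uid() = user_id);
```

Queries made with the `service_role` key bypass these policies entirely, which is why testing with both keys matters.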
## MCP Tools Available

| Tool | Use |
|---|---|
| `mcp__supabase__apply_migration` | DDL operations (CREATE, ALTER, DROP) |
| `mcp__supabase__execute_sql` | DML/queries (SELECT, INSERT, UPDATE) |
| `mcp__supabase__list_tables` | Verify schema after changes |
| `mcp__supabase__get_advisors` | Security + performance check |
| `mcp__supabase__list_migrations` | Check existing migrations |
## Anti-Patterns (BLOCK)

- `DROP TABLE` without backup/confirmation
- `ALTER TABLE` with data-loss potential (dropping columns with data)
- Disabling RLS on tables with user data
- Running DDL directly via `execute_sql` (use `apply_migration` for the audit trail)
- Missing `IF NOT EXISTS` on CREATE statements
- Forgetting to add RLS policies on new tables
## Output Format

```markdown
## Migration Report — [migration_name]

### Phase 1 — Analysis
- Target table(s): [list]
- Size: [size] / [row_count] rows
- Operation: [type]
- Risk: Low / Medium / High / Critical

### Phase 2 — Plan
- Migration SQL: [summary]
- Rollback SQL: [summary]
- Prepared validations: [N] queries

### Phase 3 — Execution
- Migration applied: YES/NO
- Version: [timestamp]

### Phase 4 — Verification
| Check | Status |
|-------|--------|
| Schema correct | PASS/FAIL |
| RLS advisors | PASS/FAIL |
| Performance advisors | PASS/FAIL |
| Application queries | PASS/FAIL |
```
## Interaction with Other Skills

| Skill | Direction | Trigger |
|---|---|---|
| | ← receives | Schema modifications |
| | ← receives | Detects migration files in the diff |