Skillshub database-migrations
Database migration best practices for schema changes, data migrations, rollbacks, and zero-downtime deployments across PostgreSQL, MySQL, and common ORMs (Prisma, Drizzle, Kysely, Django, TypeORM, golang-migrate).
Install
Source · Clone the upstream repo:
```bash
git clone https://github.com/ComeOnOliver/skillshub
```
Claude Code · Install into ~/.claude/skills/:
```bash
T=$(mktemp -d) && git clone --depth=1 https://github.com/ComeOnOliver/skillshub "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/affaan-m/everything-claude-code/database-migrations" ~/.claude/skills/comeonoliver-skillshub-database-migrations && rm -rf "$T"
```
Manifest: skills/affaan-m/everything-claude-code/database-migrations/SKILL.md
Database Migration Patterns
Safe, reversible database schema changes for production systems.
When to Activate
- Creating or altering database tables
- Adding/removing columns or indexes
- Running data migrations (backfill, transform)
- Planning zero-downtime schema changes
- Setting up migration tooling for a new project
Core Principles
- Every change is a migration — never alter production databases manually
- Migrations are forward-only in production — rollbacks use new forward migrations
- Schema and data migrations are separate — never mix DDL and DML in one migration
- Test migrations against production-sized data — a migration that works on 100 rows may lock on 10M
- Migrations are immutable once deployed — never edit a migration that has run in production
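The immutability rule can be enforced mechanically by recording a checksum of each migration at apply time and comparing it on every deploy. A minimal sketch, assuming applied checksums are stored somewhere like a `schema_migrations` table (here a plain dict stands in for that table; the function names are illustrative, not any tool's API):

```python
import hashlib


def checksum(sql_text: str) -> str:
    """Stable fingerprint of a migration file's contents."""
    return hashlib.sha256(sql_text.encode("utf-8")).hexdigest()


def verify_applied(applied: dict[str, str], on_disk: dict[str, str]) -> list[str]:
    """Return names of applied migrations whose files changed after deploy.

    `applied` maps migration name -> checksum recorded at apply time;
    `on_disk` maps migration name -> current file contents.
    """
    drifted = []
    for name, recorded in applied.items():
        current = on_disk.get(name)
        # A missing or edited file means the history no longer matches
        # what actually ran in production.
        if current is None or checksum(current) != recorded:
            drifted.append(name)
    return drifted


# A migration edited after it ran in production is flagged:
applied = {"001_init": checksum("CREATE TABLE users (id INT);")}
edited = {"001_init": "CREATE TABLE users (id BIGINT);"}
assert verify_applied(applied, edited) == ["001_init"]
```

Several migration tools (e.g. Flyway) do a comparable checksum validation out of the box; the point is to fail the deploy, not to trust convention.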
Migration Safety Checklist
Before applying any migration:
- Migration has both UP and DOWN (or is explicitly marked irreversible)
- No full table locks on large tables (use concurrent operations)
- New columns have defaults or are nullable (never add NOT NULL without default)
- Indexes created concurrently (not inline with CREATE TABLE for existing tables)
- Data backfill is a separate migration from schema change
- Tested against a copy of production data
- Rollback plan documented
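Parts of this checklist can be linted in CI before a human ever reviews the migration. A rough sketch using regexes (a heuristic, not a SQL parser; the two patterns shown are illustrative assumptions, not an exhaustive rule set):

```python
import re

# Each entry: (pattern over a single SQL line, warning message)
UNSAFE_PATTERNS = [
    # ADD COLUMN ... NOT NULL without a DEFAULT fails or forces a backfill
    (re.compile(r"ADD\s+COLUMN\s+\w+\s+\w+.*NOT\s+NULL(?!.*DEFAULT)", re.I),
     "NOT NULL column added without a default"),
    # CREATE INDEX without CONCURRENTLY blocks writes on large tables
    (re.compile(r"CREATE\s+(UNIQUE\s+)?INDEX\s+(?!CONCURRENTLY)", re.I),
     "index created without CONCURRENTLY"),
]


def lint_migration(sql: str) -> list[str]:
    """Return one warning per unsafe pattern found in the migration SQL."""
    return [msg for pattern, msg in UNSAFE_PATTERNS
            for line in sql.splitlines() if pattern.search(line)]


warnings = lint_migration("CREATE INDEX idx_users_email ON users (email);")
assert warnings == ["index created without CONCURRENTLY"]
assert lint_migration(
    "CREATE INDEX CONCURRENTLY idx_users_email ON users (email);") == []
```

Wire this into CI so a failing lint blocks the merge; exceptions can be allowed with an explicit inline marker the script recognizes.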
PostgreSQL Patterns
Adding a Column Safely
```sql
-- GOOD: Nullable column, no lock
ALTER TABLE users ADD COLUMN avatar_url TEXT;

-- GOOD: Column with default (Postgres 11+ is instant, no table rewrite)
ALTER TABLE users ADD COLUMN is_active BOOLEAN NOT NULL DEFAULT true;

-- BAD: NOT NULL without default on an existing table
ALTER TABLE users ADD COLUMN role TEXT NOT NULL;
-- This fails outright on a non-empty table (existing rows would violate
-- the constraint); add the column nullable, backfill, then add NOT NULL
```
Adding an Index Without Downtime
```sql
-- BAD: Blocks writes on large tables
CREATE INDEX idx_users_email ON users (email);

-- GOOD: Non-blocking, allows concurrent writes
CREATE INDEX CONCURRENTLY idx_users_email ON users (email);
-- Note: CONCURRENTLY cannot run inside a transaction block
-- Most migration tools need special handling for this
```
Renaming a Column (Zero-Downtime)
Never rename directly in production. Use the expand-contract pattern:
```sql
-- Step 1: Add new column (migration 001)
ALTER TABLE users ADD COLUMN display_name TEXT;

-- Step 2: Backfill data (migration 002, data migration)
UPDATE users SET display_name = username WHERE display_name IS NULL;

-- Step 3: Update application code to read/write both columns
-- Deploy application changes

-- Step 4: Stop writing to old column, drop it (migration 003)
ALTER TABLE users DROP COLUMN username;
```
Removing a Column Safely
```sql
-- Step 1: Remove all application references to the column
-- Step 2: Deploy application without the column reference
-- Step 3: Drop column in next migration
ALTER TABLE orders DROP COLUMN legacy_status;

-- For Django: use SeparateDatabaseAndState to remove from model
-- without generating DROP COLUMN (then drop in next migration)
```
Large Data Migrations
```sql
-- BAD: Updates all rows in one transaction (locks table)
UPDATE users SET normalized_email = LOWER(email);

-- GOOD: Batch update with progress
-- (COMMIT inside a DO block requires Postgres 11+ and the block must not
-- run inside an outer transaction)
DO $$
DECLARE
  batch_size INT := 10000;
  rows_updated INT;
BEGIN
  LOOP
    UPDATE users
    SET normalized_email = LOWER(email)
    WHERE id IN (
      SELECT id FROM users
      WHERE normalized_email IS NULL
      LIMIT batch_size
      FOR UPDATE SKIP LOCKED
    );
    GET DIAGNOSTICS rows_updated = ROW_COUNT;
    RAISE NOTICE 'Updated % rows', rows_updated;
    EXIT WHEN rows_updated = 0;
    COMMIT;
  END LOOP;
END $$;
```
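The same commit-per-batch shape works from application code when you would rather drive the backfill from a script than from PL/pgSQL. A self-contained sketch against SQLite as a stand-in for your real database (SQLite has no SKIP LOCKED, so this only demonstrates the batching and short-transaction structure):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT, normalized_email TEXT)")
conn.executemany("INSERT INTO users (email) VALUES (?)",
                 [(f"User{i}@Example.com",) for i in range(25)])
conn.commit()

BATCH_SIZE = 10  # tiny for demonstration; use thousands in practice

while True:
    # One short transaction per batch keeps lock durations bounded
    cur = conn.execute(
        "UPDATE users SET normalized_email = LOWER(email) "
        "WHERE id IN (SELECT id FROM users WHERE normalized_email IS NULL LIMIT ?)",
        (BATCH_SIZE,),
    )
    conn.commit()
    if cur.rowcount == 0:  # no rows left to backfill
        break

remaining = conn.execute(
    "SELECT COUNT(*) FROM users WHERE normalized_email IS NULL").fetchone()[0]
print(remaining)  # 0 once the backfill is complete
```

Against PostgreSQL you would add `FOR UPDATE SKIP LOCKED` to the subquery, as in the SQL version, so concurrent writers are skipped rather than blocked.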
Prisma (TypeScript/Node.js)
Workflow
```bash
# Create migration from schema changes
npx prisma migrate dev --name add_user_avatar

# Apply pending migrations in production
npx prisma migrate deploy

# Reset database (dev only)
npx prisma migrate reset

# Generate client after schema changes
npx prisma generate
```
Schema Example
```prisma
model User {
  id        String   @id @default(cuid())
  email     String   @unique
  name      String?
  avatarUrl String?  @map("avatar_url")
  createdAt DateTime @default(now()) @map("created_at")
  updatedAt DateTime @updatedAt @map("updated_at")
  orders    Order[]

  @@map("users")
  @@index([email])
}
```
Custom SQL Migration
For operations Prisma cannot express (concurrent indexes, data backfills):
```bash
# Create empty migration, then edit the SQL manually
npx prisma migrate dev --create-only --name add_email_index
```
```sql
-- migrations/20240115_add_email_index/migration.sql
-- Prisma cannot generate CONCURRENTLY, so we write it manually
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_users_email ON users (email);
```
Drizzle (TypeScript/Node.js)
Workflow
```bash
# Generate migration from schema changes
npx drizzle-kit generate

# Apply migrations
npx drizzle-kit migrate

# Push schema directly (dev only, no migration file)
npx drizzle-kit push
```
Schema Example
```typescript
import { pgTable, text, timestamp, uuid, boolean } from "drizzle-orm/pg-core";

export const users = pgTable("users", {
  id: uuid("id").primaryKey().defaultRandom(),
  email: text("email").notNull().unique(),
  name: text("name"),
  isActive: boolean("is_active").notNull().default(true),
  createdAt: timestamp("created_at").notNull().defaultNow(),
  updatedAt: timestamp("updated_at").notNull().defaultNow(),
});
```
Kysely (TypeScript/Node.js)
Workflow (kysely-ctl)
```bash
# Initialize config file (kysely.config.ts)
kysely init

# Create a new migration file
kysely migrate make add_user_avatar

# Apply all pending migrations
kysely migrate latest

# Rollback last migration
kysely migrate down

# Show migration status
kysely migrate list
```
Migration File
```typescript
// migrations/2024_01_15_001_create_user_profile.ts
import { type Kysely, sql } from 'kysely'

// IMPORTANT: Always use Kysely<any>, not your typed DB interface.
// Migrations are frozen in time and must not depend on current schema types.
export async function up(db: Kysely<any>): Promise<void> {
  await db.schema
    .createTable('user_profile')
    .addColumn('id', 'serial', (col) => col.primaryKey())
    .addColumn('email', 'varchar(255)', (col) => col.notNull().unique())
    .addColumn('avatar_url', 'text')
    .addColumn('created_at', 'timestamp', (col) =>
      col.defaultTo(sql`now()`).notNull()
    )
    .execute()

  await db.schema
    .createIndex('idx_user_profile_avatar')
    .on('user_profile')
    .column('avatar_url')
    .execute()
}

export async function down(db: Kysely<any>): Promise<void> {
  await db.schema.dropTable('user_profile').execute()
}
```
Programmatic Migrator
```typescript
import { Migrator, FileMigrationProvider } from 'kysely'
import { promises as fs } from 'fs'
import * as path from 'path'

// ESM only — CJS can use __dirname directly
import { fileURLToPath } from 'url'

const migrationFolder = path.join(
  path.dirname(fileURLToPath(import.meta.url)),
  './migrations',
)

// `db` is your Kysely<any> database instance
const migrator = new Migrator({
  db,
  provider: new FileMigrationProvider({
    fs,
    path,
    migrationFolder,
  }),
  // WARNING: Only enable in development. Disables timestamp-ordering
  // validation, which can cause schema drift between environments.
  // allowUnorderedMigrations: true,
})

const { error, results } = await migrator.migrateToLatest()

results?.forEach((it) => {
  if (it.status === 'Success') {
    console.log(`migration "${it.migrationName}" executed successfully`)
  } else if (it.status === 'Error') {
    console.error(`failed to execute migration "${it.migrationName}"`)
  }
})

if (error) {
  console.error('migration failed', error)
  process.exit(1)
}
```
Django (Python)
Workflow
```bash
# Generate migration from model changes
python manage.py makemigrations

# Apply migrations
python manage.py migrate

# Show migration status
python manage.py showmigrations

# Generate empty migration for custom SQL
python manage.py makemigrations --empty app_name -n description
```
Data Migration
```python
from django.db import migrations


def backfill_display_names(apps, schema_editor):
    # Use the historical model, never a direct import of the current one
    User = apps.get_model("accounts", "User")
    batch_size = 5000
    # Exclude rows whose username is also empty; copying "" over ""
    # would never shrink the queryset and the loop would never terminate
    users = User.objects.filter(display_name="").exclude(username="")
    while users.exists():  # lazy queryset: re-evaluated each iteration
        batch = list(users[:batch_size])
        for user in batch:
            user.display_name = user.username
        User.objects.bulk_update(batch, ["display_name"], batch_size=batch_size)


def reverse_backfill(apps, schema_editor):
    pass  # Data migration, no reverse needed


class Migration(migrations.Migration):
    dependencies = [("accounts", "0015_add_display_name")]
    operations = [
        migrations.RunPython(backfill_display_names, reverse_backfill),
    ]
```
SeparateDatabaseAndState
Remove a column from the Django model without dropping it from the database immediately:
```python
class Migration(migrations.Migration):
    operations = [
        migrations.SeparateDatabaseAndState(
            state_operations=[
                migrations.RemoveField(model_name="user", name="legacy_field"),
            ],
            database_operations=[],  # Don't touch the DB yet
        ),
    ]
```
golang-migrate (Go)
Workflow
```bash
# Create migration pair
migrate create -ext sql -dir migrations -seq add_user_avatar

# Apply all pending migrations
migrate -path migrations -database "$DATABASE_URL" up

# Rollback last migration
migrate -path migrations -database "$DATABASE_URL" down 1

# Force version (fix dirty state)
migrate -path migrations -database "$DATABASE_URL" force VERSION
```
Migration Files
```sql
-- migrations/000003_add_user_avatar.up.sql
ALTER TABLE users ADD COLUMN avatar_url TEXT;

-- Note: CREATE INDEX CONCURRENTLY cannot run inside a transaction block;
-- if your setup wraps each file in one, put the index in its own migration
CREATE INDEX CONCURRENTLY idx_users_avatar
  ON users (avatar_url) WHERE avatar_url IS NOT NULL;

-- migrations/000003_add_user_avatar.down.sql
DROP INDEX IF EXISTS idx_users_avatar;
ALTER TABLE users DROP COLUMN IF EXISTS avatar_url;
```
Zero-Downtime Migration Strategy
For critical production changes, follow the expand-contract pattern:
Phase 1: EXPAND
- Add new column/table (nullable or with default)
- Deploy: app writes to BOTH old and new
- Backfill existing data

Phase 2: MIGRATE
- Deploy: app reads from NEW, writes to BOTH
- Verify data consistency

Phase 3: CONTRACT
- Deploy: app only uses NEW
- Drop old column/table in a separate migration
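The dual-write and phase-dependent read paths are easiest to keep correct when they live in one place in application code. A minimal sketch, reusing the `username`/`display_name` names from the rename example above (a plain dict stands in for your ORM model; the flag name is a hypothetical stand-in for a real feature flag):

```python
READ_FROM_NEW = False  # flip per deploy phase, e.g. via a feature flag


def write_name(row: dict, value: str) -> None:
    """EXPAND/MIGRATE phases: write BOTH columns so either read path is valid."""
    row["username"] = value      # old column, dropped in the CONTRACT phase
    row["display_name"] = value  # new column


def read_name(row: dict) -> str:
    """Read path switches per phase; fall back while the backfill runs."""
    if READ_FROM_NEW:
        # Fallback covers rows the backfill has not reached yet
        return row["display_name"] or row["username"]
    return row["username"]


row = {"username": "", "display_name": ""}
write_name(row, "ada")
assert row == {"username": "ada", "display_name": "ada"}
assert read_name(row) == "ada"
```

Keeping both paths behind two small functions means each phase transition is a one-line deploy, not a scatter of edits across the codebase.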
Timeline Example
Day 1: Migration adds new_status column (nullable)
Day 1: Deploy app v2 — writes to both status and new_status
Day 2: Run backfill migration for existing rows
Day 3: Deploy app v3 — reads from new_status only
Day 7: Migration drops old status column
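Before the Day 3 read-switch, it is worth verifying that the backfill actually converged. A sketch of the consistency check using SQLite as a stand-in (the `orders` table and column names are illustrative; the timeline above names only the columns):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT, new_status TEXT)")
conn.executemany("INSERT INTO orders (status, new_status) VALUES (?, ?)",
                 [("open", "open"), ("done", "done"), ("done", None)])

# Rows where the new column disagrees with (or lags behind) the old one.
# IS NOT is null-safe; a plain <> would silently skip un-backfilled NULLs.
mismatches = conn.execute(
    "SELECT COUNT(*) FROM orders WHERE new_status IS NOT status").fetchone()[0]
print(mismatches)  # 1: the un-backfilled row; must reach 0 before switching reads
```

Run the check (and re-run the backfill if needed) until the mismatch count is stably zero under live dual-write traffic.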
Anti-Patterns
| Anti-Pattern | Why It Fails | Better Approach |
|---|---|---|
| Manual SQL in production | No audit trail, unrepeatable | Always use migration files |
| Editing deployed migrations | Causes drift between environments | Create new migration instead |
| NOT NULL without default | Fails on existing rows (or forces a rewrite) | Add nullable, backfill, then add constraint |
| Inline index on large table | Blocks writes during build | CREATE INDEX CONCURRENTLY |
| Schema + data in one migration | Hard to rollback, long transactions | Separate migrations |
| Dropping column before removing code | Application errors on missing column | Remove code first, drop column next deploy |