Learn-skills.dev api-baas-planetscale
Serverless MySQL platform with branching, deploy requests, and edge-compatible driver
git clone https://github.com/NeverSight/learn-skills.dev
T=$(mktemp -d) && git clone --depth=1 https://github.com/NeverSight/learn-skills.dev "$T" && mkdir -p ~/.claude/skills && cp -r "$T/data/skills-md/agents-inc/skills/api-baas-planetscale" ~/.claude/skills/neversight-learn-skills-dev-api-baas-planetscale && rm -rf "$T"
data/skills-md/agents-inc/skills/api-baas-planetscale/SKILL.md

PlanetScale Serverless MySQL Patterns
Quick Guide: Use `@planetscale/database` for edge/serverless MySQL access via HTTP (Fetch API). Use `Client` to create per-request connections, `conn.execute()` for parameterized queries, and `conn.transaction()` for atomic operations. Never run DDL directly on production -- use deploy requests with safe migrations enabled. PlanetScale runs on Vitess: foreign keys are supported but opt-in, stored procedures are not supported, and all schema changes go through online DDL. The built-in `cast` handles regular integers and floats automatically, but provide a custom `cast` for BigInt, Date, and boolean columns. Branch your database like git branches for dev/preview environments.
<critical_requirements>
CRITICAL: Before Using This Skill
All code must follow project conventions in CLAUDE.md (kebab-case, named exports, import ordering, `import type`, named constants)
(You MUST use `conn.execute(sql, params)` with parameterized queries -- never interpolate user input into SQL strings)
(You MUST use deploy requests for ALL schema changes on production branches with safe migrations enabled -- direct DDL is rejected)
(You MUST create a fresh `Client.connection()` per request in serverless environments -- do not reuse connections across invocations)
(You MUST handle the Vitess/MySQL compatibility differences: no stored procedures, no `RENAME COLUMN` via direct DDL, no `:=` operator, no `LOAD DATA INFILE`)
(You MUST provide a custom `cast` function for BigInt (INT64/UINT64), Date (DATETIME/TIMESTAMP), and boolean (TINYINT(1)) columns -- the default cast handles regular integers and floats but leaves these as strings)
</critical_requirements>
Auto-detection: PlanetScale, @planetscale/database, planetscale serverless driver, pscale, deploy request, safe migrations, Vitess, database branching, planetscale branch, planetscale boost, mysql serverless, pscale CLI, planetscale connection
When to use:
- Querying MySQL from edge/serverless functions via the PlanetScale serverless driver
- Managing schema changes through deploy requests and safe migrations
- Creating database branches for dev, preview, or CI environments
- Setting up connections with `@planetscale/database` (host/username/password or URL)
- Running transactions in serverless contexts
- Handling Vitess-specific SQL compatibility constraints
- Programmatic branch management via `pscale` CLI
Key patterns covered:
- `connect()` / `Client` connection setup with host, username, password
- `conn.execute()` with positional (`?`) and named (`:param`) parameters
- `conn.transaction()` for atomic multi-statement operations
- Custom `cast` functions for type-safe value conversion (BigInt, Date, boolean)
- Deploy request workflow (branch, change schema, create DR, review, deploy)
- Safe migrations and the no-direct-DDL enforcement model
- Database branching for dev/preview/CI environments
- Vitess SQL compatibility constraints and workarounds
- `pscale` CLI for branch and deploy request management
When NOT to use:
- Long-running server processes with persistent TCP MySQL connections (use the `mysql2` driver)
- Complex ORM-specific patterns (use your ORM's own skill)
- General MySQL query syntax (use a SQL/MySQL skill)
- PostgreSQL workloads (use Neon or another Postgres provider)
Detailed Resources:
- For decision frameworks, CLI reference, and quick lookup tables, see reference.md
Driver & Queries:
- examples/core.md -- Connection setup, parameterized queries, transactions, type casting
Branching & Schema Changes:
- examples/branching.md -- Dev branches, deploy requests, safe migrations, pscale CLI, CI/CD workflows
<philosophy>
Philosophy
PlanetScale is a serverless MySQL platform built on Vitess, the same technology that powers YouTube's database infrastructure. The
@planetscale/database driver uses HTTP (Fetch API) instead of TCP, making MySQL accessible from edge runtimes that lack TCP support.
Core principles:
- HTTP-based, stateless connections -- Every query is an HTTP request. There are no persistent connections to manage, no connection pools to configure. Create a connection, execute queries, done. PlanetScale handles connection pooling at the infrastructure level (Vitess VTTablet + Global Routing).
- Schema changes via deploy requests, never direct DDL -- Production branches with safe migrations reject direct `CREATE`, `ALTER`, `DROP` statements. All schema changes go through deploy requests: branch, modify schema on the branch, create a deploy request, review the diff, deploy with zero downtime via online DDL.
- Branches are cheap -- Database branches are isolated copies of your schema (and optionally data). Create them for feature development, PR previews, CI runs. Delete when done.
- Vitess under the hood -- PlanetScale runs Vitess, which adds horizontal scaling but introduces SQL compatibility differences. No stored procedures, no `RENAME COLUMN` in DDL, no `:=` operator. Foreign keys are supported but opt-in and come with performance trade-offs.
- Default cast handles common types, customize for the rest -- The driver's built-in `cast` function automatically converts INT8-32 and FLOAT32/64 to JavaScript numbers, and parses JSON. However, INT64/UINT64 (BigInt), DATETIME/TIMESTAMP (Date), DECIMAL, and TINYINT(1) (boolean) remain as strings -- provide a custom `cast` function for these.
When to use PlanetScale serverless driver:
- Edge/serverless functions that cannot open TCP connections
- Applications using PlanetScale's branching and deploy request workflow
- High-concurrency serverless apps benefiting from PlanetScale's infrastructure-level pooling
- Teams wanting git-like database workflows (branch, review, merge)
When NOT to use:
- Long-running server processes (use `mysql2` with TCP for persistent connections)
- Workloads requiring stored procedures, triggers, or events (Vitess does not support them)
- Applications requiring `LOAD DATA INFILE` (not supported)
<patterns>
Core Patterns
Pattern 1: Connection Setup
The driver provides two connection methods:
connect() for a single connection and Client for a connection factory. Use Client in serverless (fresh connection per request), connect() for single long-lived connection objects.
```ts
import { connect } from "@planetscale/database";

const conn = connect({
  host: process.env.DATABASE_HOST!,
  username: process.env.DATABASE_USERNAME!,
  password: process.env.DATABASE_PASSWORD!,
});

const { rows } = await conn.execute(
  "SELECT id, name FROM users WHERE active = ?",
  [true],
);
```
See examples/core.md for full connection patterns including
Client factory, URL-based config, and custom fetch for HTTP/2.
Pattern 2: Parameterized Queries
The driver supports positional (`?`) and named (`:param`) parameter styles. Both are auto-escaped, preventing SQL injection. Never mix styles in a single `execute()` call.
```ts
// Positional: array of values
await conn.execute("SELECT id, name FROM users WHERE id = ? AND active = ?", [
  userId,
  true,
]);

// Named: object of values
await conn.execute("SELECT id, name FROM users WHERE role = :role", {
  role: "admin",
});
```
See examples/core.md for complex named parameter queries and bad examples to avoid.
Pattern 3: Transactions
conn.transaction() executes multiple queries atomically with automatic rollback on error. Each tx.execute() is an HTTP round trip, but conditional logic runs client-side within the callback.
```ts
const result = await conn.transaction(async (tx) => {
  const debit = await tx.execute(
    "UPDATE accounts SET balance = balance - ? WHERE id = ? AND balance >= ?",
    [amount, fromId, amount],
  );
  if (debit.rowsAffected === 0) throw new Error("Insufficient funds"); // triggers rollback
  await tx.execute("UPDATE accounts SET balance = balance + ? WHERE id = ?", [
    amount,
    toId,
  ]);
  return debit;
});
```
See examples/core.md for full transaction examples with inventory checks and
FOR UPDATE locking.
Pattern 4: Custom Type Casting
The built-in
cast handles INT8-32 and FLOAT32/64 automatically. Provide a custom cast for INT64/UINT64 (BigInt), DATETIME/TIMESTAMP (Date), and TINYINT(1) (boolean) -- these remain as strings by default.
```ts
import { connect, cast } from "@planetscale/database";
import type { Field } from "@planetscale/database";

function customCast(field: Field, value: any): any {
  if (value == null) return null;
  if (field.type === "INT64" || field.type === "UINT64") return BigInt(value);
  if (field.type === "DATETIME" || field.type === "TIMESTAMP")
    return new Date(value + "Z");
  if (field.type === "INT8" && field.columnLength === 1) return value === "1";
  return cast(field, value);
}

const conn = connect({ url: process.env.DATABASE_URL, cast: customCast });
```
See examples/core.md for per-query cast overrides and type-specific cast variants.
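The conversion rules above can be exercised standalone. The sketch below stubs the driver's `Field` shape locally (no import -- the real type comes from `@planetscale/database`) and mirrors the same branching logic; the `replace(" ", "T")` ISO normalization is an assumption for portable UTC parsing, not the driver's behavior:

```ts
// Illustrative stand-in for the driver's Field shape -- not the real import.
type Field = { type: string; columnLength?: number };

// Mirrors customCast: BigInt for 64-bit ints, UTC Date for
// DATETIME/TIMESTAMP, boolean for TINYINT(1), pass-through otherwise.
function convert(field: Field, value: string | null): unknown {
  if (value == null) return null;
  if (field.type === "INT64" || field.type === "UINT64") return BigInt(value);
  if (field.type === "DATETIME" || field.type === "TIMESTAMP") {
    // "2024-01-15 10:30:00" -> "2024-01-15T10:30:00Z" (interpret as UTC)
    return new Date(value.replace(" ", "T") + "Z");
  }
  if (field.type === "INT8" && field.columnLength === 1) return value === "1";
  return value; // the real driver would apply its default cast here
}

convert({ type: "INT64" }, "9007199254740993"); // BigInt, beyond Number's safe range
convert({ type: "INT8", columnLength: 1 }, "1"); // true
```

Note the BigInt branch matters precisely because values like `9007199254740993` cannot be represented exactly as a JavaScript `number`.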
Pattern 5: Deploy Request Workflow
Schema changes on production branches with safe migrations must go through deploy requests. Direct DDL is rejected. The workflow is: branch, modify schema, create deploy request, review diff, deploy.
```sh
pscale branch create my-database add-user-roles                      # 1. Create dev branch
pscale shell my-database add-user-roles                              # 2. Make schema changes (DDL)
pscale deploy-request create my-database add-user-roles --into main  # 3. Create DR
pscale deploy-request diff my-database 1                             # 4. Review schema diff
pscale deploy-request deploy my-database 1                           # 5. Deploy (online DDL)
pscale deploy-request revert my-database 1                           # 6. Revert within 30 min if needed
```
See examples/branching.md for gated deployments, instant deployments, and CI/CD workflows.
Pattern 6: Database Branching
Branches are isolated copies of your database schema. Development branches allow direct DDL. Production branches require deploy requests when safe migrations is enabled.
```sh
pscale branch create my-database dev-alice                # Create dev branch
pscale shell my-database dev-alice                        # Interactive MySQL shell
pscale password create my-database dev-alice my-password  # Generate app credentials
pscale branch delete my-database dev-alice                # Clean up when done
```
See examples/branching.md for PR preview branches, safe column renames, FK setup, and branch cleanup scripts.
Pattern 7: Vitess SQL Compatibility
PlanetScale runs on Vitess, which introduces SQL differences from standard MySQL. Key constraints: no stored procedures/triggers/events, no
RENAME COLUMN (use three-step add/migrate/drop pattern), no := operator, no LOAD DATA INFILE, no CREATE DATABASE.
See reference.md for the full supported/unsupported SQL compatibility table.
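The three-step rename workaround mentioned above can be sketched as an ordered list of statements -- steps 1 and 3 are DDL shipped through deploy requests, step 2 is plain DML run by a script or the app. The `users`/`name`/`full_name` identifiers are hypothetical:

```ts
// Three-step column rename for Vitess (no RENAME COLUMN support).
const renameSteps: readonly string[] = [
  "ALTER TABLE users ADD COLUMN full_name VARCHAR(255)",       // 1. add new column (deploy request)
  "UPDATE users SET full_name = name WHERE full_name IS NULL", // 2. backfill data (DML)
  "ALTER TABLE users DROP COLUMN name",                        // 3. drop old column once readers migrated (deploy request)
];
```

Deploy each DDL step separately so readers can be migrated to the new column between steps 2 and 3.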
</patterns><decision_framework>
Decision Framework
Connection Method
```
What is the runtime environment?
+-- Edge/serverless (Cloudflare Workers, Vercel Edge, etc.)
|   +-- Use @planetscale/database (HTTP-based, no TCP needed)
+-- Traditional Node.js server (always-on)
|   +-- Need PlanetScale branching/deploy workflow?
|       +-- YES --> @planetscale/database works fine (HTTP)
|       +-- NO  --> mysql2 driver with TCP may be simpler
+-- ORM integration?
    +-- Check your ORM's docs for its PlanetScale/serverless adapter
```
connect() vs Client
```
How many connections per process?
+-- Single connection (scripts, simple handlers) --> connect()
+-- Multiple connections (serverless, per-request) --> Client + client.connection()
```
Schema Change Strategy
```
Is the target branch a production branch with safe migrations?
+-- YES --> Deploy requests ONLY (direct DDL is rejected)
|   +-- Simple change (add column, add index) --> Standard deploy request
|   +-- Needs controlled cutover timing --> Gated deployment (--disable-auto-apply)
|   +-- Instant-eligible change --> Deploy with --instant flag
+-- NO (development branch) --> Direct DDL is allowed
    +-- Experimenting --> pscale shell <db> <branch>
    +-- Scripted migration --> Connect to branch, run DDL
```
Foreign Keys
```
Do you need foreign key constraints?
+-- YES --> Enable in database settings (opt-in)
|   +-- Aware of limitations?
|   |   +-- Deploy requests don't validate existing referential integrity
|   |   +-- Reverts can create orphaned rows
|   |   +-- Performance impact in high-concurrency workloads
|   +-- Sharded database? --> FK only supported on unsharded databases
+-- NO --> Use application-level referential integrity
    +-- ORM-level relationship definitions
    +-- Application validation before INSERT/DELETE
```
</decision_framework>
<red_flags>
RED FLAGS
High Priority Issues:
- String interpolation in SQL -- ``conn.execute(`SELECT * FROM users WHERE id = '${id}'`)`` bypasses parameterization. Always use `?` or `:param` placeholders with the params argument.
- Direct DDL on production with safe migrations -- `ALTER TABLE` statements are rejected on production branches with safe migrations enabled. All schema changes must go through deploy requests.
- No custom cast for BigInt/Date columns -- The default cast handles regular integers and floats, but INT64/UINT64 remain as strings and DATETIME/TIMESTAMP are not converted to Date objects. Provide a custom `cast` for these types.
Medium Priority Issues:
- Reusing connections across serverless invocations -- Each serverless invocation gets a fresh execution context. Do not store connection state in global variables expecting it to persist.
- Using `RENAME COLUMN` in deploy requests -- Column renames can be destructive through Vitess online DDL. Use the three-step pattern: add new column, migrate data, drop old column.
- Missing revert window awareness -- Deploy requests can be reverted within 30 minutes. After that window closes, you must create a new deploy request to undo changes. Plan accordingly.
- Foreign keys enabled without understanding implications -- FK constraints on PlanetScale don't validate existing referential integrity during `ALTER TABLE ADD FOREIGN KEY`. Orphaned rows will silently remain.
Common Mistakes:
- Wrong package name -- The package is `@planetscale/database`, not `planetscale`, `mysql-planetscale`, or `@planetscale/serverless`.
- Expecting connection pooling in the driver -- `@planetscale/database` does not do client-side connection pooling. PlanetScale handles pooling at the infrastructure level (Vitess VTTablet + Global Routing). Do not wrap it in a pool library.
- Using positional and named params together -- A single `execute()` call uses either `?` with an array OR `:param` with an object. Never mix them.
- Expecting Node.js `mysql2` compatibility -- `@planetscale/database` has a different API from `mysql2`. There is no `pool.query()`, no `connection.query()`. The API is `conn.execute(sql, params)`.
- Running `CREATE DATABASE` or `DROP DATABASE` -- Database creation/deletion is managed via the PlanetScale dashboard, API, or `pscale` CLI, not SQL.
Gotchas & Edge Cases:
- INT64/UINT64 and dates remain as strings with the default cast -- `SELECT count(*) as total` returns `{ total: "42" }` (INT64 is an exception -- it stays as the string `"42"`). DATETIME returns `"2024-01-15 10:30:00"`. Regular INT32 and FLOAT types are auto-converted.
- `rowsAffected` is 0 for SELECT -- Only DML statements (INSERT, UPDATE, DELETE) populate `rowsAffected`. For SELECT, check `rows.length` or `size`.
- `insertId` is a string -- Even though MySQL auto-increment IDs are integers, `insertId` in the result is always a string. Cast if needed: `BigInt(result.insertId)`.
- Transactions over HTTP are not interactive -- Unlike traditional MySQL transactions held open over a persistent socket, PlanetScale's HTTP transactions run statement by statement over HTTP. You CAN use conditional logic within the `transaction()` callback (it runs client-side), but each `tx.execute()` is an HTTP round trip.
- `DATETIME` values lack timezone -- MySQL `DATETIME` is stored without timezone info. The driver returns it as a string like `"2024-01-15 10:30:00"`. Append `"Z"` when parsing as UTC, or handle timezone explicitly.
- 64KB query limit per execute -- Individual SQL statements have a size limit. For bulk inserts, batch into multiple `execute()` calls.
- SQL mode is session-only -- `SET sql_mode = '...'` only lasts for the current connection. On PlanetScale's HTTP driver, that means a single request. Global SQL mode changes are not allowed.
- PlanetScale Boost requires explicit opt-in -- Boost query caching is available on Scaler Pro plans and above. Enable per-query via `SET @@boost_cached_queries = true` in a session before the boosted query. Not all queries are eligible.
- Empty schemas are invalid -- Production branches require at least one table. You cannot have an empty database on a production branch.
- Instant deployments cannot be reverted -- Using `--instant` on a deploy request uses MySQL's `ALGORITHM=INSTANT` and skips the revert window entirely.
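For the per-statement size limit above, one hedged approach is to chunk rows client-side and issue one multi-row parameterized INSERT per chunk. Everything here is illustrative: the batch size, the `users` table, and the structurally-typed `conn` parameter standing in for a `@planetscale/database` connection:

```ts
// Split rows into fixed-size batches so each INSERT stays well under the limit.
function chunk<T>(rows: T[], size: number): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < rows.length; i += size) out.push(rows.slice(i, i + size));
  return out;
}

// Sketch: one parameterized multi-row INSERT per batch. 500 rows/batch is an
// arbitrary conservative choice; tune it to your row width.
async function bulkInsertUsers(
  conn: { execute(sql: string, params: unknown[]): Promise<unknown> },
  rows: Array<[string, string]>, // [name, email] pairs
): Promise<void> {
  for (const batch of chunk(rows, 500)) {
    const placeholders = batch.map(() => "(?, ?)").join(", ");
    await conn.execute(
      `INSERT INTO users (name, email) VALUES ${placeholders}`,
      batch.flat(),
    );
  }
}
```

The flattened params array lines up positionally with the generated `(?, ?)` placeholders, so the whole batch stays parameterized (no string interpolation).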
</red_flags>
<critical_reminders>
CRITICAL REMINDERS
All code must follow project conventions in CLAUDE.md (kebab-case, named exports, import ordering, `import type`, named constants)
(You MUST use `conn.execute(sql, params)` with parameterized queries -- never interpolate user input into SQL strings)
(You MUST use deploy requests for ALL schema changes on production branches with safe migrations enabled -- direct DDL is rejected)
(You MUST create a fresh `Client.connection()` per request in serverless environments -- do not reuse connections across invocations)
(You MUST handle the Vitess/MySQL compatibility differences: no stored procedures, no `RENAME COLUMN` via direct DDL, no `:=` operator, no `LOAD DATA INFILE`)
(You MUST provide a custom `cast` function for BigInt (INT64/UINT64), Date (DATETIME/TIMESTAMP), and boolean (TINYINT(1)) columns -- the default cast handles regular integers and floats but leaves these as strings)
Failure to follow these rules will cause SQL injection vulnerabilities, failed deploy requests, or silent type coercion bugs.
</critical_reminders>