Learn-skills.dev api-baas-turso
Edge-hosted SQLite database with libSQL driver and embedded replicas
```shell
T=$(mktemp -d) && git clone --depth=1 https://github.com/NeverSight/learn-skills.dev "$T" && mkdir -p ~/.claude/skills && cp -r "$T/data/skills-md/agents-inc/skills/api-baas-turso" ~/.claude/skills/neversight-learn-skills-dev-api-baas-turso && rm -rf "$T"
```
data/skills-md/agents-inc/skills/api-baas-turso/SKILL.md

Turso / libSQL Patterns
Quick Guide: Use @libsql/client for all Turso database access. Use execute() for single queries, batch() for atomic multi-statement operations (preferred over interactive transactions), and transaction() only when subsequent queries depend on prior results. For edge/serverless runtimes without filesystem access, import from @libsql/client/web. For zero-latency reads, configure embedded replicas with a local file URL + syncUrl. All writes are forwarded to the primary -- design for 15-50ms write latency. Turso is SQLite under the hood: single-writer model, no ALTER TABLE ... ADD CONSTRAINT, no stored procedures.
<critical_requirements>
CRITICAL: Before Using This Skill
- All code must follow project conventions in CLAUDE.md (kebab-case, named exports, import ordering, import type, named constants)
- You MUST use batch() with a transaction mode for multi-statement atomic operations -- it is faster and safer than interactive transaction() because it executes in a single round trip
- You MUST import from @libsql/client/web in edge/serverless runtimes that lack filesystem access (Cloudflare Workers, Vercel Edge Functions) -- the base @libsql/client import pulls in native bindings that fail in these environments
- You MUST specify a transaction mode ("write", "read", or "deferred") as the second argument to batch() and transaction() -- the default is "deferred", which silently fails to acquire a write lock for INSERT/UPDATE/DELETE
- You MUST call client.close() when the client is no longer needed in short-lived processes -- open clients hold connections and file handles
- You MUST NOT access the local embedded replica database file directly while the client is running -- concurrent access causes data corruption
</critical_requirements>
Auto-detection: Turso, libSQL, @libsql/client, createClient, turso.io, embedded replica, syncUrl, syncInterval, turso db, turso group, libsql, .turso.io, TURSO_DATABASE_URL, TURSO_AUTH_TOKEN
When to use:
- Querying a Turso-hosted SQLite database from any runtime (Node.js, edge, serverless)
- Setting up embedded replicas for zero-latency local reads synced from a remote primary
- Multi-tenant SaaS with database-per-tenant (Turso supports millions of databases)
- Serverless/edge functions needing a database without connection pooling complexity
- Running atomic multi-statement operations with batch() or interactive transaction()
- Managing database groups and multi-region placement via the Turso CLI
When NOT to use:
- Write-heavy workloads requiring strong multi-writer consistency (Turso is single-writer, writes forwarded to primary)
- Complex relational queries needing PostgreSQL features (CTEs with mutating subqueries, stored procedures, advanced constraints)
- Complex distributed transactions across multiple databases
- Large analytical datasets (SQLite row-size and concurrency limitations apply)
Detailed Resources:
- examples/core.md -- Client setup, execute, batch, transactions, import paths
- examples/embedded-replicas.md -- Local replicas, sync, offline mode, encryption
- reference.md -- Decision frameworks, type definitions, CLI commands, lookup tables
<philosophy>
Philosophy
Turso brings SQLite to the edge by hosting libSQL (a fork of SQLite) as a managed service with multi-region replication. The
@libsql/client driver provides a unified API that works identically whether you are connecting to a remote Turso database, a local SQLite file, an in-memory database, or an embedded replica that syncs from a remote primary.
Core principles:
- Batch over transaction -- batch() sends all statements in a single round trip and executes them in an implicit transaction. Interactive transaction() requires multiple round trips and holds a database lock (5-second timeout). Use batch() unless you need conditional logic between queries.
- Writes always hit the primary -- Even with embedded replicas, writes are forwarded to the remote primary database. Write latency is 15-50ms depending on distance to the primary region. Design for this: optimistic UI, background sync, avoid write-heavy hot paths.
- Embedded replicas for reads -- A local SQLite file synced from the remote primary. Reads are microsecond-level. Writes forward to remote. The local file updates after a successful write (read-your-writes semantics).
- Two import paths -- @libsql/client includes native SQLite bindings for Node.js and supports file: URLs. @libsql/client/web is pure JS/WASM for edge runtimes (Cloudflare Workers, Vercel Edge Functions) and cannot open local files.
- SQLite semantics -- Turso is SQLite. No ADD CONSTRAINT, no stored procedures, no LISTEN/NOTIFY, single-writer WAL mode. Know SQLite's limitations before choosing Turso.
<patterns>
Core Patterns
Pattern 1: Client Setup
Create a client with createClient(). The url determines the connection type: libsql:// for remote Turso, file: for local SQLite (Node.js only), :memory: for in-memory (tests). Always use environment variables for authToken -- never hardcode credentials.
See examples/core.md for full setup patterns including singleton modules and bad examples.
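Since no inline example is shown here, a minimal sketch of assembling that config (pure TypeScript; the `resolveDbConfig` helper and the NODE_ENV check are illustrative assumptions, not library API -- the result would be passed to createClient()):

```typescript
// Sketch: pick a libSQL client config from the environment. In real code the
// returned object is passed to createClient() from "@libsql/client".
type DbConfig = { url: string; authToken?: string };

const resolveDbConfig = (env: Record<string, string | undefined>): DbConfig => {
  if (env.NODE_ENV === "test") {
    return { url: ":memory:" }; // in-memory SQLite for tests
  }
  const url = env.TURSO_DATABASE_URL;
  const authToken = env.TURSO_AUTH_TOKEN;
  if (!url || !authToken) {
    throw new Error("TURSO_DATABASE_URL and TURSO_AUTH_TOKEN must be set");
  }
  return { url, authToken }; // libsql://... remote Turso database
};

// const client = createClient(resolveDbConfig(process.env));
```

Keeping this in one module makes a singleton client straightforward and keeps credentials out of source.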
Pattern 2: Executing Queries
execute() runs a single SQL statement. Always use parameterized queries with args -- never string interpolation.
```typescript
// Positional: args as array
await client.execute({
  sql: "SELECT * FROM users WHERE id = ?",
  args: [userId],
});

// Named: args as object (bare names match :name, @name, $name in SQL)
await client.execute({
  sql: "INSERT INTO users (name, email) VALUES (:name, :email)",
  args: { name, email },
});
```
Returns ResultSet with rows (Array<Row>), columns, rowsAffected, lastInsertRowid (bigint). See examples/core.md for typed result mapping and bad examples.
Pattern 3: Batch Operations
batch() executes multiple statements atomically in a single round trip. All succeed or all roll back. Always specify the transaction mode as the second argument.
```typescript
const results = await client.batch(
  [
    { sql: "INSERT INTO users (name) VALUES (?)", args: ["Alice"] },
    {
      sql: "INSERT INTO audit_log (action, entity_id) VALUES (?, last_insert_rowid())",
      args: ["user_created"],
    },
  ],
  "write", // Required for INSERT/UPDATE/DELETE
);
```
Use "write" for mutations, "read" for SELECT-only (allows parallel execution), "deferred" to start read-only and escalate. last_insert_rowid() works across statements in the same batch. See examples/core.md for multi-insert and read-only batch patterns, and reference.md for the transaction mode comparison table.
Pattern 4: Interactive Transactions
Use transaction() only when subsequent queries depend on results of earlier queries. It holds a database lock (5-second idle timeout) and requires multiple round trips. Always use try/catch/finally with close().
```typescript
const tx = await client.transaction("write");
try {
  const { rows } = await tx.execute({
    sql: "SELECT balance FROM accounts WHERE id = ?",
    args: [fromId],
  });
  // ... conditional logic based on results ...
  await tx.commit();
} catch (error) {
  await tx.rollback();
  throw error;
} finally {
  tx.close();
}
```
If all statements are known upfront with no conditional logic, use batch() instead. See examples/core.md for a complete purchase-with-stock-check example.
Pattern 5: Import Paths for Different Runtimes
```typescript
// Node.js, Bun, Deno (has filesystem access, native bindings)
import { createClient } from "@libsql/client";

// Edge/serverless runtimes WITHOUT filesystem (Cloudflare Workers, Vercel Edge)
import { createClient } from "@libsql/client/web";
```
The base @libsql/client bundles native SQLite bindings that fail in edge runtimes. @libsql/client/web is pure JS/WASM but cannot open local file: URLs. See examples/core.md for a full Cloudflare Worker example.
Pattern 6: Embedded Replicas
A local SQLite file that syncs from a remote Turso primary. Reads are local (microseconds), writes forward to remote (15-50ms).
```typescript
const client = createClient({
  url: "file:local-replica.db",
  syncUrl: process.env.TURSO_DATABASE_URL,
  authToken: process.env.TURSO_AUTH_TOKEN,
  syncInterval: 60, // Auto-sync every 60 seconds
});

await client.sync(); // Populate local replica before first read
```
Key points: file: URL for local replica, syncUrl for remote primary, call sync() on startup, reads are local, writes forward to remote with read-your-writes semantics.
When to use: VMs, VPS, containers, or any long-running process with filesystem access. Not available in serverless/edge runtimes without filesystem. See examples/embedded-replicas.md for full patterns including manual sync, offline mode, and encryption.
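As a hedged sketch of one manual-sync concern -- keeping stale-but-working local reads while retrying a failed sync -- where `sync` stands in for client.sync() and `syncWithRetry` is an illustrative helper, not a library API:

```typescript
// Sketch: retry sync() a bounded number of times. Between failed attempts the
// local replica keeps serving (stale) reads, so callers can degrade gracefully.
const syncWithRetry = async (
  sync: () => Promise<void>,
  attempts: number,
): Promise<boolean> => {
  for (let i = 0; i < attempts; i += 1) {
    try {
      await sync();
      return true; // replica is now up to date
    } catch {
      // transient network failure: stale local reads keep working
    }
  }
  return false; // caller decides whether stale data is acceptable
};
```

Usage: `const fresh = await syncWithRetry(() => client.sync(), 3);` on startup, before trusting read-your-writes semantics.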
Pattern 7: Database Groups and Multi-Region
Turso organizes databases into groups. Each group has a primary region and optional replica locations. All databases in a group inherit its locations.
All writes go to the primary region regardless of which replica handles the read. Adding more locations improves read latency globally but does not reduce write latency. Write latency is determined by distance to the primary region.
See reference.md for the full Turso CLI command reference (turso db create, turso group create, turso db shell, etc.).
</patterns>
<decision_framework>
Decision Framework
batch() vs transaction()
```
Are all SQL statements known upfront (no conditional logic between them)?
+-- YES --> Use batch() (single round trip, implicit transaction, preferred)
+-- NO  --> Do later statements depend on results of earlier statements?
            +-- YES --> Use transaction() (interactive, multiple round trips, holds lock)
            +-- NO  --> Use batch()
```
Import Path Selection
```
What runtime environment?
+-- Node.js / Bun / Deno / VM / container
|   +-- Need embedded replicas (local file)?
|       +-- YES --> @libsql/client with file: URL + syncUrl
|       +-- NO  --> @libsql/client with libsql:// URL
+-- Cloudflare Workers / Vercel Edge / browser / serverless without filesystem
    +-- @libsql/client/web (remote connections only, no file: URLs)
```
Embedded Replica vs Remote-Only
```
Is the process long-lived with filesystem access?
+-- YES --> Do you need sub-millisecond read latency?
|           +-- YES --> Embedded replica (file: URL + syncUrl)
|           +-- NO  --> Remote-only is simpler (libsql:// URL)
+-- NO (serverless, edge, short-lived)
    +-- Remote-only (@libsql/client/web, libsql:// URL)
```
Transaction Mode Selection
```
What operations will the batch/transaction perform?
+-- Only SELECT queries          --> "read" (allows parallel execution on replicas)
+-- Any INSERT / UPDATE / DELETE --> "write" (acquires exclusive lock)
+-- Unsure at call time          --> "deferred" (starts read, escalates if needed)
```
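This selection can be sketched in code. The `pickTransactionMode` helper below is hypothetical, not part of @libsql/client; it assumes any statement that is not clearly a SELECT gets "write":

```typescript
type TransactionMode = "read" | "write" | "deferred";

// Hypothetical helper: derive the mode for batch() from a predetermined
// list of SQL strings. Anything not clearly read-only gets "write".
const pickTransactionMode = (statements: string[]): TransactionMode => {
  const readOnly = statements.every((sql) => /^\s*SELECT\b/i.test(sql));
  return readOnly ? "read" : "write";
};
```

"deferred" is deliberately never returned here: when the statement list is known upfront, the mode is knowable too, and an explicit "read"/"write" avoids the silent write-lock failure described above.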
</decision_framework>
<red_flags>
RED FLAGS
High Priority Issues:
- Missing transaction mode in batch()/transaction() -- Omitting the second argument defaults to "deferred", which starts read-only and may silently fail to acquire a write lock for INSERT/UPDATE/DELETE. Always specify the mode explicitly.
- Using @libsql/client in edge runtimes -- The base package bundles native SQLite bindings that fail in Cloudflare Workers, Vercel Edge Functions, and similar environments. Use @libsql/client/web instead.
- String interpolation in SQL -- execute(`SELECT * FROM users WHERE id = '${id}'`) is a SQL injection vulnerability. Always use parameterized queries with args.
- Accessing embedded replica file directly -- Opening the local .db file with another SQLite client while the libSQL client is running causes data corruption. Only access through the client.
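To make the parameterized alternative concrete, a small sketch (the `userById` helper is illustrative; only the `{ sql, args }` statement shape comes from the driver API):

```typescript
// Good: the SQL text is constant; the value travels separately in args and is
// bound by the driver. This object shape is what client.execute() accepts.
type Statement = { sql: string; args: (string | number)[] };

const userById = (id: number): Statement => ({
  sql: "SELECT * FROM users WHERE id = ?",
  args: [id],
});

// Bad (never do this): `SELECT * FROM users WHERE id = '${id}'` splices the
// value into the SQL text, where a crafted input becomes executable SQL.
```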
Medium Priority Issues:
- Using transaction() when batch() suffices -- Interactive transactions hold a database lock with a 5-second timeout, require multiple round trips, and block other writers. Use batch() for predetermined statement sets.
- Not calling client.close() -- In short-lived processes (CLI scripts, test teardown), forgetting to close the client leaves connections and file handles open.
- Ignoring write latency with embedded replicas -- Reads are microseconds (local), but writes are 15-50ms (forwarded to remote primary). Design accordingly -- avoid tight write loops.
- Setting syncInterval too low -- Each sync pulls all changed frames (4KB each). Sub-second intervals on write-heavy databases generate significant network and I/O overhead.
Common Mistakes:
- Wrong package name -- The package is @libsql/client, not libsql-client, @turso/client, or turso-client.
- Named parameter prefix in args object -- Args use bare names: { name: "Alice" } matches :name, @name, and $name in SQL. Do not include the prefix: { ":name": "Alice" } will not match.
- Expecting lastInsertRowid to be a number -- It is bigint | undefined. If you need a number, explicitly convert with Number(result.lastInsertRowid), but be aware of precision loss for very large rowids.
- Using executeMultiple() for atomic operations -- executeMultiple() runs raw SQL text (semicolon-separated) with no parameterization and no implicit transaction. Use batch() for atomic parameterized operations.
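A sketch of the safe rowid conversion described above (the `rowidToNumber` helper is illustrative, not part of @libsql/client):

```typescript
// Convert a bigint rowid to number only when no precision can be lost.
const rowidToNumber = (rowid: bigint | undefined): number => {
  if (rowid === undefined) {
    throw new Error("statement did not produce a rowid");
  }
  if (rowid > BigInt(Number.MAX_SAFE_INTEGER)) {
    throw new Error(`rowid ${rowid} is too large for a safe number`);
  }
  return Number(rowid);
};

// Usage: const id = rowidToNumber(result.lastInsertRowid);
```

Failing loudly beats silently truncating an ID that later no longer matches any row.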
Gotchas & Edge Cases:
- batch() statements share a transaction but are NOT parallel -- They execute sequentially. last_insert_rowid() in a later statement reflects the previous statement's insert.
- transaction() has a 5-second idle timeout -- If no statement is executed within 5 seconds after the last one, the transaction is automatically rolled back. This matters on high-latency connections.
- Embedded replica sync() is not atomic with reads -- If you read immediately after sync(), another sync could start. The client handles this internally, but be aware that syncInterval syncs happen in the background.
- Frame-based sync overhead -- Embedded replica sync operates in 4KB frames. A 1-byte write still transfers a full 4KB frame. B-tree splits and WAL checkpoint operations can trigger unexpectedly large sync payloads.
- intMode affects how SQLite integers are returned -- Default is "number", which loses precision for integers > 2^53. Use "bigint" for large IDs or counters, or "string" for universal safety.
- SQLite type affinity -- Turso is SQLite. A TEXT column will happily store an integer without error. There is no strict type enforcement unless you use STRICT tables.
- No ALTER TABLE ... ADD CONSTRAINT -- SQLite (and Turso) do not support adding constraints after table creation. You must recreate the table.
- Single-writer model -- Only one write transaction can execute at a time across all clients. Concurrent write attempts queue behind the current writer. This is fundamental to SQLite/libSQL, not a Turso limitation.
</red_flags>
<critical_reminders>
CRITICAL REMINDERS
- All code must follow project conventions in CLAUDE.md (kebab-case, named exports, import ordering, import type, named constants)
- You MUST use batch() with a transaction mode for multi-statement atomic operations -- it is faster and safer than interactive transaction() because it executes in a single round trip
- You MUST import from @libsql/client/web in edge/serverless runtimes that lack filesystem access (Cloudflare Workers, Vercel Edge Functions) -- the base @libsql/client import pulls in native bindings that fail in these environments
- You MUST specify a transaction mode ("write", "read", or "deferred") as the second argument to batch() and transaction() -- the default is "deferred", which silently fails to acquire a write lock for INSERT/UPDATE/DELETE
- You MUST call client.close() when the client is no longer needed in short-lived processes -- open clients hold connections and file handles
- You MUST NOT access the local embedded replica database file directly while the client is running -- concurrent access causes data corruption
Failure to follow these rules will cause data corruption, runtime crashes in edge environments, or silent data inconsistency.
</critical_reminders>