# MoviePilot database-operation

Source repository: https://github.com/jxxghp/MoviePilot

Install the skill into `~/.claude/skills`:

```shell
T=$(mktemp -d) && git clone --depth=1 https://github.com/jxxghp/MoviePilot "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/database-operation" ~/.claude/skills/jxxghp-moviepilot-database-operation && rm -rf "$T"
```

`skills/database-operation/SKILL.md`

# Database Query
This skill guides you through executing SQL against the MoviePilot database. Both read and write operations are supported.
## Prerequisites

You need the following tools:

- `execute_command` — execute shell commands to run database queries
## Getting Database Connection Info

The `<system_info>` section of the system prompt already contains all the database connection details you need:

- Database type (数据库类型) — `sqlite` or `postgresql`
- Database (数据库) — full connection info:
  - For SQLite: the database file path, e.g. `SQLite (/config/db/moviepilot.db)`
  - For PostgreSQL: the connection string, e.g. `PostgreSQL (user:password@host:port/database)`

Do NOT run any detection commands. Extract the database type and connection details directly from `<system_info>`.
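The `user:password@host:port/database` string above can be split into its parts with a few string operations. A minimal sketch (the `parse_pg_conninfo` helper name and the sample values are my own, not part of MoviePilot; it assumes the password contains no `@` and the user name no `:`):

```python
def parse_pg_conninfo(conninfo: str) -> dict:
    """Split 'user:password@host:port/database' into its components."""
    # rpartition tolerates an '@' inside the password; a ':' in the
    # user name would still break this naive split.
    creds, _, location = conninfo.rpartition("@")
    user, _, password = creds.partition(":")
    hostport, _, database = location.partition("/")
    host, _, port = hostport.partition(":")
    return {"user": user, "password": password,
            "host": host, "port": port, "database": database}

parts = parse_pg_conninfo("moviepilot:secret@db:5432/moviepilot")
print(parts)
# → {'user': 'moviepilot', 'password': 'secret', 'host': 'db', 'port': '5432', 'database': 'moviepilot'}
```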
## Executing Queries

### SQLite Mode

Extract the database file path from `<system_info>` (the path inside the parentheses after `SQLite`).

Use `execute_command` to run queries:

```shell
sqlite3 -header -column <DB_PATH> "YOUR SQL QUERY HERE;"
```

For JSON-formatted output (easier to parse):

```shell
sqlite3 -json <DB_PATH> "YOUR SQL QUERY HERE;"
```

List all tables:

```shell
sqlite3 -header -column <DB_PATH> "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name;"
```

View a table's schema:

```shell
sqlite3 <DB_PATH> ".schema tablename"
```
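If the `sqlite3` CLI is unavailable, the same queries can be run through Python's standard-library `sqlite3` module. A sketch using an in-memory database and a one-column stand-in table for illustration (replace `:memory:` with the real `<DB_PATH>`):

```python
import sqlite3

# Replace ":memory:" with the real database path from <system_info>.
conn = sqlite3.connect(":memory:")
# Stand-in table so the example is self-contained.
conn.execute("CREATE TABLE downloadhistory (id INTEGER PRIMARY KEY, title TEXT)")

# Equivalent of: sqlite3 <DB_PATH> "SELECT name FROM sqlite_master ..."
tables = [row[0] for row in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name")]
print(tables)  # → ['downloadhistory']
conn.close()
```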
### PostgreSQL Mode

Extract the connection parameters from `<system_info>` (parse `user:password@host:port/database` from the parentheses after `PostgreSQL`).

Use `execute_command` to run queries via `psql`:

```shell
PGPASSWORD=<password> psql -h <host> -p <port> -U <user> -d <database> -c "YOUR SQL QUERY HERE;"
```

List all tables:

```shell
PGPASSWORD=<password> psql -h <host> -p <port> -U <user> -d <database> -c "SELECT tablename FROM pg_tables WHERE schemaname='public' ORDER BY tablename;"
```

View table schema:

```shell
PGPASSWORD=<password> psql -h <host> -p <port> -U <user> -d <database> -c "\d tablename"
```
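When driving `psql` from a script rather than an interactive shell, building the argv list directly (and passing the password via the environment) sidesteps shell-quoting problems with the query string. A sketch with placeholder connection values; `-t -A` (tuples-only, unaligned output) are standard `psql` flags that simplify parsing:

```python
import shlex

def build_psql_command(host, port, user, database, query):
    """Return the argv list for a psql invocation (no shell involved)."""
    return ["psql", "-h", host, "-p", str(port), "-U", user,
            "-d", database, "-t", "-A", "-c", query]

cmd = build_psql_command("db", 5432, "moviepilot", "moviepilot",
                         "SELECT COUNT(*) FROM downloadhistory;")
print(shlex.join(cmd))
# Run with e.g.:
#   subprocess.run(cmd, env={**os.environ, "PGPASSWORD": password},
#                  capture_output=True, text=True)
```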
## Interpret Results

After executing the query, analyze the results and present them in a clear, user-friendly format. Use aggregation, sorting, and filtering as needed.
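The `-json` output from `sqlite3` is a JSON array of row objects, which makes this kind of aggregation straightforward. A sketch on a hard-coded sample (the rows are illustrative, not real data):

```python
import json
from collections import Counter

# Sample of what `sqlite3 -json <DB_PATH> "SELECT type FROM downloadhistory;"`
# emits: a JSON array with one object per row.
raw = '[{"type": "电影"}, {"type": "电视剧"}, {"type": "电影"}]'
rows = json.loads(raw)

# Aggregate downloads by media type.
counts = Counter(row["type"] for row in rows)
print(counts.most_common())  # → [('电影', 2), ('电视剧', 1)]
```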
## Database Schema Reference

MoviePilot uses the following core tables:

### downloadhistory (download history)

Key columns: `id, path, type, title, year, tmdbid, imdbid, doubanid, seasons, episodes, downloader, download_hash, torrent_name, torrent_site, userid, username, date, media_category`

### downloadfiles (download files)

Key columns: `id, downloader, download_hash, fullpath, savepath, filepath, torrentname, state`

### transferhistory (transfer history)

Key columns: `id, src, dest, mode, type, category, title, year, tmdbid, seasons, episodes, download_hash, status (boolean: true=success, false=failed), errmsg, date`

### subscribe (subscriptions)

Key columns: `id, name, year, type, tmdbid, doubanid, season, total_episode, start_episode, lack_episode, state ('N'=new, 'R'=running, 'S'=paused), filter, include, exclude, quality, resolution, sites, best_version, date, username`

### subscribehistory (subscription history)

Key columns: `id, name, year, type, tmdbid, doubanid, season, total_episode, start_episode, date, username`

### user (users)

Key columns: `id, name, email, is_active, is_superuser, permissions, settings`

### site (sites)

Key columns: `id, name, domain, url, pri (priority), cookie, proxy, is_active, downloader, limit_interval, limit_count`

### siteuserdata (site user data)

Key columns: `id, domain, name, username, user_level, bonus, upload, download, ratio, seeding, leeching, seeding_size, updated_day`

### sitestatistic (site statistics)

Key columns: `id, domain, success, fail, seconds, lst_state, lst_mod_date`

### mediaserveritem (media library items)

Key columns: `id, server, library, item_id, item_type, title, original_title, year, tmdbid, imdbid, tvdbid, path`

### systemconfig (system configuration)

Key columns: `id, key, value (JSON)`

### userconfig (user configuration)

Key columns: `id, username, key, value (JSON)`

### plugindata (plugin data)

Key columns: `id, plugin_id, key, value (JSON)`

### message (messages)

Key columns: `id, channel, source, mtype, title, text, image, link, userid, reg_time`

### workflow (workflows)

Key columns: `id, name, description, timer, trigger_type, event_type, state ('W'=waiting, 'R'=running), run_count, actions, flows, last_time`

### passkey (passkeys)

Key columns: `id, user_id, credential_id, public_key, name, created_at, last_used_at, is_active`

### siteicon (site icons)

Key columns: `id, name, domain, url, base64`
## Common Query Examples

Count total downloads:

```sql
SELECT COUNT(*) AS total FROM downloadhistory;
```

Recent download history:

```sql
SELECT title, year, type, torrent_site, date FROM downloadhistory ORDER BY id DESC LIMIT 10;
```

Failed transfers:

```sql
SELECT id, title, src, errmsg, date FROM transferhistory WHERE status = 0 ORDER BY id DESC LIMIT 10;
```

Active subscriptions:

```sql
SELECT name, year, type, season, state, lack_episode FROM subscribe WHERE state = 'R';
```

Site upload/download statistics:

```sql
SELECT name, domain, upload, download, ratio, bonus, seeding, user_level FROM siteuserdata ORDER BY upload DESC;
```

Media library statistics:

```sql
SELECT server, library, COUNT(*) AS count FROM mediaserveritem GROUP BY server, library;
```

Site access success rate:

```sql
SELECT domain, success, fail, ROUND(success * 100.0 / (success + fail), 1) AS success_rate FROM sitestatistic WHERE success + fail > 0 ORDER BY success_rate DESC;
```

Plugin data inspection:

```sql
SELECT plugin_id, key FROM plugindata ORDER BY plugin_id, key;
```

Delete old download history (write operation):

```sql
DELETE FROM downloadhistory WHERE date < '2024-01-01';
```

Update subscription state (write operation):

```sql
UPDATE subscribe SET state = 'S' WHERE id = 123;
```

Clean up failed transfer records (write operation):

```sql
DELETE FROM transferhistory WHERE status = 0 AND date < '2024-06-01';
```
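The read-only examples above can be exercised end to end against a throwaway in-memory database. A sketch with a minimal `transferhistory` fixture (the real table has more columns; the rows here are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Trimmed-down stand-in for the real transferhistory table.
conn.execute("""CREATE TABLE transferhistory (
    id INTEGER PRIMARY KEY, title TEXT, src TEXT,
    status BOOLEAN, errmsg TEXT, date TEXT)""")
conn.executemany(
    "INSERT INTO transferhistory (title, src, status, errmsg, date) "
    "VALUES (?, ?, ?, ?, ?)",
    [("Movie A", "/downloads/a", 1, "", "2024-05-01"),
     ("Movie B", "/downloads/b", 0, "no space left", "2024-05-02")])

# The "failed transfers" example query from above.
failed = conn.execute(
    "SELECT id, title, src, errmsg, date FROM transferhistory "
    "WHERE status = 0 ORDER BY id DESC LIMIT 10").fetchall()
print(failed)  # → [(2, 'Movie B', '/downloads/b', 'no space left', '2024-05-02')]
conn.close()
```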
## Safety Rules

- **Confirm before writing** — For any `INSERT`, `UPDATE`, `DELETE`, `DROP`, `ALTER`, or `TRUNCATE` operation, always describe what the statement will do and ask the user to confirm before executing. For `SELECT` queries, execute directly without confirmation.
- **Back up before destructive operations** — Before executing `DELETE`, `DROP`, or `TRUNCATE` on important tables, suggest the user back up the data first (e.g., export with `.dump` for SQLite or `pg_dump` for PostgreSQL).
- **Use WHERE clauses** — Never run `UPDATE` or `DELETE` without a `WHERE` clause unless the user explicitly intends to affect all rows.
- **Use LIMIT for queries** — When querying large tables with `SELECT`, add `LIMIT` to prevent excessive output.
- **Sensitive data** — The `site` table contains `cookie`, `apikey`, and `token` fields. NEVER display these values to the user. Exclude them from SELECT or replace with `'***'`.
- **Password data** — The `user` table contains `hashed_password` and `otp_secret` fields. NEVER display these values.
- **Output limits** — If the query results are very long, summarize or truncate them.
## SQL Dialect Differences

When writing queries, be aware of differences between SQLite and PostgreSQL:

| Feature | SQLite | PostgreSQL |
|---|---|---|
| Boolean values | `0` / `1` | `FALSE` / `TRUE` |
| String concat | `\|\|` | `\|\|` or `CONCAT()` |
| Current time | `datetime('now')` | `NOW()` |
| LIMIT syntax | `LIMIT n OFFSET m` (also `LIMIT m, n`) | `LIMIT n OFFSET m` |
| JSON access | `json_extract(value, '$.key')` | `value::json ->> 'key'` |
| Case sensitivity | Case-insensitive by default | Case-sensitive |
| LIKE | Case-insensitive | Use `ILIKE` for case-insensitive |
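A query builder can absorb these differences by branching on the database type taken from `<system_info>`. A hypothetical sketch for the current-time difference (the helper name is my own; it assumes `downloadhistory.date` is stored in a format both engines can compare chronologically):

```python
def recent_downloads_query(db_type: str, days: int = 7) -> str:
    """Build a 'downloads in the last N days' query for either dialect."""
    if db_type == "sqlite":
        cutoff = f"datetime('now', '-{days} days')"
    elif db_type == "postgresql":
        cutoff = f"NOW() - INTERVAL '{days} days'"
    else:
        raise ValueError(f"unknown database type: {db_type}")
    return f"SELECT title, date FROM downloadhistory WHERE date >= {cutoff};"

print(recent_downloads_query("sqlite"))
print(recent_downloads_query("postgresql"))
```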
## Troubleshooting

- **sqlite3 not found**: The `sqlite3` CLI should be pre-installed in the MoviePilot Docker container. If missing, you can try Python: `python3 -c "import sqlite3; ..."`
- **psql not found**: For PostgreSQL, if `psql` is not available, use Python: `python3 -c "import psycopg2; ..."`
- **Permission denied**: Database queries require admin privileges
- **Table not found**: Use the "list all tables" query first to verify table names