# xnatctl

Use when running xnatctl commands, writing scripts that call xnatctl, troubleshooting XNAT CLI operations, or helping users with XNAT neuroimaging server administration via the command line. Triggers on mentions of xnatctl, XNAT CLI, session download/upload, prearchive, scan management, or XNAT project administration.

## Install

Clone the upstream repo:

```bash
git clone https://github.com/rickyltwong/xnatctl
```

Claude Code: install into `~/.claude/skills/`:

```bash
git clone --depth=1 https://github.com/rickyltwong/xnatctl ~/.claude/skills/rickyltwong-xnatctl-xnatctl
```
# xnatctl CLI Reference

Modern CLI for XNAT neuroimaging server administration. Resource-centric commands with consistent output formats, parallel operations, and profile-based configuration.
## Command Hierarchy

```
xnatctl
├── config       init | show | use-context | current-context | add-profile | remove-profile
├── auth         login | logout | status | test
├── project      list | show | create
├── subject      list | show | rename | delete
├── session      list | show | download | upload | upload-exam
├── scan         list | show | delete | download
├── resource     list | show | upload | download
├── prearchive   list | archive | delete | rebuild | move
├── pipeline     list | run | status | jobs | cancel
├── admin        refresh-catalogs | user add | audit
├── dicom        validate | inspect | list-tags | anonymize   (requires xnatctl[dicom])
├── api          get | post | put | delete                    (raw REST escape hatch)
├── whoami
├── health ping
└── completion   [bash|zsh|fish]
```
## Global Options (all commands)

| Flag | Short | Description |
|---|---|---|
| `--profile` | `-p` | Named profile from config |
| `--output` | `-o` | `table` or `json` (default: table) |
| | | IDs-only output |
| | | Debug logging |
## Parent-Resource Options (session & scan commands)

| Flag | Short | Purpose | Key Rule |
|---|---|---|---|
| `--project` | `-P` | Project ID | Enables label lookup for `-E` |
| | `-S` | Subject ID/label | Used in `list` (as a filter) and in `upload` |
| | `-E` | Experiment ID or label | Labels require `-P` (explicit or via profile `default_project`) |

**Critical:** `-E LABEL` without `-P` fails. `-E XNAT_E00001` (accession ID) works without `-P`.
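The rule above can be captured in a small helper — a sketch that assumes accession IDs start with `XNAT_E` (the stock XNAT site-ID prefix; it is configurable per server, so adjust for yours):

```python
def needs_project_flag(experiment: str) -> bool:
    """Return True when -E must be accompanied by -P.

    Accession IDs (e.g. XNAT_E00001) resolve globally; anything else is
    treated as a label, which can only be resolved within a project.
    NOTE: the XNAT_E prefix is the default site prefix and is
    configurable, so this check is illustrative, not authoritative.
    """
    return not experiment.startswith("XNAT_E")

# Labels need -P; accession IDs do not.
assert needs_project_flag("MR_Session_01") is True
assert needs_project_flag("XNAT_E00001") is False
```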
## Quick Reference: Common Commands

### Setup & Auth

```bash
# Initialize config with profile
xnatctl config init --url https://xnat.example.org --profile myserver

# Add another profile
xnatctl config add-profile prod --url https://prod.xnat.org --project DEFAULT_PROJ

# Switch profiles
xnatctl config use-context prod

# Login (prompts for credentials if not in config/env)
xnatctl auth login -p myserver

# Test connection
xnatctl auth test
```
### Projects & Subjects

```bash
# List projects
xnatctl project list

# Show project details
xnatctl project show MYPROJ

# List subjects with filter (NOTE: colon syntax, not equals)
xnatctl subject list -P MYPROJ --filter "label:CTRL_*"

# Delete subject (with safety)
xnatctl subject delete SUB001 -P MYPROJ --dry-run
xnatctl subject delete SUB001 -P MYPROJ --yes
```
### Sessions

```bash
# List sessions in project
xnatctl session list -P MYPROJ

# Show session (by label - needs -P)
xnatctl session show -P MYPROJ -E MR_Session_01

# Show session (by accession ID - no -P needed)
xnatctl session show -E XNAT_E00001

# Download session - single ZIP (default)
xnatctl session download -P MYPROJ -E MR_Session_01 --out ./data

# Download session - parallel per-scan (workers > 1)
xnatctl session download -P MYPROJ -E MR_Session_01 --out ./data -w 8
```
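When scripting bulk downloads, it helps to build the command line programmatically. A minimal sketch using only the flags documented above (`shlex.join` keeps the quoting safe if labels ever contain spaces):

```python
import shlex

def download_cmd(project: str, session: str, out_dir: str, workers: int = 1) -> str:
    """Build a `session download` command line (sketch; flags as documented above)."""
    cmd = ["xnatctl", "session", "download", "-P", project, "-E", session,
           "--out", out_dir]
    if workers > 1:  # workers > 1 switches to parallel per-scan download
        cmd += ["-w", str(workers)]
    return shlex.join(cmd)

print(download_cmd("MYPROJ", "MR_Session_01", "./data", workers=8))
# xnatctl session download -P MYPROJ -E MR_Session_01 --out ./data -w 8
```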
### Uploads

```bash
# Upload DICOM directory (parallel batches)
xnatctl session upload ./dicoms -P NEURO -S SUB001 -E SESS001 --workers 4

# Upload DICOM archive file
xnatctl session upload ./archive.tar -P NEURO -S SUB001 -E SESS001

# Gradual per-file upload (parallel)
xnatctl session upload ./dicoms -P NEURO -S SUB001 -E SESS001 --gradual --workers 16

# Upload exam root (DICOM + resources)
# Directory structure: top-level dirs become resources, DICOMs found recursively
xnatctl session upload-exam ./exam_root -P NEURO -S SUB001 -E SESS001 -w 4

# Attach resources only (skip DICOM upload)
xnatctl session upload-exam ./exam_root -P NEURO -S SUB001 -E SESS001 --attach-only
```
**upload vs upload-exam:** `upload` handles DICOM files only. `upload-exam` handles a mixed directory (DICOMs plus non-DICOM resource files such as PDFs and spreadsheets). In `upload-exam`, top-level directories become session-level resources named after the directory.
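An illustrative `exam_root` layout (directory and file names are hypothetical; how the tool treats a top-level directory containing only DICOMs is not spelled out here):

```
exam_root/
├── DICOM/              # DICOMs are discovered recursively and uploaded as scan data
│   └── series1/0001.dcm
├── REPORTS/            # top-level dir → session resource "REPORTS"
│   └── radiology.pdf
└── MEASUREMENTS/       # top-level dir → session resource "MEASUREMENTS"
    └── volumes.xlsx
```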
### Scans

```bash
# List scans
xnatctl scan list -E XNAT_E00001

# Delete specific scans (comma-separated with -s flag)
xnatctl scan delete -E XNAT_E00042 -P BRAIN -s 1,3,5 --dry-run
xnatctl scan delete -E XNAT_E00042 -P BRAIN -s 1,3,5 --yes

# Delete ALL scans
xnatctl scan delete -E XNAT_E00042 -s "*" --yes

# Download scans as ZIP
xnatctl scan download -E XNAT_E00001 -s 1,2,3 --out ./scans
```
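Since `-s` takes a comma-separated list (or `"*"` for all scans) rather than positional args, a tiny formatter in a wrapper script avoids mistakes — a sketch:

```python
def scan_id_arg(scan_ids) -> str:
    """Format scan IDs for the -s flag: comma-separated list, or "*" for all.

    Accepts an iterable of IDs or the literal string "*".
    """
    if scan_ids == "*":
        return "*"
    return ",".join(str(s) for s in scan_ids)

print(scan_id_arg([1, 3, 5]))  # 1,3,5
print(scan_id_arg("*"))        # *
```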
### Resources

```bash
# List resources on session
xnatctl resource list XNAT_E00001

# List resources on scan
xnatctl resource list XNAT_E00001 --scan 1

# Upload file/directory as resource
xnatctl resource upload XNAT_E00001 MY_RESOURCE ./data/

# Download resource
xnatctl resource download XNAT_E00001 MY_RESOURCE --file ./output.zip
```
### Prearchive

Note: prearchive commands use POSITIONAL args: `PROJECT TIMESTAMP SESSION_NAME`.

```bash
# List prearchive sessions
xnatctl prearchive list
xnatctl prearchive list --project MYPROJ

# Archive (move to main archive)
xnatctl prearchive archive MYPROJ 20240115_143022 SessionFolder

# Delete from prearchive
xnatctl prearchive delete MYPROJ 20240115_143022 SessionFolder --yes

# Rebuild (refresh metadata)
xnatctl prearchive rebuild MYPROJ 20240115_143022 SessionFolder

# Move to different project
xnatctl prearchive move MYPROJ 20240115_143022 SessionFolder TARGET_PROJ
```
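Because the positional form is easy to get wrong when coming from the flag-based session/scan commands, a wrapper can enforce it — a sketch using only the commands shown above:

```python
import shlex

def prearchive_cmd(action: str, project: str, timestamp: str,
                   session: str, *extra: str) -> str:
    """Build a prearchive command: PROJECT TIMESTAMP SESSION_NAME are
    positional — no -P/-E flags here (sketch)."""
    return shlex.join(["xnatctl", "prearchive", action,
                       project, timestamp, session, *extra])

print(prearchive_cmd("archive", "MYPROJ", "20240115_143022", "SessionFolder"))
# xnatctl prearchive archive MYPROJ 20240115_143022 SessionFolder
```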
### Pipelines

```bash
# List pipelines
xnatctl pipeline list --project MYPROJ

# Run pipeline and wait for completion
xnatctl pipeline run dcm2niix -e XNAT_E00001 -P key1=val1 -P key2=val2 --wait

# Check job status (with watch mode)
xnatctl pipeline status JOB_ID --watch

# Cancel job
xnatctl pipeline cancel JOB_ID --yes
```
### Admin

```bash
# Refresh catalogs with parallel workers
xnatctl admin refresh-catalogs MYPROJ -O checksum -O populateStats --parallel --workers 8

# Add user to project groups
xnatctl admin user add jsmith Owners --projects MYPROJ

# View audit log
xnatctl admin audit -P MYPROJ --since 7d --limit 50
```
### Raw API (escape hatch)

```bash
# GET with query parameters (use -P key=value, NOT query strings in path)
xnatctl api get /data/projects/MYPROJ/subjects -P format=json

# POST with data
xnatctl api post /data/projects -d '{"ID":"NEW_PROJ","name":"New Project"}'

# DELETE with confirmation skip
xnatctl api delete /data/experiments/XNAT_E00001 --yes
```
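The `-P key=value` convention (instead of `?key=value` in the path) is easy to honor mechanically when generating raw API calls — a sketch:

```python
import shlex

def api_get(path: str, **params: str) -> str:
    """Build a raw GET: each query parameter becomes a repeated
    -P key=value flag; nothing is appended to the path (sketch)."""
    cmd = ["xnatctl", "api", "get", path]
    for key, value in params.items():
        cmd += ["-P", f"{key}={value}"]
    return shlex.join(cmd)

print(api_get("/data/projects/MYPROJ/subjects", format="json"))
# xnatctl api get /data/projects/MYPROJ/subjects -P format=json
```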
### DICOM Tools (requires `xnatctl[dicom]`)

```bash
# Validate DICOM files
xnatctl dicom validate ./dicoms -r

# Inspect headers
xnatctl dicom inspect ./scan.dcm -t PatientID -t Modality

# Anonymize
xnatctl dicom anonymize ./input ./output --patient-id ANON001 --remove-private -r
```
## Configuration

Config file: `~/.config/xnatctl/config.yaml`

```yaml
default_profile: default
output_format: table
profiles:
  default:
    url: https://xnat.example.org
    verify_ssl: true
    timeout: 21600  # 6 hours (default for large transfers)
    default_project: MYPROJ
    username: admin   # optional
    password: secret  # optional
```
Environment variables (priority: CLI args > env vars > profile > prompt):

| Variable | Purpose |
|---|---|
| | Session token (highest auth priority, skips the credential prompt) |
| | Server URL (auto-creates a profile) |
| | Username |
| | Password |
| | Active profile |
| | Output format (`table` / `json`) |
| | Timeout in seconds |
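The documented precedence (CLI args > env vars > profile > prompt) is a first-hit-wins chain, which can be sketched as:

```python
def resolve(cli=None, env=None, profile=None, prompt=lambda: None):
    """Resolve a config value by the documented precedence:
    CLI args > environment variables > profile values > interactive prompt.
    The prompt callable is only invoked if everything else is unset."""
    for value in (cli, env, profile):
        if value is not None:
            return value
    return prompt()

assert resolve(cli="https://a", env="https://b") == "https://a"   # CLI wins
assert resolve(env="https://b", profile="https://c") == "https://b"
assert resolve(prompt=lambda: "asked") == "asked"                 # last resort
```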
## Gotchas

- `-o` is output FORMAT (`table`/`json`), NOT an output directory. Use `--out` for the download destination.
- Filters use a colon: `--filter "label:CTRL_*"`, not `label=CTRL_*`.
- Scan IDs use the `-s` flag: `-s 1,3,5` (comma-separated) or `-s "*"` (all). NOT positional args.
- Prearchive uses positional args: `PROJECT TIMESTAMP SESSION_NAME`. NOT `-P`/`-E` flags.
- API params use `-P key=value`, NOT query strings appended to the path.
- The workers flag varies: `session download` uses `-w`, `upload` uses `--workers`. Both control parallelism.
- The `-P` flag is overloaded: in session/scan commands, `-P` means `--project`; in `api` and `pipeline` commands, `-P` means a parameter (`key=value`). Context matters.
- Default timeout is 6 hours (21600 s) for large DICOM transfers.
- `upload-exam` waits for archiving: by default it waits for XNAT to finish archiving before attaching resources. Control with `--wait-for-archive`/`--no-wait-for-archive`.
- `default_project` fallback: if the profile sets `default_project`, `-P` can be omitted and session/scan commands auto-resolve it.
## Safety Decorators

Destructive commands include `--yes`/`-y` (skip confirmation) and `--dry-run` (preview only). Always use `--dry-run` first for delete/rename operations.

Parallel commands include `--parallel`/`--no-parallel` and `--workers N`.
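A wrapper script can bake in the dry-run-first discipline by always producing the preview command alongside the real one — a sketch:

```python
import shlex

def destructive(cmd: list) -> tuple:
    """Return the (preview, execute) pair for a destructive command:
    run the --dry-run form first, inspect the output, then repeat
    with --yes (sketch; flags as documented above)."""
    return shlex.join(cmd + ["--dry-run"]), shlex.join(cmd + ["--yes"])

preview, execute = destructive(
    ["xnatctl", "subject", "delete", "SUB001", "-P", "MYPROJ"])
print(preview)   # xnatctl subject delete SUB001 -P MYPROJ --dry-run
print(execute)   # xnatctl subject delete SUB001 -P MYPROJ --yes
```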