# claude-ai-music-skills test
Runs automated tests to validate plugin integrity across 14 categories. Use before creating PRs, after making changes to skills or templates, or to verify plugin health.
Clone the repository:

```bash
git clone https://github.com/bitwize-music-studio/claude-ai-music-skills
```

Or install only the test skill:

```bash
T=$(mktemp -d) && git clone --depth=1 https://github.com/bitwize-music-studio/claude-ai-music-skills "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/test" ~/.claude/skills/bitwize-music-studio-claude-ai-music-skills-test && rm -rf "$T"
```
Source: `skills/test/SKILL.md`

## Your Task
Input: `$ARGUMENTS`
Run automated tests to validate plugin integrity. Execute each test methodically and report results clearly.
Default: Run all tests if no argument provided.
## Plugin Test Suite
You are the plugin's automated test runner. Execute each test, track pass/fail, and report actionable results.
### Quick Automated Tests (`/test quick`)

For fast automated validation, run the pytest suite:

```bash
~/.bitwize-music/venv/bin/python3 -m pytest ${CLAUDE_PLUGIN_ROOT}/tests/ -v
```
This covers:
- `tests/plugin/` - plugin tests: frontmatter, templates, references, links, terminology, consistency, config, state, genres, integration
- `tests/unit/` - unit tests: state parsers/indexer, shared utilities, mastering functions
Run specific categories:
```bash
~/.bitwize-music/venv/bin/python3 -m pytest ${CLAUDE_PLUGIN_ROOT}/tests/plugin/test_skills.py -v  # Skills only
~/.bitwize-music/venv/bin/python3 -m pytest ${CLAUDE_PLUGIN_ROOT}/tests/plugin/ -v                # All plugin tests
~/.bitwize-music/venv/bin/python3 -m pytest ${CLAUDE_PLUGIN_ROOT}/tests/unit/ -v                  # All unit tests
~/.bitwize-music/venv/bin/python3 -m pytest ${CLAUDE_PLUGIN_ROOT}/tests/ -m "not slow" -v         # Skip slow tests
```
Pytest catches common issues fast. For deep behavioral tests, use the full test suite below.
### Output Format
```
════════════════════════════════════════
CATEGORY: Test Category Name
════════════════════════════════════════
[PASS] Test name
[FAIL] Test name
   → Problem: what's wrong
   → File: path/to/file:line
   → Fix: specific fix instruction
────────────────────────────────────────
Category: X passed, Y failed
────────────────────────────────────────
```
At the end:
```
════════════════════════════════════════
FINAL RESULTS
════════════════════════════════════════
config:    X passed, Y failed
skills:    X passed, Y failed
templates: X passed, Y failed
...
────────────────────────────────────────
TOTAL: X passed, Y failed, Z skipped
════════════════════════════════════════
```
## TEST CATEGORIES

All test definitions are in `test-definitions.md`.
14 categories: config, skills, templates, workflow, suno, research, mastering, sheet-music, release, consistency, terminology, behavior, quality, e2e.
Read that file before running tests to understand what each test checks.
## RUNNING TESTS

### Commands
| Command | Description |
|---|---|
| `/test` or `/test all` | Run all tests |
| `/test quick` | Run Python test runner (fast automated checks) |
| `/test config` | Configuration system tests |
| `/test skills` | Skill definitions and docs |
| `/test templates` | Template file tests |
| `/test workflow` | Album workflow documentation |
| `/test suno` | Suno integration tests |
| `/test research` | Research workflow tests |
| `/test mastering` | Mastering workflow tests |
| `/test sheet-music` | Sheet music generation tests |
| `/test release` | Release workflow tests |
| `/test consistency` | Cross-reference checks |
| `/test terminology` | Consistent language tests |
| `/test behavior` | Scenario-based tests |
| `/test quality` | Code quality checks |
| `/test e2e` | End-to-end integration test |
### Quick Tests via Pytest
For rapid validation during development, use pytest directly:
```bash
# Run all tests
~/.bitwize-music/venv/bin/python3 -m pytest ${CLAUDE_PLUGIN_ROOT}/tests/ -v

# Run specific test modules
~/.bitwize-music/venv/bin/python3 -m pytest ${CLAUDE_PLUGIN_ROOT}/tests/plugin/test_skills.py ${CLAUDE_PLUGIN_ROOT}/tests/plugin/test_templates.py -v

# Verbose with short tracebacks
~/.bitwize-music/venv/bin/python3 -m pytest ${CLAUDE_PLUGIN_ROOT}/tests/ -v --tb=short

# Quiet mode (for CI/logs)
~/.bitwize-music/venv/bin/python3 -m pytest ${CLAUDE_PLUGIN_ROOT}/tests/ -q --tb=line
```
Test modules in `tests/plugin/`:

- `test_skills.py` - Frontmatter, required fields, model validation
- `test_templates.py` - Template existence and structure
- `test_references.py` - Reference doc existence
- `test_links.py` - Internal markdown links
- `test_terminology.py` - Deprecated terms check
- `test_consistency.py` - Version sync, skill counts
- `test_config.py` - Config file validation
- `test_state.py` - State cache tool validation
- `test_genres.py` - Genre directory cross-reference
- `test_integration.py` - Cross-skill prerequisite chains
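For orientation, a frontmatter check in the spirit of `test_skills.py` might look like the sketch below. This is a hypothetical illustration: the real module's required fields, parsing logic, and file discovery may differ.

```python
# Hypothetical sketch of a skill frontmatter test; the real
# test_skills.py may use different required fields and parsing.
import re

REQUIRED_FIELDS = {"name", "description"}  # assumed required keys

def parse_frontmatter(text: str) -> dict:
    """Parse the key: value block between leading --- fences."""
    match = re.match(r"^---\n(.*?)\n---", text, re.DOTALL)
    if not match:
        return {}
    fields = {}
    for line in match.group(1).splitlines():
        key, sep, value = line.partition(":")
        if sep:  # only lines that actually contain a colon
            fields[key.strip()] = value.strip()
    return fields

def test_skill_frontmatter():
    text = "---\nname: test\ndescription: Validates plugin integrity\n---\n# Body\n"
    fields = parse_frontmatter(text)
    missing = REQUIRED_FIELDS - fields.keys()
    assert not missing, f"missing frontmatter fields: {missing}"
```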
### Adding New Tests

When bugs are found:

1. Identify which category the test belongs to
2. Add a test that would have caught the bug
3. Run `/test [category]` to verify the new test fails
4. Fix the bug
5. Run `/test [category]` to verify the test passes
6. Commit both the fix and the new test

Rule: Every bug fix should add a regression test.
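To make the rule concrete, a regression test might look like the sketch below. Everything here is illustrative: the deprecated-term mapping and function names are invented for the example, not taken from the plugin's actual terminology check.

```python
# Hypothetical regression test for a bug where a deprecated term
# slipped past the terminology check; names are illustrative only.
DEPRECATED_TERMS = {"song file": "track file"}  # assumed mapping

def find_deprecated(text: str) -> list:
    """Return (line_number, bad_term, replacement) for each hit."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for bad, good in DEPRECATED_TERMS.items():
            if bad in line.lower():
                hits.append((lineno, bad, good))
    return hits

def test_deprecated_term_is_flagged():
    # The hypothetical bug: "song file" was not flagged in headings.
    text = "# Song File Layout\nUse the track file template.\n"
    assert find_deprecated(text) == [(1, "song file", "track file")]
```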
## EXECUTION TIPS

- Use Grep with `output_mode: content` and `-n` for line numbers
- Use Read to check file contents
- Use Bash sparingly (YAML/JSON validation)
- Report exact file:line for failures
- Provide specific, actionable fix instructions
- Group related tests for readability
- Skip tests gracefully if prerequisites missing