Awesome-omni-skill pytest-runner
Execute Python tests with pytest, supporting fixtures, markers, coverage, and parallel execution. Use for Python test automation.
install
source · Clone the upstream repo

```bash
git clone https://github.com/diegosouzapw/awesome-omni-skill
```

Claude Code · Install into ~/.claude/skills/

```bash
T=$(mktemp -d) && \
  git clone --depth=1 https://github.com/diegosouzapw/awesome-omni-skill "$T" && \
  mkdir -p ~/.claude/skills && \
  cp -r "$T/skills/cli-automation/pytest-runner" ~/.claude/skills/diegosouzapw-awesome-omni-skill-pytest-runner && \
  rm -rf "$T"
```
manifest:
skills/cli-automation/pytest-runner/SKILL.md

safety · automated scan (low risk)
This is a pattern-based risk scan, not a security review. Our crawler flagged:
- pip install
Always read a skill's source content before installing. Patterns alone don't mean the skill is malicious — but they warrant attention.
source content
Pytest Runner Skill
Purpose
Single responsibility: Execute and manage pytest test suites with proper configuration, coverage reporting, and failure analysis. (BP-4)
Grounding Checkpoint (Archetype 1 Mitigation)
Before executing, VERIFY:
- Python virtual environment is active or available
- pytest is installed (`pip show pytest`)
- Test directory exists with test files
- pytest.ini or pyproject.toml configured (optional)
DO NOT run tests without verifying environment.
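The checklist above can be sketched as a small verification helper. This is a minimal sketch; the `environment_ready` name and the `tests` default are illustrative, not part of the skill itself:

```python
import importlib.util
import os
import pathlib
import sys

def environment_ready(test_dir: str = "tests") -> list:
    """Return a list of problems; an empty list means it is safe to run."""
    problems = []
    # A venv is active if VIRTUAL_ENV is set or sys.prefix differs from base
    if not (os.environ.get("VIRTUAL_ENV") or sys.prefix != sys.base_prefix):
        problems.append("no virtual environment active")
    # pytest must be importable before running any tests
    if importlib.util.find_spec("pytest") is None:
        problems.append("pytest not installed")
    # The test directory must contain at least one test file
    if not list(pathlib.Path(test_dir).glob("test_*.py")):
        problems.append("no test files found in %s/" % test_dir)
    return problems
```

An agent would call this once before Step 1 and escalate to the user if the returned list is non-empty.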
Uncertainty Escalation (Archetype 2 Mitigation)
ASK USER instead of guessing when:
- Multiple test directories detected - which to run?
- Coverage threshold unclear
- Parallel execution appropriate?
- Specific markers or keywords needed?
NEVER modify test configurations without user approval.
Context Scope (Archetype 3 Mitigation)
| Context Type | Included | Excluded |
|---|---|---|
| RELEVANT | Test files, pytest config, fixtures | Application code details |
| PERIPHERAL | Coverage reports, test markers | CI/CD pipelines |
| DISTRACTOR | Other language tests | Deployment configs |
Workflow Steps
Step 1: Environment Check (Grounding)
```bash
# Verify virtual environment
if [ -z "$VIRTUAL_ENV" ]; then
  # Activate if exists
  if [ -f "venv/bin/activate" ]; then
    source venv/bin/activate
  elif [ -f ".venv/bin/activate" ]; then
    source .venv/bin/activate
  else
    echo "WARNING: No virtual environment active"
  fi
fi

# Verify pytest installed
python -m pytest --version || pip install pytest
```
Step 2: Discover Tests
```bash
# List all test files
find . -name "test_*.py" -o -name "*_test.py" | head -20

# Show pytest collection
python -m pytest --collect-only -q
```
Step 3: Execute Tests
Basic execution:
```bash
python -m pytest tests/ -v
```
With coverage:
```bash
python -m pytest tests/ -v --cov=src --cov-report=term-missing --cov-report=html
```
Parallel execution:
```bash
python -m pytest tests/ -v -n auto  # requires pytest-xdist
```
With markers:
```bash
python -m pytest tests/ -v -m "unit"
python -m pytest tests/ -v -m "not slow"
```
Step 4: Analyze Results
```bash
# Parse test results
python -m pytest tests/ -v --tb=short 2>&1 | tee test_results.txt

# Extract failures
grep -E "^FAILED|^ERROR" test_results.txt

# Coverage summary
python -m pytest --cov=src --cov-report=term | grep -E "^TOTAL|^Name"
```
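The grep step above can also be done in Python when the agent needs the failure lines as structured data. A minimal sketch; the `extract_failures` helper is illustrative:

```python
import re

def extract_failures(output: str) -> list:
    """Collect FAILED/ERROR summary lines from pytest terminal output."""
    return [line for line in output.splitlines()
            if re.match(r"^(FAILED|ERROR)\b", line)]

# Example pytest output, abbreviated
sample = (
    "tests/test_math.py::test_add PASSED\n"
    "FAILED tests/test_math.py::test_div - ZeroDivisionError\n"
    "ERROR tests/test_io.py - ImportError\n"
)
print(extract_failures(sample))  # both the FAILED and the ERROR lines
```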
Recovery Protocol (Archetype 4 Mitigation)
On error:
- PAUSE - Capture test output
- DIAGNOSE - Check error type:
  - ImportError → Check dependencies, PYTHONPATH
  - FixtureError → Check conftest.py
  - CollectionError → Check test file syntax
  - Timeout → Reduce test scope or add markers
- ADAPT - Adjust test selection or configuration
- RETRY - With narrower scope (max 3 attempts)
- ESCALATE - Report failures with context
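The RETRY step (narrower scope, max 3 attempts) can be sketched as a loop over progressively narrower pytest scopes. This is a minimal sketch: `run_with_retries` and the injected `run` callable are illustrative; in practice `run` would wrap a `subprocess` call to `python -m pytest <scope>`:

```python
def run_with_retries(scopes, run, max_attempts=3):
    """Try progressively narrower scopes; stop at the first passing run.

    `scopes` is ordered widest-first, e.g.
    ["tests/", "tests/unit/", "tests/unit/test_core.py"].
    `run` executes pytest for one scope and returns (exit_code, output).
    """
    last = None
    for attempt, scope in enumerate(scopes[:max_attempts], start=1):
        code, output = run(scope)
        last = (attempt, scope, code, output)
        if code == 0:
            return last  # success: stop retrying
    return last  # still failing after max_attempts: escalate

# Stand-in for subprocess-based pytest execution (hypothetical)
def fake_run(scope):
    return (0 if scope == "tests/unit/" else 1, "output for " + scope)

result = run_with_retries(
    ["tests/", "tests/unit/", "tests/unit/test_core.py"], fake_run)
print(result)  # → (2, 'tests/unit/', 0, 'output for tests/unit/')
```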
Checkpoint Support
State saved to `.aiwg/working/checkpoints/pytest-runner/`:

```
checkpoints/pytest-runner/
├── test_collection.json   # Discovered tests
├── test_results.json      # Last run results
├── coverage_report.json   # Coverage data
└── failure_analysis.md    # Failure diagnostics
```
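Writing one of these checkpoint files can be sketched as follows. A minimal sketch: the `save_checkpoint` helper and the shape of the results dict are illustrative, not prescribed by the skill:

```python
import json
import pathlib

def save_checkpoint(name, data,
                    base=".aiwg/working/checkpoints/pytest-runner"):
    """Persist one checkpoint file (e.g. test_results.json) as JSON."""
    root = pathlib.Path(base)
    root.mkdir(parents=True, exist_ok=True)  # create the tree on first use
    path = root / name
    path.write_text(json.dumps(data, indent=2))
    return path

# Example: record the last run's summary (hypothetical numbers)
path = save_checkpoint("test_results.json",
                       {"passed": 12, "failed": 1, "duration_s": 3.4})
print(json.loads(path.read_text())["failed"])  # → 1
```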
Common Pytest Options
| Option | Purpose |
|---|---|
| `-v` | Verbose output |
| `-x` | Stop on first failure |
| `-s` | Show print statements |
| `--lf` | Run last failed tests |
| `--ff` | Run failed tests first |
| `-k EXPR` | Filter by name pattern |
| `-m MARKER` | Filter by marker |
| `--tb=short` | Shorter tracebacks |
Configuration Templates
pytest.ini:
```ini
[pytest]
testpaths = tests
python_files = test_*.py *_test.py
python_functions = test_*
addopts = -v --tb=short
markers =
    unit: Unit tests
    integration: Integration tests
    slow: Slow tests
```
pyproject.toml:
```toml
[tool.pytest.ini_options]
testpaths = ["tests"]
python_files = ["test_*.py", "*_test.py"]
addopts = "-v --tb=short"
```
References
- pytest documentation: https://docs.pytest.org/
- REF-001: Production-Grade Agentic Workflows (BP-4 single responsibility)
- REF-002: LLM Failure Modes (Archetype 1 grounding)