Marketplace tdd-pytest
Python/pytest TDD specialist for test-driven development workflows.
Install

Source · Clone the upstream repo

```bash
git clone https://github.com/aiskillstore/marketplace
```

Claude Code · Install into ~/.claude/skills/

```bash
T=$(mktemp -d) && git clone --depth=1 https://github.com/aiskillstore/marketplace "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/89jobrien/tdd-pytest" ~/.claude/skills/aiskillstore-marketplace-tdd-pytest && rm -rf "$T"
```
Manifest: `skills/89jobrien/tdd-pytest/SKILL.md`
TDD-Pytest Skill
Activate this skill when the user needs help with:
- Writing tests using TDD methodology (Red-Green-Refactor)
- Auditing existing pytest test files for quality
- Running tests with coverage
- Generating test reports to `TESTING_REPORT.local.md`
- Setting up pytest configuration in `pyproject.toml`
TDD Workflow
Red-Green-Refactor Cycle
1. RED - Write a failing test first (see the sketch after this list)
   - Test should fail for the right reason (not import errors)
   - Test should be minimal and focused
   - Show the failing test output
2. GREEN - Write minimal code to pass
   - Only implement what's needed to pass the test
   - No premature optimization
   - Show the passing test output
3. REFACTOR - Improve code while keeping tests green
   - Clean up duplication
   - Improve naming
   - Extract functions/classes if needed
   - Run tests after each change
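A minimal sketch of one pass through the cycle, using a hypothetical `slugify` helper; the module, stub, and test below are illustrative only, not part of this skill:

```python
# src/slugify.py -- stub so the test fails on behavior, not on an import error
def slugify(text: str) -> str:
    return text  # placeholder


# tests/test_slugify.py -- RED: asserts behavior the stub does not yet provide
from src.slugify import slugify

def test_slugify_replaces_spaces_with_hyphens():
    assert slugify("Hello World") == "hello-world"


# GREEN: replace the stub body with the minimal implementation that passes:
#     return text.lower().replace(" ", "-")
# REFACTOR: improve names/structure while rerunning the test after each change.
```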
Test Organization
File Structure
```text
project/
  src/
    module.py
  tests/
    conftest.py       # Shared fixtures
    test_module.py    # Tests for module.py
  pyproject.toml      # Pytest configuration
```
Naming Conventions
- Test files: `test_*.py` or `*_test.py`
- Test functions: `test_*`
- Test classes: `Test*`
- Fixtures: Descriptive names (`sample_user`, `mock_database`); the skeleton below applies these patterns together
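A hypothetical test file illustrating the conventions (the fixture, class, and test names are examples, not part of the skill):

```python
# tests/test_users.py -- matches the test_*.py file pattern
import pytest

@pytest.fixture
def sample_user():
    # Descriptive fixture name rather than a generic "data"
    return {"name": "Ada", "active": True}

class TestUserActivation:  # matches the Test* class pattern
    def test_active_user_is_reported_active(self, sample_user):  # test_* function
        assert sample_user["active"] is True
```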
Pytest Best Practices
Fixtures
```python
import pytest

@pytest.fixture
def sample_config():
    return {"key": "value"}

@pytest.fixture
def mock_client(mocker):
    return mocker.MagicMock()
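For illustration, a test consuming both fixtures; note that the `mocker` fixture comes from the pytest-mock plugin, and the assertions here are a sketch rather than part of the skill:

```python
def test_client_reads_config(sample_config, mock_client):
    # Fixtures are injected by parameter name; no imports needed in the test
    mock_client.fetch.return_value = sample_config["key"]

    assert mock_client.fetch() == "value"
    mock_client.fetch.assert_called_once()
```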
Parametrization
```python
import pytest

@pytest.mark.parametrize("input,expected", [
    ("hello", "HELLO"),
    ("world", "WORLD"),
    ("", ""),
])
def test_uppercase(input, expected):
    assert input.upper() == expected
```
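When a table of cases grows, `pytest.param` can attach readable ids to each case; a brief sketch:

```python
import pytest

@pytest.mark.parametrize(
    "raw,expected",
    [
        pytest.param("hello", "HELLO", id="lowercase-word"),
        pytest.param("", "", id="empty-string"),
    ],
)
def test_uppercase_with_ids(raw, expected):
    assert raw.upper() == expected
```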
Async Tests
```python
import pytest

@pytest.mark.asyncio
async def test_async_function():
    result = await async_operation()
    assert result == expected
```
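Async fixtures follow the same pattern. Assuming pytest-asyncio with `asyncio_mode = "auto"` (as configured below), a plain `@pytest.fixture` works for async setup; the `open_connection` helper here is hypothetical:

```python
import pytest

@pytest.fixture
async def connection():
    conn = await open_connection()  # hypothetical async helper
    yield conn                      # teardown runs after the test finishes
    await conn.close()

async def test_query_returns_rows(connection):
    rows = await connection.query("SELECT 1")
    assert rows
```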
Exception Testing
```python
import pytest

def test_raises_value_error():
    with pytest.raises(ValueError, match="invalid input"):
        process_input(None)
```
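When the check needs more than a message match, `pytest.raises` can capture the exception for further assertions; `process_input` is the same hypothetical function as above:

```python
import pytest

def test_raises_value_error_with_details():
    with pytest.raises(ValueError) as excinfo:
        process_input(None)

    # ExceptionInfo exposes the raised exception object
    assert "invalid input" in str(excinfo.value)
```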
Running Tests
With uv
```bash
uv run pytest                               # Run all tests
uv run pytest tests/test_module.py          # Run specific file
uv run pytest -k "test_name"                # Run by name pattern
uv run pytest -v --tb=short                 # Verbose with short traceback
uv run pytest --cov=src --cov-report=term   # With coverage
```
Common Flags
- `--verbose` / `-v` - Detailed output
- `--exitfirst` / `-x` - Stop on first failure
- `--tb=short` - Short tracebacks
- `--tb=no` - No tracebacks
- `-k EXPR` - Run tests matching expression
- `-m MARKER` - Run tests with marker
- `--cov=PATH` - Coverage for path
- `--cov-report=term-missing` - Show missing lines
pyproject.toml Configuration
Minimal Setup
```toml
[tool.pytest.ini_options]
asyncio_mode = "auto"
testpaths = ["tests"]
```
Full Configuration
```toml
[tool.pytest.ini_options]
asyncio_mode = "auto"
asyncio_default_fixture_loop_scope = "function"
testpaths = ["tests"]
python_files = ["test_*.py", "*_test.py"]
python_functions = ["test_*"]
python_classes = ["Test*"]
addopts = "-v --tb=short"
markers = [
    "slow: marks tests as slow",
    "integration: marks integration tests",
]
filterwarnings = [
    "ignore::DeprecationWarning",
]

[tool.coverage.run]
source = ["src"]
branch = true
omit = ["tests/*", "*/__init__.py"]

[tool.coverage.report]
exclude_lines = [
    "pragma: no cover",
    "if TYPE_CHECKING:",
    "raise NotImplementedError",
]
fail_under = 80
show_missing = true
```
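To show how the `exclude_lines` patterns above behave, here is a hypothetical source module whose marked lines would be omitted from the coverage report (module and class names are illustrative):

```python
from typing import TYPE_CHECKING

if TYPE_CHECKING:                         # excluded by the "if TYPE_CHECKING:" pattern
    from src.models import User           # hypothetical import used only for typing

class Exporter:
    def export(self, user: "User") -> str:
        raise NotImplementedError         # excluded by the "raise NotImplementedError" pattern

def _debug_dump(user: "User") -> None:    # pragma: no cover
    print(user)                           # whole function excluded by the pragma
```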
Report Generation
The `TESTING_REPORT.local.md` file should contain:
- Test execution summary (passed/failed/skipped)
- Coverage metrics by module
- Audit findings by severity
- Recommendations with file:line references
- Evidence (command outputs)
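One possible way to capture the evidence section is to run the suite with coverage and append the raw output to the report; this helper is an illustrative sketch, not a command shipped with the skill:

```python
import subprocess
from pathlib import Path

def capture_test_evidence(report_path: str = "TESTING_REPORT.local.md") -> None:
    """Run the suite with coverage and append the raw output as evidence."""
    result = subprocess.run(
        ["uv", "run", "pytest", "--cov=src", "--cov-report=term-missing"],
        capture_output=True,
        text=True,
    )
    with Path(report_path).open("a", encoding="utf-8") as report:
        report.write("\nEvidence (pytest output):\n\n")
        report.write(result.stdout)
```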
Integration with Conversation
When the user asks to write tests:
- Check conversation history for context about what to test
- Identify the code/feature being discussed
- If unclear, ask clarifying questions:
- "What specific behavior should I test?"
- "Should I include edge cases for X?"
- "Do you want unit tests, integration tests, or both?"
- Follow TDD: Write failing test first, then implement
Commands Available
- `/tdd-pytest:init` - Initialize pytest configuration
- `/tdd-pytest:test [path]` - Write tests using TDD (context-aware)
- `/tdd-pytest:test-all` - Run all tests
- `/tdd-pytest:report` - Generate/update TESTING_REPORT.local.md