Marketplace · pytest-mastery

Install

Source · Clone the upstream repo:

```bash
git clone https://github.com/aiskillstore/marketplace
```

Claude Code · Install into ~/.claude/skills/:

```bash
T=$(mktemp -d) && git clone --depth=1 https://github.com/aiskillstore/marketplace "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/abdullahmalik17/pytest-mastery" ~/.claude/skills/aiskillstore-marketplace-pytest-mastery && rm -rf "$T"
```

Manifest: skills/abdullahmalik17/pytest-mastery/SKILL.md

Source content:
pytest Testing with uv
Quick Reference
```bash
# Run all tests
uv run pytest

# Run with verbose output
uv run pytest -v

# Run a specific file
uv run pytest tests/test_example.py

# Run a specific test function
uv run pytest tests/test_example.py::test_function_name

# Run tests matching a pattern
uv run pytest -k "pattern"

# Run with coverage
uv run pytest --cov=src --cov-report=html
```
Installation
```bash
# Add pytest as a dev dependency
uv add --dev pytest

# Add coverage support
uv add --dev pytest-cov

# Add async support (for FastAPI)
uv add --dev pytest-asyncio httpx
```
Test Discovery
pytest automatically discovers tests following these conventions:
- Files: `test_*.py` or `*_test.py`
- Functions: `test_*`
- Classes: `Test*` (no `__init__` method)
- Methods: `test_*` inside `Test*` classes
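For example, everything in this hypothetical file is collected without any registration (the file, class, and test names are illustrative):

```python
# tests/test_discovery.py — file name matches test_*.py

def test_standalone():
    # Collected: top-level function named test_*
    assert 1 + 1 == 2

class TestMath:
    # Collected: class named Test* with no __init__ method
    def test_addition(self):
        # Collected: method named test_* inside a Test* class
        assert 2 + 2 == 4
```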
Standard project structure:
```
project/
├── src/
│   └── myapp/
├── tests/
│   ├── __init__.py
│   ├── conftest.py        # Shared fixtures
│   ├── test_unit.py
│   └── integration/
│       └── test_api.py
└── pyproject.toml
```
Fixtures
Fixtures provide reusable test setup/teardown:
```python
import pytest

@pytest.fixture
def sample_user():
    return {"id": 1, "name": "Test User"}

@pytest.fixture
def db_connection():
    conn = create_connection()  # placeholder for your real setup
    yield conn                  # test runs here
    conn.close()                # teardown

def test_user_name(sample_user):
    assert sample_user["name"] == "Test User"
```
Fixture Scopes
```python
@pytest.fixture(scope="function")  # Default: new instance per test
@pytest.fixture(scope="class")     # Once per test class
@pytest.fixture(scope="module")    # Once per module
@pytest.fixture(scope="session")   # Once per test session
```
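To see scoping in action, this small sketch (names are illustrative) counts how often a module-scoped fixture body actually runs:

```python
import pytest

CALLS = []

@pytest.fixture(scope="module")
def expensive_resource():
    CALLS.append(1)  # runs once for the whole module
    return {"ready": True}

def test_first(expensive_resource):
    assert len(CALLS) == 1

def test_second(expensive_resource):
    assert len(CALLS) == 1  # still 1: the fixture instance was reused
```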
Shared Fixtures (conftest.py)
Place fixtures in `tests/conftest.py` to make them automatically available to every test in the suite:
```python
# tests/conftest.py
import pytest
from fastapi.testclient import TestClient

from myapp.main import app  # adjust to your application's import path

@pytest.fixture
def api_client():
    return TestClient(app)
```
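Any test in the suite can then request the fixture by name with no import; a minimal usage example (the `/health` endpoint is illustrative):

```python
# tests/test_health.py — api_client is injected from conftest.py automatically

def test_health_endpoint(api_client):
    response = api_client.get("/health")  # assumes your app exposes GET /health
    assert response.status_code == 200
```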
Parametrization
Run same test with multiple inputs:
```python
import pytest

@pytest.mark.parametrize("input,expected", [
    (1, 2),
    (2, 4),
    (3, 6),
])
def test_double(input, expected):
    assert input * 2 == expected

@pytest.mark.parametrize("value", [None, "", [], {}])
def test_falsy_values(value):
    assert not value
```
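Individual cases can also carry readable ids or per-case marks via `pytest.param`; a small sketch (the ids are illustrative, and the `slow` marker is registered in the pyproject.toml section below):

```python
import pytest

@pytest.mark.parametrize("value,expected", [
    pytest.param(1, 2, id="one"),
    pytest.param(0, 0, id="zero"),
    pytest.param(-1, -2, id="negative", marks=pytest.mark.slow),
])
def test_double_with_ids(value, expected):
    assert value * 2 == expected
```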
Common Options
| Option | Description |
|---|---|
| `-v` | Verbose output |
| `-vv` | More verbose |
| `-q` | Quiet mode |
| `-x` | Stop on first failure |
| `--lf` | Run last failed tests only |
| `--ff` | Run failures first |
| `-k "expr"` | Filter by name expression |
| `-m "marker"` | Run marked tests only |
| `--tb=short` | Shorter traceback |
| `--tb=no` | No traceback |
| `-s` | Show print statements |
| `--durations=10` | Show 10 slowest tests |
| `-n auto` | Parallel execution (pytest-xdist) |
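These options compose on one command line; two typical combinations during development might be:

```bash
# Re-run only the last failures, stop at the first, short tracebacks
uv run pytest --lf -x --tb=short

# Skip slow tests, quiet output, report the 10 slowest
uv run pytest -m "not slow" -q --durations=10
```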
Coverage Reports
```bash
# Terminal report
uv run pytest --cov=src

# HTML report (creates htmlcov/)
uv run pytest --cov=src --cov-report=html

# With minimum threshold (fails if below)
uv run pytest --cov=src --cov-fail-under=80

# Multiple report formats
uv run pytest --cov=src --cov-report=term --cov-report=xml
```
Markers
```python
import pytest

@pytest.mark.slow
def test_slow_operation(): ...

@pytest.mark.skip(reason="Not implemented")
def test_future_feature(): ...

@pytest.mark.skipif(condition, reason="...")
def test_conditional(): ...

@pytest.mark.xfail(reason="Known bug")
def test_known_failure(): ...
```
Run by marker:
```bash
uv run pytest -m "not slow"
uv run pytest -m "integration"
```
pyproject.toml Configuration
```toml
[tool.pytest.ini_options]
testpaths = ["tests"]
python_files = ["test_*.py"]
python_functions = ["test_*"]
addopts = "-v --tb=short"
markers = [
    "slow: marks tests as slow",
    "integration: integration tests",
]

[tool.coverage.run]
source = ["src"]
omit = ["tests/*", "*/__init__.py"]

[tool.coverage.report]
exclude_lines = [
    "pragma: no cover",
    "if TYPE_CHECKING:",
]
```
FastAPI Testing
See references/fastapi-testing.md for comprehensive FastAPI testing patterns (a minimal sketch follows the list), including:
- TestClient setup
- Async testing with httpx
- Database fixture patterns
- Dependency overrides
- Authentication testing
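As a starting point, here is a hedged sketch of two of those patterns (TestClient setup and a dependency override). The `myapp` imports and the `get_db` dependency are assumptions about your project layout, not something this skill defines:

```python
from fastapi.testclient import TestClient

from myapp.main import app      # hypothetical: your FastAPI instance
from myapp.deps import get_db   # hypothetical: your real DB dependency

def fake_db():
    # Stand-in for a test database/session
    return {}

# Dependency override: requests now receive fake_db's result instead
app.dependency_overrides[get_db] = fake_db

client = TestClient(app)

def test_read_root():
    response = client.get("/")  # assumes your app serves GET /
    assert response.status_code == 200
```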
Debugging Failed Tests
```bash
# Run with full traceback
uv run pytest --tb=long

# Drop into debugger on failure
uv run pytest --pdb

# Show local variables in traceback
uv run pytest -l

# Run only previously failed
uv run pytest --lf
```