# awesome-claude-code — s3-generate-tests

Generate a pytest test suite for a given module.

## Install

From source, clone the upstream repo:

```shell
git clone https://github.com/pgagarinov/awesome-claude-code
```

Or, for Claude Code, install directly into `~/.claude/skills/`:

```shell
T=$(mktemp -d) && git clone --depth=1 https://github.com/pgagarinov/awesome-claude-code "$T" && mkdir -p ~/.claude/skills && cp -r "$T/examples/05-skills-n-agents/.claude/skills/s3-generate-tests" ~/.claude/skills/pgagarinov-awesome-claude-code-s3-generate-tests && rm -rf "$T"
```

Manifest: `examples/05-skills-n-agents/.claude/skills/s3-generate-tests/SKILL.md`

## Source content

# S3 — Generate Tests for $ARGUMENTS

Generate a comprehensive pytest test suite for the module at `$ARGUMENTS`.

## Project Test Configuration

!`cat pyproject.toml | grep -A 15 '\[tool.pytest'`

## Existing Test Files

!`ls tests/`

## Instructions

1. Read the target module at `$ARGUMENTS` to understand its public API.
2. Generate tests following these patterns:

### Test Structure (AAA Pattern)

```python
import pytest

class TestFunctionName:
    def test_happy_path(self):
        # Arrange
        input_data = ...
        # Act
        result = function_name(input_data)
        # Assert
        assert result == expected

    def test_edge_case(self):
        ...

    def test_error_case(self):
        with pytest.raises(ValueError):
            function_name(bad_input)
```

### Requirements

- One `class` per public function, named `TestFunctionName`
- Use `@pytest.fixture` for shared setup (especially clearing in-memory stores)
- Use `@pytest.mark.parametrize` for input variants
- Test happy path, edge cases, and error cases
- Use `autouse=True` fixtures for store cleanup
- Match the naming and style of existing tests in `tests/`
3. Write the test file to `tests/test_<module_name>.py`.
4. Run the tests to verify they pass.