## Install

### Source · Clone the upstream repo

```shell
git clone https://github.com/pgagarinov/awesome-claude-code
```

### Claude Code · Install into `~/.claude/skills/`

```shell
T=$(mktemp -d) \
  && git clone --depth=1 https://github.com/pgagarinov/awesome-claude-code "$T" \
  && mkdir -p ~/.claude/skills \
  && cp -r "$T/examples/05-skills-n-agents/.claude/skills/s3-generate-tests" \
       ~/.claude/skills/pgagarinov-awesome-claude-code-s3-generate-tests \
  && rm -rf "$T"
```

Manifest: `examples/05-skills-n-agents/.claude/skills/s3-generate-tests/SKILL.md`

## Source content
# S3 — Generate Tests for $ARGUMENTS

Generate a comprehensive pytest test suite for the module at `$ARGUMENTS`.
## Project Test Configuration

!`grep -A 15 '\[tool.pytest' pyproject.toml`
## Existing Test Files

!`ls tests/`
## Instructions

- Read the target module at `$ARGUMENTS` to understand its public API
- Generate tests following these patterns:
### Test Structure (AAA Pattern)

```python
class TestFunctionName:
    def test_happy_path(self):
        # Arrange
        input_data = ...
        # Act
        result = function_name(input_data)
        # Assert
        assert result == expected

    def test_edge_case(self):
        ...

    def test_error_case(self):
        with pytest.raises(ValueError):
            function_name(bad_input)
```
### Requirements

- One `class` per public function, named `TestFunctionName`
- Use `@pytest.fixture` for shared setup (especially clearing in-memory stores)
- Use `@pytest.mark.parametrize` for input variants
- Test the happy path, edge cases, and error cases
- Use `autouse=True` fixtures for store cleanup
- Match the naming and style of the existing tests in `tests/`
- Write the test file to `tests/test_<module_name>.py`
- Run the tests to verify they pass
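A test suite meeting these requirements might look like the following sketch. The module under test (`celsius_to_fahrenheit` with an in-memory `_cache` store) is a hypothetical stand-in, not part of the upstream repo; it exists only so the fixture, parametrization, and error-case patterns are concrete:

```python
import pytest

# Hypothetical module under test: converts Celsius to Fahrenheit
# and memoizes results in an in-memory store.
_cache = {}

def celsius_to_fahrenheit(celsius):
    if not isinstance(celsius, (int, float)):
        raise ValueError(f"expected a number, got {celsius!r}")
    if celsius not in _cache:
        _cache[celsius] = celsius * 9 / 5 + 32
    return _cache[celsius]


class TestCelsiusToFahrenheit:
    @pytest.fixture(autouse=True)
    def clear_store(self):
        # autouse fixture: reset the in-memory store around every test
        _cache.clear()
        yield
        _cache.clear()

    @pytest.mark.parametrize(
        "celsius, expected",
        [(0, 32.0), (100, 212.0), (-40, -40.0)],  # -40 is the fixed point
    )
    def test_happy_path(self, celsius, expected):
        # Act
        result = celsius_to_fahrenheit(celsius)
        # Assert
        assert result == expected

    def test_error_case(self):
        with pytest.raises(ValueError):
            celsius_to_fahrenheit("warm")
```

One class groups all cases for the single public function, the `autouse=True` fixture clears the store without being requested explicitly, and `parametrize` covers the input variants in one test body.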