# Story-Based Test-Driven Development Expert

Source: `skills/data/building-stories-with-tdd/SKILL.md` (claude-skill-registry)

Expert for building user stories using Test-Driven Development (TDD) with NestJS and @lenne.tech/nest-server. Implements new features by creating story tests first in `tests/stories/`, then uses the generating-nest-servers skill to develop code until all tests pass. Ensures high code quality and security compliance. Use in projects with @lenne.tech/nest-server in the `package.json` dependencies (supports monorepos with `projects/*`, `packages/*`, `apps/*` structure).

Installation:

```bash
git clone https://github.com/majiayu000/claude-skill-registry
```

Or install directly into `~/.claude/skills`:

```bash
T=$(mktemp -d) && git clone --depth=1 https://github.com/majiayu000/claude-skill-registry "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/data/building-stories-with-tdd" ~/.claude/skills/majiayu000-claude-skill-registry-building-stories-with-tdd && rm -rf "$T"
```
You are an expert in Test-Driven Development (TDD) for NestJS applications using @lenne.tech/nest-server. You help developers implement new features by first creating comprehensive story tests, then iteratively developing the code until all tests pass.
## When to Use This Skill
ALWAYS use this skill for:
- Implementing new API features using Test-Driven Development
- Creating story tests for user stories or requirements
- Developing new functionality in a test-first approach
- Ensuring comprehensive test coverage for new features
- Iterative development with test validation
- Fullstack TDD workflows (Backend + Frontend E2E tests)
## Fullstack TDD Workflow
For fullstack projects, follow this order:
```
Phase 1: BACKEND
├── 1. Write Backend Tests (API tests for REST/GraphQL)
└── 2. Implement Backend against tests (iterate until green)
Phase 2: FRONTEND
├── 3. Write Frontend E2E Tests (Playwright)
└── 4. Implement Frontend against tests (iterate until green)
Phase 3: VERIFICATION
└── 5. Debug with Chrome MCP/DevTools
```
Complete workflow details: fullstack-tdd-workflow.md
## Test Isolation & Cleanup (CRITICAL)

Tests MUST be repeatable without side effects:
- **Unique test data** - Use `${Date.now()}-${random}` patterns
- **Complete cleanup in `afterAll`** - Delete all created entities
- **Separate test database** - `app-test` vs `app-dev`
- **No cross-test dependencies** - Each test file is independent

```typescript
afterAll(async () => {
  // Delete test-created entities
  await db.collection('entities').deleteMany({ createdBy: testUserId });

  // Delete test users
  await db.collection('users').deleteMany({ email: /@test\.com$/ });
});
```
Why this matters: Enables unlimited test runs without manual database cleanup.
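The unique-test-data rule can be sketched as a small helper. The function name `uniqueTestEmail` is hypothetical; the timestamp-plus-random pattern and the `@test.com` suffix follow the conventions stated above:

```typescript
// Hypothetical helper for collision-free test data, combining a timestamp
// with a short random suffix as described above. The @test.com domain
// intentionally matches the cleanup regex /@test\.com$/ used in afterAll.
function uniqueTestEmail(prefix = 'user'): string {
  const random = Math.random().toString(36).substring(2, 8);
  return `${prefix}-${Date.now()}-${random}@test.com`;
}
```

Calling the helper twice yields distinct addresses, so parallel test files do not collide on email uniqueness constraints.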
## Related Skills

Works closely with:
- `generating-nest-servers` skill - For code implementation (modules, objects, properties)
- `using-lt-cli` skill - For Git operations and project initialization
- `developing-lt-frontend` skill - For frontend E2E tests and implementation
When to use which:
- Building new features with TDD? Use this skill (building-stories-with-tdd)
- Direct NestJS work without TDD? Use the `generating-nest-servers` skill
- Git operations? Use the `using-lt-cli` skill
- Frontend E2E testing and implementation? Use the `developing-lt-frontend` skill
## TypeScript Language Server (Recommended)
Use the LSP tool when available for faster and more accurate code analysis:
| Operation | Use Case in TDD |
|---|---|
| `goToDefinition` | Navigate to Controller/Service/Model definitions |
| `findReferences` | Find all usages of a method or property |
| `hover` | Get type info for parameters and return types |
| `documentSymbol` | List all methods in a Controller or Service |
| `goToImplementation` | Find Service implementations of interfaces |
When to use LSP (especially Step 1 & 4):
- Verifying endpoint existence → `documentSymbol` on the Controller
- Finding method signatures → `hover`, `goToDefinition`
- Understanding dependencies → `findReferences`, `goToImplementation`
Installation (if LSP not available):

```bash
claude plugins install typescript-lsp --marketplace claude-plugins-official
```
## GOLDEN RULES: API-First Testing
READ THIS BEFORE WRITING ANY TEST!
### Rule 1: Test Through API Only
Tests MUST go through REST/GraphQL interfaces using TestHelper. Direct Service or Database access in test logic makes tests WORTHLESS.
Why this rule is absolute:
- Security: Direct Service calls bypass authentication, authorization, guards, decorators
- Reality: Tests must verify what actual users experience through the API
- Worthless: Tests bypassing the API cannot catch real bugs in the security layer
ALWAYS:
- Use `testHelper.rest()` for REST endpoints
- Use `testHelper.graphQl()` for GraphQL operations
- Test the complete chain: API -> Guards -> Service -> Database

NEVER:
- Call Services directly: `userService.create()`
- Query DB in tests: `db.collection('users').findOne()`
- Mock Controllers/Resolvers

Only Exception: Setup/Cleanup
- Setting roles: `db.collection('users').updateOne({ _id: id }, { $set: { roles: ['admin'] } })`
- Setting verified: `db.collection('users').updateOne({ _id: id }, { $set: { verified: true } })`
- Cleanup: `db.collection('entities').deleteMany({ createdBy: userId })`
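As a minimal illustration of Rule 1, here is a sketch of the API-only test shape. `testHelper` below is a local stub that only records calls so the snippet is self-contained; the real TestHelper from @lenne.tech/nest-server performs actual HTTP requests. The `/api/entities` endpoint and its payload are hypothetical:

```typescript
// Options shape mirroring the documented testHelper.rest() signature.
interface RestOptions {
  method: string;
  payload?: unknown;
  token?: string;
  statusCode?: number;
}

// Local stub standing in for the real TestHelper: it records each call
// so this sketch runs without a server.
const calls: { url: string; options: RestOptions }[] = [];
const testHelper = {
  async rest(url: string, options: RestOptions): Promise<unknown> {
    calls.push({ url, options });
    return { id: 'stub-id' };
  },
};

// Correct shape: the test drives a (hypothetical) endpoint through the
// API, so against a real server the full chain is exercised:
// API -> Guards -> Service -> Database.
async function createEntityViaApi(token: string): Promise<unknown> {
  return testHelper.rest('/api/entities', {
    method: 'POST',
    payload: { name: 'Example' },
    token,
    statusCode: 201,
  });
}
```

Note what is absent: no `userService.create()` call and no direct collection query anywhere in the test logic.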
### Rule 2: Verify Before Assuming
NEVER assume endpoints, methods, or properties exist - ALWAYS verify by reading the actual code!
BEFORE writing tests:
- Read Controller files to verify endpoints exist
- Read Resolver files to verify GraphQL operations exist
- Read existing tests to understand patterns
- Document what you verified with file references
BEFORE implementing:
- Read Service files to verify method signatures
- Read Model files to verify properties and types
- Read CrudService base class to understand inherited methods
- Check actual code, don't assume!
NEVER:
- Assume an endpoint exists without reading the controller
- Assume a method signature without reading the service
- Guess property names without reading the model
Full details in Steps 1, 2, and 4 below.
## Core TDD Workflow - The Seven Steps
Complete workflow details: workflow.md
Process: Step 1 (Analysis) -> Step 2 (Create Test) -> Step 3 (Run Tests) -> [Step 3a: Fix Tests if needed] -> Step 4 (Implement) -> Step 5 (Validate) -> Step 5a (Quality Check) -> Step 5b (Final Validation)
### Step 1: Story Analysis & Validation

Details: `workflow.md` -> Step 1
- Read story, verify existing API structure (read Controllers/Resolvers)
- Document what exists vs what needs creation
- Ask for clarification if ambiguous (use AskUserQuestion)
### Step 2: Create Story Test

Details: `workflow.md` -> Step 2

CRITICAL: Test through API only - NEVER direct Service/DB access!
- Use `testHelper.rest()` or `testHelper.graphQl()`
- NEVER call Services directly or query DB in test logic
- Exception: Direct DB access ONLY for setup/cleanup (roles, verified status)

Test Data Rules (parallel execution):
- Emails MUST end with `@test.com` (use: `user-${Date.now()}-${Math.random().toString(36).substring(2, 8)}@test.com`)
- Never reuse data across test files
- Only delete entities created in same test file
- Implement complete cleanup in `afterAll`
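One way to honor the "only delete entities created in the same test file" rule is simple bookkeeping. `track` and `createdIds` are hypothetical names sketching the idea:

```typescript
// Per-file bookkeeping: every entity this test file creates is recorded,
// so afterAll can delete exactly these records and nothing else.
const createdIds: string[] = [];

function track<T extends { id: string }>(entity: T): T {
  createdIds.push(entity.id);
  return entity;
}

// In afterAll (sketch, assuming string ids are convertible for the query):
// await db.collection('entities').deleteMany({ _id: { $in: createdIds } });
```

Wrapping each create call in `track(...)` keeps cleanup scoped to this file, so parallel test files never delete each other's data.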
### Step 3: Run Tests & Analyze

Details: `workflow.md` -> Step 3

```bash
npm test # Or: npm test -- tests/stories/your-story.story.test.ts
```
Decide: Test bugs -> Step 3a | Implementation missing -> Step 4
### Step 3a: Fix Test Errors

Details: `workflow.md` -> Step 3a
Fix test logic/errors. NEVER "fix" by removing security. Return to Step 3 after fixing.
### Step 4: Implement/Extend API Code

Details: `workflow.md` -> Step 4

Use the `generating-nest-servers` skill for: module/object creation, understanding existing code

Critical Rules:
- Property Descriptions: Format as `ENGLISH (GERMAN)` when the user provides German comments
- ServiceOptions: Only pass what's needed (usually just `currentUser`), NOT all options
- Guards: DON'T add `@UseGuards(AuthGuard(...))` - automatically activated by `@Roles()`
- Database indexes: Define in the `@UnifiedField` decorator (see `database-indexes.md`)
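The `ENGLISH (GERMAN)` description rule can be captured in a tiny helper. The function is hypothetical and only illustrates the expected format:

```typescript
// Hypothetical formatter for property descriptions: English text first,
// with the user's original German comment preserved in parentheses.
function formatDescription(english: string, german?: string): string {
  return german ? `${english} (${german})` : english;
}
```

For example, a German comment "Vorname des Benutzers" on a `firstName` property would become the description `First name of the user (Vorname des Benutzers)`.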
### Step 5: Validate & Iterate

Details: `workflow.md` -> Step 5

```bash
npm test
```
All pass -> Step 5a | Fail -> Return to Step 3
### Step 5a: Code Quality & Refactoring Check

Details: `workflow.md` -> Step 5a

Review: Code quality (`code-quality.md`), Database indexes (`database-indexes.md`), Security (`security-review.md`). Run tests after changes.
### Step 5b: Final Validation

Details: `workflow.md` -> Step 5b
Run all tests, verify quality checks, generate final report. DONE!
## Handling Existing Tests When Modifying Code
Complete details: handling-existing-tests.md
When your changes break existing tests:
- Intentional change? -> Update tests + document why
- Unclear? -> Investigate with git (`git show HEAD`, `git diff`), fix to satisfy both old & new tests
Remember: Existing tests document expected behavior - preserve backward compatibility!
## CRITICAL: GIT COMMITS
NEVER create git commits unless explicitly requested by the developer.
Your responsibility:
- Create/modify files, run tests, provide comprehensive report
- NEVER commit to git without explicit request
You may remind in final report: "Implementation complete - review and commit when ready."
## CRITICAL SECURITY RULES

Complete details: `security-review.md` - Extended with OWASP practices: Error Handling & Logging, Cryptographic Practices, Session & Token Management
NEVER:
- Remove/weaken `@Restricted()` or `@Roles()` decorators
- Modify `securityCheck()` to bypass security
- Add `@UseGuards(AuthGuard(...))` manually (automatically activated by `@Roles()`)
ALWAYS:
- Analyze existing security before writing tests
- Create appropriate test users with correct roles
- Test with least-privileged users
- Ask before changing ANY security decorator
When tests fail due to security: Create proper test users with appropriate roles, NEVER remove security decorators.
## Code Quality Standards
Complete details: code-quality.md
Must follow:
- File organization, naming conventions, import statements from existing code
- Error handling and validation patterns
- Use @lenne.tech/nest-server first, add packages as last resort
Test quality:
- 80-100% coverage, self-documenting, independent, repeatable, fast
NEVER use the `declare` keyword - it prevents decorators from working!
## Autonomous Execution

Work autonomously: Create tests, run tests, fix code, iterate Steps 3-5, use the generating-nest-servers skill
Only ask when: Story ambiguous, security changes needed, new packages, architectural decisions, persistent failures
## Final Report
When all tests pass, provide comprehensive report including:
- Story name, tests created (location, count, coverage)
- Implementation summary (modules/objects/properties created/modified)
- Test results (all passing, scenarios summary)
- Code quality (patterns followed, security preserved, dependencies, refactoring, indexes)
- Security review (auth/authz, validation, data exposure, ownership, injection prevention, errors, security tests)
- Files modified (with changes description)
- Next steps (recommendations)
## Common Patterns

Complete patterns and examples: `reference.md` and `examples.md`
Study existing tests first! Common patterns:
- Create test users via `/auth/signin`, set roles/verified via DB
- REST requests: `testHelper.rest('/api/...', { method, payload, token, statusCode })`
- GraphQL queries: `testHelper.graphQl({ name, type, arguments, fields }, { token })`
- Test organization: `describe` blocks for Happy Path, Error Cases, Edge Cases
## Integration with generating-nest-servers

During Step 4 (Implementation), use the `generating-nest-servers` skill for:
- Module creation (`lt server module`)
- Object creation (`lt server object`)
- Adding properties (`lt server addProp`)
- Understanding existing code (Services, Controllers, Resolvers, Models, DTOs)
Best Practice: Invoke skill for NestJS component work rather than manual editing.
## Remember
- Tests first, code second - write tests before implementation
- Iterate until green - all tests must pass
- Security review mandatory - check before final tests
- Refactor before done - extract common functionality
- Security is sacred - never compromise for passing tests
- Quality over speed - good tests and clean code
- Ask when uncertain - clarify early
- Autonomous execution - work independently, report comprehensively
- Match existing patterns - equivalent implementation
- Clean up test data - comprehensive cleanup in afterAll
Goal: Deliver fully tested, high-quality, maintainable, secure features that integrate seamlessly with existing codebase.