Dotnet-skills code-testing-agent

install
source · Clone the upstream repo
git clone https://github.com/managedcode/dotnet-skills
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/managedcode/dotnet-skills "$T" && mkdir -p ~/.claude/skills && cp -r "$T/external-sources/upstreams/dotnet-skills/dotnet-test/skills/code-testing-agent" ~/.claude/skills/managedcode-dotnet-skills-code-testing-agent-423bee && rm -rf "$T"
manifest: external-sources/upstreams/dotnet-skills/dotnet-test/skills/code-testing-agent/SKILL.md
source content

Code Testing Generation Skill

An AI-powered skill that generates comprehensive, workable unit tests for any programming language using a coordinated multi-agent pipeline.

When to Use This Skill

Use this skill when you need to:

  • Generate unit tests for an entire project or specific files
  • Improve test coverage for existing codebases
  • Create test files that follow project conventions
  • Write tests that actually compile and pass
  • Add tests for new features or untested code

When Not to Use

  • Running or executing existing tests (use the run-tests skill)
  • Migrating between test frameworks (use migration skills)
  • Writing tests specifically for MSTest patterns (use writing-mstest-tests)
  • Debugging failing test logic

How It Works

This skill coordinates multiple specialized agents in a Research → Plan → Implement pipeline:

Pipeline Overview

┌─────────────────────────────────────────────────────────────┐
│                     TEST GENERATOR                          │
│  Coordinates the full pipeline and manages state            │
└─────────────────────┬───────────────────────────────────────┘
                      │
        ┌─────────────┼─────────────┐
        ▼             ▼             ▼
┌───────────┐  ┌───────────┐  ┌───────────────┐
│ RESEARCHER│  │  PLANNER  │  │  IMPLEMENTER  │
│           │  │           │  │               │
│ Analyzes  │  │ Creates   │  │ Writes tests  │
│ codebase  │→ │ phased    │→ │ per phase     │
│           │  │ plan      │  │               │
└───────────┘  └───────────┘  └───────┬───────┘
                                      │
                    ┌─────────┬───────┼───────────┐
                    ▼         ▼       ▼           ▼
              ┌─────────┐ ┌───────┐ ┌───────┐ ┌───────┐
              │ BUILDER │ │TESTER │ │ FIXER │ │LINTER │
              │         │ │       │ │       │ │       │
              │ Compiles│ │ Runs  │ │ Fixes │ │Formats│
              │ code    │ │ tests │ │ errors│ │ code  │
              └─────────┘ └───────┘ └───────┘ └───────┘
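The coordination above can be sketched in Python. Everything here — the agent stand-in functions, their return values, and the state-folder layout — is a hypothetical illustration of the flow, not the skill's actual implementation:

```python
from pathlib import Path

STATE = Path(".testagent")

# Stub agents: hypothetical stand-ins for the researcher, planner,
# and implementer sub-agents described in the diagram.
def analyze_codebase(request: str) -> str:
    return f"# Research\nRequest: {request}\n"

def create_phased_plan(research: str) -> list[str]:
    return ["Phase 1: core utilities", "Phase 2: services"]

def implement_phase(phase: str) -> None:
    # In the real pipeline this delegates to builder/tester/fixer/linter.
    print(f"implementing {phase}")

def run_pipeline(request: str) -> None:
    STATE.mkdir(exist_ok=True)
    research = analyze_codebase(request)            # RESEARCHER
    (STATE / "research.md").write_text(research)
    plan = create_phased_plan(research)             # PLANNER
    (STATE / "plan.md").write_text("\n".join(plan))
    for phase in plan:                              # IMPLEMENTER, one phase at a time
        implement_phase(phase)

run_pipeline("Generate unit tests for src/")
```

The key design point the sketch preserves is that each stage persists its output to `.testagent/` before the next stage starts, so the pipeline can be inspected or resumed mid-run.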

Step-by-Step Instructions

Step 1: Determine the user request

Make sure you understand what the user is asking for and at what scope. When the user does not express strong requirements for test style, coverage goals, or conventions, source the guidelines from unit-test-generation.prompt.md. This prompt provides best practices for discovering conventions, parameterization strategies, coverage goals (aim for 80%), and language-specific patterns.

Step 2: Invoke the Test Generator

Start by calling the code-testing-generator agent with your test generation request:

Generate unit tests for [path or description of what to test], following the [unit-test-generation.prompt.md](unit-test-generation.prompt.md) guidelines

The Test Generator will manage the entire pipeline automatically.

Step 3: Research Phase (Automatic)

The code-testing-researcher agent analyzes your codebase to understand:

  • Language & Framework: Detects C#, TypeScript, Python, Go, Rust, Java, etc.
  • Testing Framework: Identifies MSTest, xUnit, Jest, pytest, go test, etc.
  • Project Structure: Maps source files, existing tests, and dependencies
  • Build Commands: Discovers how to build and test the project

Output: .testagent/research.md

Step 4: Planning Phase (Automatic)

The code-testing-planner agent creates a structured implementation plan:

  • Groups files into logical phases (2-5 phases typical)
  • Prioritizes by complexity and dependencies
  • Specifies test cases for each file
  • Defines success criteria per phase

Output: .testagent/plan.md

Step 5: Implementation Phase (Automatic)

The code-testing-implementer agent executes each phase sequentially:

  1. Read source files to understand the API
  2. Write test files following project patterns
  3. Build using the code-testing-builder sub-agent to verify compilation
  4. Test using the code-testing-tester sub-agent to verify tests pass
  5. Fix using the code-testing-fixer sub-agent if errors occur
  6. Lint using the code-testing-linter sub-agent for code formatting
Each phase completes before the next begins, ensuring incremental progress.
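The build → test → fix cycle inside each phase amounts to a bounded retry loop. A minimal sketch, where the four callables are hypothetical stand-ins for the builder, tester, fixer, and linter sub-agents:

```python
def run_phase(build, test, fix, lint, max_attempts: int = 3) -> bool:
    """Run one phase: build and test, invoking the fixer on failure.

    build/test return (ok, errors); fix takes the error text.
    """
    for attempt in range(max_attempts):
        ok, errors = build()
        if ok:
            ok, errors = test()
        if ok:
            lint()        # format only once the phase is green
            return True
        fix(errors)       # hand compiler/test errors to the fixer
    return False          # surface the failure instead of looping forever

# Toy usage: a "build" that fails once, then succeeds after the fix.
state = {"fixed": False}
result = run_phase(
    build=lambda: (state["fixed"], "CS0103: name not found"),
    test=lambda: (True, ""),
    fix=lambda errs: state.update(fixed=True),
    lint=lambda: None,
)
```

Bounding the attempts matters: an unbounded fix loop can oscillate between two wrong versions of a test, so after `max_attempts` the failure is reported rather than retried.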

Coverage Types

  • Happy path: Valid inputs produce expected outputs
  • Edge cases: Empty values, boundaries, special characters
  • Error cases: Invalid inputs, null handling, exceptions
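As a concrete illustration, here is how the three coverage types might look for a hypothetical slugify helper, written with the standard library's unittest (both the function and the tests are illustrative, not output of the skill):

```python
import unittest

def slugify(text: str) -> str:
    """Hypothetical function under test: lowercase, hyphen-separated."""
    if text is None:
        raise TypeError("text must not be None")
    return "-".join(text.lower().split())

class SlugifyTests(unittest.TestCase):
    def test_happy_path(self):
        # Valid input produces the expected output.
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_edge_case_empty_string(self):
        # Boundary: empty input yields an empty slug.
        self.assertEqual(slugify(""), "")

    def test_error_case_none(self):
        # Invalid input raises rather than returning garbage.
        with self.assertRaises(TypeError):
            slugify(None)
```

Run with `python -m unittest` from the test file's directory; the same three-bucket structure carries over to xUnit, Jest, pytest, or any other framework the researcher detects.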

State Management

All pipeline state is stored in the .testagent/ folder:

| File | Purpose |
| --- | --- |
| .testagent/research.md | Codebase analysis results |
| .testagent/plan.md | Phased implementation plan |
| .testagent/status.md | Progress tracking (optional) |

Examples

Example 1: Full Project Testing

Generate unit tests for my Calculator project at C:\src\Calculator

Example 2: Specific File Testing

Generate unit tests for src/services/UserService.ts

Example 3: Targeted Coverage

Add tests for the authentication module with focus on edge cases

Agent Reference

| Agent | Purpose |
| --- | --- |
| code-testing-generator | Coordinates pipeline |
| code-testing-researcher | Analyzes codebase |
| code-testing-planner | Creates test plan |
| code-testing-implementer | Writes test files |
| code-testing-builder | Compiles code |
| code-testing-tester | Runs tests |
| code-testing-fixer | Fixes errors |
| code-testing-linter | Formats code |

Requirements

  • Project must have a build/test system configured
  • Testing framework should be installed (or installable)
  • VS Code with GitHub Copilot extension

Troubleshooting

Tests don't compile

The code-testing-fixer agent will attempt to resolve compilation errors. Check .testagent/plan.md for the expected test structure, and check the extensions/ folder for language-specific error code references (e.g., extensions/dotnet.md for .NET).

Tests fail

Most failures in generated tests are caused by wrong expected values in assertions, not production code bugs:

  1. Read the actual test output
  2. Read the production code to understand correct behavior
  3. Fix the assertion, not the production code
  4. Never mark tests [Ignore] or [Skip] just to make them pass
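To illustrate step 3, suppose a generated test asserted the wrong expected value for a (hypothetical) rounding helper. The fix belongs in the test's assertion, not in the production code:

```python
import math

def half_up(x: float) -> int:
    """Production code (hypothetical): rounds .5 away from zero."""
    return math.floor(x + 0.5)

# The generated test assumed banker's rounding and failed:
#   assert half_up(2.5) == 2    # wrong expected value
# Reading the production code shows that .5 rounds up by design,
# so the assertion is corrected to match the actual behavior:
assert half_up(2.5) == 3
assert half_up(2.4) == 2
```

The production function stays untouched; only the test's expectation was brought in line with the documented behavior.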

Wrong testing framework detected

Specify your preferred framework in the initial request: "Generate Jest tests for..."

Environment-dependent tests fail

Tests that depend on external services, network endpoints, specific ports, or precise timing will fail in CI environments. Focus on unit tests with mocked dependencies instead.
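For example, instead of hitting a real endpoint, a unit test can stub the transport entirely. A sketch using the standard library's unittest.mock, where fetch_status and its client parameter are hypothetical:

```python
from unittest.mock import Mock

def fetch_status(client) -> str:
    """Hypothetical code under test: client.get is the only external call."""
    response = client.get("/health")
    return "ok" if response.status_code == 200 else "down"

# Unit test: no network, no ports, no timing -- the client is a mock.
client = Mock()
client.get.return_value = Mock(status_code=200)
assert fetch_status(client) == "ok"
client.get.assert_called_once_with("/health")
```

Because the dependency is injected and mocked, the test's outcome depends only on the logic in fetch_status, so it behaves identically on a laptop and in CI.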

Build fails on full solution

During phase implementation, build only the specific test project for speed. After all phases, run a full non-incremental workspace build to catch cross-project errors.