# dotnet-tunit
Write, run, or repair .NET tests that use TUnit. Use when a repo uses `TUnit`, `TUnit.Playwright`, `[Test]`, `[Arguments]`, `ClassDataSource`, `SharedType.PerTestSession`, or Microsoft.Testing.Platform-based execution.
## Install

Source · Clone the upstream repo:

```shell
git clone https://github.com/managedcode/dotnet-skills
```

Claude Code · Install into `~/.claude/skills/`:

```shell
T=$(mktemp -d) && git clone --depth=1 https://github.com/managedcode/dotnet-skills "$T" && mkdir -p ~/.claude/skills && cp -r "$T/catalog/Testing/TUnit/skills/dotnet-tunit" ~/.claude/skills/managedcode-dotnet-skills-dotnet-tunit && rm -rf "$T"
```

Manifest: `catalog/Testing/TUnit/skills/dotnet-tunit/SKILL.md`
## TUnit

### Trigger On

- the repo uses TUnit
- you need to add, run, debug, or repair TUnit tests
- the repo uses Microsoft.Testing.Platform-based test execution
- the repo uses `ClassDataSource<...>(Shared = SharedType.PerTestSession)`, `ParallelLimiter`, `TUnit.Playwright`, or `--treenode-filter`
### Value
- produce a concrete project delta: code, docs, config, tests, CI, or review artifact
- reduce ambiguity through explicit planning, verification, and final validation skills
- leave reusable project context so future tasks are faster and safer
### Do Not Use For
- xUnit projects
- MSTest projects
- generic test strategy with no TUnit-specific mechanics
### Inputs

- the nearest `AGENTS.md`
- the test project file and package references
- the repo's current TUnit execution command
### Quick Start

- Read the nearest `AGENTS.md` and confirm scope and constraints.
- Run this skill's Workflow through the Ralph Loop until outcomes are acceptable.
- Return the Required Result Format with concrete artifacts and verification evidence.
### Workflow
- Confirm the project really uses TUnit and not a different MTP-based framework.
- Read the repo's real `test` command from `AGENTS.md`. If the repo has no explicit command yet, start with `dotnet test PROJECT_OR_SOLUTION`.
- Keep the TUnit execution model intact:
  - tests are source-generated at build time
  - tests run in parallel by default
  - built-in analyzers should remain enabled
- Choose the fixture level deliberately:
  - plain TUnit tests for isolated logic
  - shared AppHost/Aspire fixtures for HTTP, SignalR, SSE, or UI flows
  - `WebApplicationFactory` layered over shared Aspire infra when tests need Host DI services, `IGrainFactory`, or other runtime internals
- Reuse expensive fixtures with `ClassDataSource<Fixture>(Shared = SharedType.PerTestSession)` instead of booting distributed infrastructure per test.
- Fix isolation bugs instead of globally serializing the suite, unless the repo has already documented a justified exception.
- Run the narrowest useful scope first with `dotnet test ... -- --treenode-filter "..."`. Keep TUnit arguments after `--`.
- Capture useful failure evidence: host log dumps, focused console output, coverage files, and Playwright screenshots/HTML for UI tests.
- Use `[Test]`, `[Arguments]`, hooks, and dependencies only when they make the scenario clearer, not because the framework allows it.
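The shared-fixture reuse described above can be sketched as follows. This is a minimal illustration, not the skill's canonical code: `AppFixture`, its start-up logic, and the URL are hypothetical placeholders; the attribute and interface names follow current TUnit conventions (TUnit's package wires up its namespaces via implicit global usings).

```csharp
// Hypothetical expensive fixture, booted once per test session.
// TUnit awaits IAsyncInitializer.InitializeAsync before any test uses the instance.
public sealed class AppFixture : IAsyncInitializer, IAsyncDisposable
{
    public string BaseUrl { get; private set; } = "";

    public Task InitializeAsync()
    {
        // Start the AppHost / distributed infrastructure here, once.
        BaseUrl = "http://localhost:5000"; // placeholder address
        return Task.CompletedTask;
    }

    public ValueTask DisposeAsync() => ValueTask.CompletedTask;
}

// Shared = SharedType.PerTestSession hands every test in the session the
// same AppFixture instance instead of booting one per test.
[ClassDataSource<AppFixture>(Shared = SharedType.PerTestSession)]
public class ApiTests(AppFixture app)
{
    [Test]
    public async Task Fixture_is_available()
    {
        await Assert.That(app.BaseUrl).IsNotNull();
    }
}
```

Other `SharedType` values (e.g. per-class) trade isolation against start-up cost; per-test-session is the cheapest when the fixture is genuinely stateless toward individual tests.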
### Bootstrap When Missing

If TUnit is requested but not configured yet:

- Detect current state:

```shell
rg -n "TUnit|Microsoft\\.Testing\\.Platform" -g '*.csproj' -g 'Directory.Build.*' .
```

- Add the minimal package set to the test project:

```shell
dotnet add TEST_PROJECT.csproj package TUnit
```

- Add `Microsoft.NET.Test.Sdk` only when the repo's chosen TUnit project shape requires it; do not blindly duplicate runner packages.
- Keep the runner model explicit in `AGENTS.md` and CI:
  - record that the repo uses Microsoft.Testing.Platform-compatible execution for this test project
  - record the exact `dotnet test TEST_PROJECT.csproj` command the repo will use
- Add one small executable test using `[Test]`.
- Run `dotnet test TEST_PROJECT.csproj` and return `status: configured` or `status: improved`.
- If the repo intentionally standardizes on xUnit or MSTest, return `status: not_applicable` unless migration is explicitly requested.
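The "one small executable test" mentioned above might look like this (a sketch; the class and method names are placeholders — `[Test]`, `[Arguments]`, and `Assert.That` are TUnit's own APIs, available through the package's implicit global usings):

```csharp
// Smoke tests proving the TUnit pipeline builds, discovers, and runs.
public class SmokeTests
{
    [Test]
    public async Task Framework_executes_a_test()
    {
        await Assert.That(1 + 1).IsEqualTo(2);
    }

    // [Arguments] expands one method into several data-driven test cases,
    // each of which still runs in parallel by default.
    [Test]
    [Arguments(2, 3, 5)]
    [Arguments(-1, 1, 0)]
    public async Task Add_returns_sum(int a, int b, int expected)
    {
        await Assert.That(a + b).IsEqualTo(expected);
    }
}
```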
### Deliver
- TUnit tests that respect source generation and parallel execution
- commands that work in local and CI runs
- framework-specific verification guidance for the repo
- a fixture strategy that matches the actual test scope: logic-only, AppHost/API, Host DI/grains, or Playwright UI
### Validate

- the command matches the repo's TUnit runner style
- focused runs use `--treenode-filter` rather than VSTest-style `--filter`
- shared distributed fixtures use `SharedType.PerTestSession` or an equivalent reuse pattern
- shared state is isolated or explicitly controlled
- built-in TUnit analyzers remain active
- coverage tooling matches Microsoft.Testing.Platform if coverage is enabled
- UI failures capture artifacts, and server-side failures expose enough logs to avoid blind reruns
### Test Harness

```mermaid
flowchart LR
    A["TUnit task"] --> B{"What does the test need?"}
    B -->|"Single component only"| C["Plain TUnit test"]
    B -->|"HTTP / SignalR / resource graph"| D["Shared Aspire/AppHost fixture"]
    B -->|"Host DI / grains / runtime services"| E["Shared Aspire/AppHost fixture + WebApplicationFactory"]
    B -->|"Browser automation"| F["Shared Aspire/AppHost fixture + Playwright"]
    C & D & E & F --> G["Run focused with --treenode-filter"]
    G --> H["Capture logs, artifacts, and coverage"]
```
### Ralph Loop
Use the Ralph Loop for every task, including docs, architecture, testing, and tooling work.
- Plan first (mandatory):
- analyze current state
- define target outcome, constraints, and risks
- write a detailed execution plan
- list final validation skills to run at the end, with order and reason
- Execute one planned step and produce a concrete delta.
- Review the result and capture findings with actionable next fixes.
- Apply fixes in small batches and rerun the relevant checks or review steps.
- Update the plan after each iteration.
- Repeat until outcomes are acceptable or only explicit exceptions remain.
- If a dependency is missing, bootstrap it or return `status: not_applicable` with an explicit reason and fallback path.
### Required Result Format

- `status`: `complete` | `clean` | `improved` | `configured` | `not_applicable` | `blocked`
- `plan`: concise plan and current iteration step
- `actions_taken`: concrete changes made
- `validation_skills`: final skills run, or skipped with reasons
- `verification`: commands, checks, or review evidence summary
- `remaining`: top unresolved items or `none`

For setup-only requests with no execution, return `status: configured` and exact next commands.
### Load References

- references/patterns.md
- references/migration.md
- references/tunit.md
- references/integration-testing.md
### Running Tests

TUnit uses Microsoft.Testing.Platform. Use `--treenode-filter` for filtering (not `--filter`), and keep runner switches after `--`.

```shell
# Run all tests
dotnet test MySolution.sln

# Run one test project
dotnet test tests/MyProject.Tests/MyProject.Tests.csproj

# Filter by class
dotnet test tests/MyProject.Tests/MyProject.Tests.csproj -- --treenode-filter "/*/*/CalculatorTests/*"

# Filter by category
dotnet test tests/MyProject.Tests/MyProject.Tests.csproj -- --treenode-filter "/*/*/*/*[Category=Integration]"

# Coverage on Microsoft.Testing.Platform
dotnet test MySolution.sln -- --coverage --coverage-output coverage.cobertura.xml --coverage-output-format cobertura

# Raw runner help when the repo needs direct TUnit app switches
dotnet run --project tests/MyProject.Tests/MyProject.Tests.csproj -- --help
```

Filter syntax: `/<Assembly>/<Namespace>/<Class>/<Test>` with `*` wildcards. See references/patterns.md for full examples.
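The category filter above assumes tests are tagged; in TUnit that is done with the `Category` attribute (a sketch — the class and test names are placeholders):

```csharp
// Tests tagged this way can be selected with:
//   dotnet test ... -- --treenode-filter "/*/*/*/*[Category=Integration]"
public class CheckoutIntegrationTests
{
    [Test]
    [Category("Integration")]
    public async Task Checkout_completes()
    {
        await Assert.That(true).IsTrue();
    }
}
```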
### Example Requests
- "Run this TUnit project correctly."
- "Fix our TUnit CI command."
- "Add a regression test in TUnit without breaking parallelism."