# .NET Trace Collect

Guide developers through capturing diagnostic artifacts to diagnose production .NET performance issues. Use when the user needs help choosing diagnostic tools, collecting performance data, or understanding tool trade-offs across different environments (Windows/Linux, .NET Framework/modern .NET, container/non-container).

This skill lives at `plugins/dotnet-diag/skills/dotnet-trace-collect/SKILL.md` in the https://github.com/dotnet/skills repository. To install it:

```shell
T=$(mktemp -d) && git clone --depth=1 https://github.com/dotnet/skills "$T" && mkdir -p ~/.claude/skills && cp -r "$T/plugins/dotnet-diag/skills/dotnet-trace-collect" ~/.claude/skills/dotnet-skills-dotnet-trace-collect && rm -rf "$T"
```
This skill helps developers diagnose production performance issues by recommending the right diagnostic tools for their environment, guiding data collection, and suggesting analysis approaches. It does not analyze code for anti-patterns or perform the analysis itself.
## When to Use
- A developer needs to investigate a production performance issue (high CPU, memory leak, slow requests, excessive GC, networking errors, etc.)
- Choosing the right diagnostic tool for a specific runtime, OS, or deployment topology
- Setting up and running diagnostic tool commands for data collection
- Understanding trade-offs between available tools (e.g. PerfView vs `dotnet-trace`)
- Collecting diagnostics from containerized or Kubernetes workloads
## When Not to Use
- Reviewing source code for performance anti-patterns (use a code review skill instead)
- Benchmarking during development (e.g. BenchmarkDotNet setup)
- Analyzing collected trace or dump files (this skill recommends tools for analysis, but does not perform it)
## Inputs
| Input | Required | Description |
|---|---|---|
| Symptom | Yes | What the developer is observing (high CPU, memory growth, slow requests, hangs, excessive GC, HTTP 5xx errors, networking timeouts, connection failures, assembly loading failures, etc.) |
| Runtime | Yes | .NET Framework or modern .NET (and version, especially whether .NET 10+) |
| OS | Yes | Windows or Linux |
| Deployment | Yes | Non-container, container, or Kubernetes |
| Admin privileges | Recommended | Whether the developer has admin/root access on the target machine |
| Repro characteristics | Recommended | Whether the issue is easy to reproduce or requires a long time to manifest |
## Workflow

### Step 1: Understand the environment
Determine or ask the developer to clarify:
- Symptom: What they are observing (high CPU, memory leak, slow requests, hangs, excessive GC, HTTP 5xx errors, networking timeouts, connection failures, assembly loading failures, etc.)
- Runtime: .NET Framework or modern .NET? If modern .NET, which version? (Especially whether .NET 10 or later.)
- OS: Windows or Linux?
- Deployment: Running directly on the host, in a container, or in Kubernetes?
- Admin privileges: Do they have admin/root access on the target machine or container?
- Repro characteristics: Does the issue reproduce quickly, or does it take a long time to manifest?
- Workload context: Determine or ask the user if you are running in the context of the workload (i.e., on the same machine or connected to the same environment where the issue is occurring). If so, you can run diagnostic commands directly on their behalf. If not, provide the commands as guidance for the user to run themselves.
Use this information to select the right tool in Step 2.
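The environment facts above can be gathered quickly from a shell on a Linux/macOS target (on Windows, `systeminfo` and `dotnet --info` cover the same ground). A minimal sketch; the container check is a heuristic, not definitive:

```shell
# Collect the Step 1 facts: OS, kernel, privilege level, container hint, runtimes.
OS=$(uname -s)
KERNEL=$(uname -r)
[ "$(id -u)" = "0" ] && PRIV=root || PRIV=non-root
# Heuristic: a /.dockerenv file or 'kubepods' in PID 1's cgroup suggests a container.
if [ -f /.dockerenv ] || grep -q kubepods /proc/1/cgroup 2>/dev/null; then
  CONTAINER=likely
else
  CONTAINER=unlikely
fi
RUNTIMES=$(command -v dotnet >/dev/null 2>&1 && dotnet --list-runtimes || echo "dotnet CLI not found")
printf 'os: %s\nkernel: %s\nprivilege: %s\ncontainer: %s\nruntimes: %s\n' \
  "$OS" "$KERNEL" "$PRIV" "$CONTAINER" "$RUNTIMES"
```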
### Step 2: Recommend diagnostic tools

Select tools based on the environment using the priority rules below. Once a tool is selected, load the corresponding reference file for detailed command-line usage.
#### Tool reference lookup
| Environment | Reference file(s) |
|---|---|
| Windows + modern .NET + admin | |
| Windows + modern .NET, no admin | |
| Windows + .NET Framework | |
| Linux + .NET 10+ + root | |
| Linux + pre-.NET 10 | |
| Linux + native stacks needed | |
| Container/K8s (console access) | (or ) |
| Container/K8s (no console) | |
#### Quick decision matrix (first-pass triage)

| Environment | Preferred tool | Fallback / Notes |
|---|---|---|
| Windows + modern .NET + admin | PerfView | If admin is unavailable, use `dotnet-trace` |
| Windows + .NET Framework + admin | PerfView | Without admin, there is no trace fallback; for hangs/memory leaks, provide dump commands directly (`procdump -ma <PID>` or Task Manager) since the `dump-collect` skill does not support .NET Framework |
| Linux + .NET 10+ + root | `dotnet-trace collect-linux` | Use `dotnet-trace` if root or kernel prerequisites are not met |
| Linux + pre-.NET 10 | `dotnet-trace` | Add `perfcollect` when native stacks are needed (requires root) |
| Linux container/Kubernetes | Console tools if in workload context; `dotnet-monitor` if no console access | See Linux Container / Kubernetes section for details |
#### Windows (non-container, modern .NET)

- PerfView (preferred) — produces richer ETW-based data; requires admin privileges. For slow requests, add `/ThreadTime` to capture thread-level wait and block detail.
- `dotnet-trace` — fallback when admin privileges are not available.
- For long-running repros: use PerfView with a `/StopOn` trigger that fires on the symptom you want to capture (e.g., `/StopOnPerfCounter`, `/StopOnGCEvent`, `/StopOnException`) and a circular buffer (`/CircularMB` + `/BufferSizeMB`). Critical: the stop trigger must fire on the interesting event, not the recovery. The circular buffer continuously overwrites old data, so if you trigger on recovery, the buffer may have already overwritten the interesting behavior by the time collection stops. Only add `/StartOn` if the start event is known to precede the stop event. For slow requests, do not include a stop trigger by default — let the user design one based on their specific scenario.
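The long-running-repro setup above can be sketched as a single command line. This is an illustrative sketch, not a definitive recipe: the counter category, instance name (`MyApp`), and threshold are placeholders to adapt, and the `/StopOnPerfCounter` value follows PerfView's documented `category:counter:instance op value` form.

```shell
# Build the Windows command string to run on the target machine.
# /BufferSizeMB + /CircularMB keep the trace bounded; the stop trigger
# fires when the symptom (here: a placeholder CPU counter) occurs.
PERFVIEW_CMD='PerfView collect /AcceptEULA /NoGui /BufferSizeMB:1024 /CircularMB:2048 "/StopOnPerfCounter:Process:% Processor Time:MyApp>90"'
echo "$PERFVIEW_CMD"
```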
#### Windows containers

- PerfView — most Windows containers (including Kubernetes on Windows) use process-isolation by default. Collect from the host with `/EnableEventsInContainers`. After collection, you have two options:
  - Analyze locally while the container is still running — PerfView can reach into the live container to resolve symbols, so you can open the trace immediately on the host machine.
  - Analyze off-machine — before the container shuts down, copy the `.etl.zip` into the container and run `PerfViewCollect merge /ImageIDsOnly` inside it to embed symbol information. Then copy the merged trace out. Without this merge step, symbols for binaries inside the container will be unresolvable on other machines.

  For the less common Hyper-V containers, collect inside the container directly. See references/perfview.md for detailed commands.
- `dotnet-monitor`, `dotnet-trace` — inside the container if the tools are installed in the image. For dumps, invoke the `dump-collect` skill.
#### Windows (.NET Framework)

- PerfView — the primary diagnostic tool for .NET Framework on Windows. Requires admin.
- Same trigger guidance for long repros: use `/StopOn` triggers that fire on the symptom (e.g., `/StopOnPerfCounter`, `/StopOnGCEvent`, `/StopOnException`) with `/CircularMB` + `/BufferSizeMB`.
- Without admin: PerfView requires admin, and there are no alternative trace tools for .NET Framework. Process dumps can still be captured without admin — provide dump commands directly (e.g., `procdump -ma <PID>` or Task Manager) since the `dump-collect` skill does not support .NET Framework. Dumps can help diagnose hangs and memory leaks. However, for high CPU, slow requests, and excessive GC, there is no way to investigate on .NET Framework without admin access. Advise the user to obtain admin privileges.
#### Linux (non-container, .NET 10+)

- `dotnet-trace collect-linux` (preferred) — uses `perf_events` for richer traces including native call stacks and kernel events. Captures machine-wide by default (no PID required). Requires root and kernel >= 6.4.
- `dotnet-trace` — fallback when root privileges are not available or kernel requirements are not met. Managed stacks only.

#### Linux (non-container, pre-.NET 10)

- `dotnet-trace` (preferred) — managed trace collection; no admin required.
- `perfcollect` — when native call stacks are needed (requires admin/root).
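The root-and-kernel check for choosing between the two Linux commands can be scripted. A minimal sketch, assuming .NET 10+ is installed (so `collect-linux` exists) and using a placeholder PID:

```shell
# Pick the Linux trace command from effective UID and kernel version.
PID=1234                                   # placeholder: verify with dotnet-trace ps
KMAJ=$(uname -r | cut -d. -f1)
KMIN=$(uname -r | cut -d. -f2 | tr -cd '0-9')
# collect-linux needs root and kernel >= 6.4 (per the guidance above).
if [ "$(id -u)" = "0" ] && { [ "$KMAJ" -gt 6 ] || { [ "$KMAJ" -eq 6 ] && [ "${KMIN:-0}" -ge 4 ]; }; }; then
  TRACE_CMD="dotnet-trace collect-linux"        # native + managed stacks, machine-wide
else
  TRACE_CMD="dotnet-trace collect -p $PID"      # managed stacks only, no root needed
fi
echo "$TRACE_CMD"
```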
#### Linux Container / Kubernetes

If running in the context of the workload (i.e., you have console access to the container), prefer console-based tools. These are easier to set up than `dotnet-monitor`, which requires authentication configuration and sidecar deployment:

- `dotnet-trace collect-linux` (.NET 10+ with root) — produces the richest traces including native call stacks and kernel events.
- `dotnet-trace` — inside the container if the tool is installed in the image. For dumps, invoke the `dump-collect` skill.
- `perfcollect` — inside the container when native stacks are needed on pre-.NET 10 (requires `SYS_ADMIN` / `--privileged`).

If not running in the workload context (no console access), or if `dotnet-monitor` is already deployed:

- `dotnet-monitor` — designed for containers; runs as a sidecar. No tools needed in the app container. Easiest option when console access is not available.
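When the app image has no diagnostic tools but you do have cluster access, `kubectl debug` can attach an ephemeral container that shares the target's process namespace. A sketch with placeholder pod/container names and image tag:

```shell
# Attach a debug container to the target pod; --target shares the process
# namespace so the app's processes are visible from the debug container.
POD=myapp-7d9f65c4-abcde                   # placeholder pod name
DEBUG_CMD="kubectl debug -it $POD --image=mcr.microsoft.com/dotnet/sdk:8.0 --target=myapp -- /bin/sh"
echo "$DEBUG_CMD"
# Inside the debug container, install the tool and verify the PID first:
echo "dotnet tool install -g dotnet-trace && ~/.dotnet/tools/dotnet-trace ps"
```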
#### Memory dumps

When dumps are needed (memory leaks, hangs), do not provide dump collection commands directly for modern .NET — invoke the `dump-collect` skill instead. The `dump-collect` skill only supports modern .NET (.NET Core 3.0+). For .NET Framework, provide dump collection guidance directly (e.g., `procdump -ma <PID>` or Task Manager). This skill focuses on trace collection only.
#### Memory leaks

- Capture two dumps as memory is increasing (e.g., one early, one after significant growth). Invoke the `dump-collect` skill for dump collection — do not provide dump commands directly. Diff the dumps in PerfView to see which objects have increased — this is the most effective way to identify what is leaking.
- Without admin privileges: Two process dumps can give a sense of what's growing on the heap, but may not be enough to identify the root cause. If dumps aren't sufficient, reproduce the issue in an environment where admin privileges are available to collect richer data (traces).
- Modern .NET on Linux (pre-.NET 10): Recommend two dump captures (invoke the `dump-collect` skill) for heap diff, plus `dotnet-trace` while memory is growing (for allocation tracking). No trigger needed — capture during the growth period. Both together give the best picture.
- Modern .NET 10+ on Linux with admin: Recommend two dump captures (invoke the `dump-collect` skill) for heap diff, plus `dotnet-trace collect-linux` while memory is growing (richer data including native stacks). No trigger needed.
- .NET Framework: Recommend two dumps plus a PerfView trace while memory is growing to see what is being allocated. The `dump-collect` skill does not support .NET Framework, so provide dump commands directly (e.g., `procdump -ma <PID>` or right-click → Create Dump File in Task Manager). No trigger is needed — just capture the trace during the growth period. Do not wait for an `OutOfMemoryException`.
#### Excessive GC

Excessive GC requires a trace to analyze GC events, pause times, and allocation patterns — a dump is not sufficient.

- Windows (PerfView): Use `PerfView collect /GCCollectOnly` to capture GC events.
- Linux (dotnet-trace): Use `dotnet-trace collect -p <PID> --profile gc-verbose`.
- Linux .NET 10+ with root: Use `dotnet-trace collect-linux --profile gc-verbose` for richer data with native stacks.
- Containers: `dotnet-monitor` can capture GC traces via its REST API (`/trace?profile=gc-verbose`).
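Before committing to a full GC trace, a quick look at live runtime counters can confirm the symptom. This is an optional pre-check not covered above; it assumes the separate `dotnet-counters` global tool and uses a placeholder PID:

```shell
# Watch the System.Runtime counter set (GC heap size, time-in-GC, gen sizes)
# to confirm excessive GC before collecting a trace.
PID=1234                                   # placeholder: verify with dotnet-trace ps
COUNTERS_CMD="dotnet-counters monitor -p $PID System.Runtime"
echo "$COUNTERS_CMD"
```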
#### Slow Requests

Slow requests require a thread time trace to see where threads are spending time — waiting on locks, I/O, external calls, etc. Use larger buffers since thread time traces generate more data. For ASP.NET Core applications, also enable the `Microsoft.AspNetCore.Hosting` and `Microsoft-AspNetCore-Server-Kestrel` providers to get server-side request lifecycle timing (when requests arrive, how long they take to process).

- Windows (PerfView): Use `PerfView /ThreadTime collect /BufferSizeMB:1024 /CircularMB:2048`. The `/ThreadTime` argument adds thread-level wait and block detail. For ASP.NET Core, add Kestrel providers: `PerfView /ThreadTime collect /BufferSizeMB:1024 /CircularMB:2048 /Providers:*Microsoft.AspNetCore.Hosting,*Microsoft-AspNetCore-Server-Kestrel`. Do not include a stop trigger by default — let the user design one based on their specific scenario.
- Linux (dotnet-trace): `dotnet-trace` captures thread time data by default — no special arguments needed. Use `dotnet-trace collect -p <PID>`. For ASP.NET Core, add Kestrel providers: `dotnet-trace collect -p <PID> --providers Microsoft.AspNetCore.Hosting,Microsoft-AspNetCore-Server-Kestrel`.
- Linux .NET 10+ with root: Use `dotnet-trace collect-linux --profile thread-time` for richer data with native stacks. For ASP.NET Core, add: `--providers Microsoft.AspNetCore.Hosting,Microsoft-AspNetCore-Server-Kestrel`.
- Containers: `dotnet-monitor` can capture traces via its REST API (`/trace?pid=<PID>&durationSeconds=30`).
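The Linux variant above can be assembled as a sketch, with the provider list spelled out once; the PID is a placeholder to verify first:

```shell
# Thread-time trace plus the ASP.NET Core hosting/Kestrel providers
# named in the guidance above.
PID=1234                                   # placeholder: verify with dotnet-trace ps
ASPNET_PROVIDERS="Microsoft.AspNetCore.Hosting,Microsoft-AspNetCore-Server-Kestrel"
SLOW_REQ_CMD="dotnet-trace collect -p $PID --providers $ASPNET_PROVIDERS"
echo "$SLOW_REQ_CMD"
```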
#### Hangs

- Start with a trace to understand what threads are doing. Use the appropriate trace tool for the environment (PerfView with `/ThreadTime` on Windows, `dotnet-trace` on Linux, `dotnet-trace collect-linux --profile thread-time` on .NET 10+ Linux with root). The trace can reveal:
  - Livelocks (threads spinning without forward progress) — threads appear busy but the application makes no progress.
  - Thread starvation — the ThreadPool is exhausted and queued work items are not being processed. This can look like a deadlock but has a different root cause.
  - Whether there is any forward progress at all — if some threads are making progress, the issue may be a bottleneck rather than a true hang.
- If the trace does not explain the hang, the issue may be a true deadlock (threads waiting on each other in a cycle). In this case, invoke the `dump-collect` skill to collect a process dump — do not provide dump commands directly.
- Analyze the dump with a debugger to inspect thread stacks and identify the lock cycle:
  - Windows: Visual Studio or WinDbg with the SOS debugger extension.
  - Linux: `lldb` with the SOS debugger extension.
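On Linux, getting SOS into `lldb` takes a one-time setup step. A sketch with placeholder dump and binary paths (the SOS commands named in the comments, `clrthreads` and `clrstack -all`, are standard SOS commands):

```shell
# One-time: install the SOS plugin and register it with lldb.
echo "dotnet tool install -g dotnet-sos && dotnet-sos install"
# Open the core dump alongside the app binary (paths are placeholders).
LLDB_CMD="lldb --core core.1234 ./myapp"
echo "$LLDB_CMD"
# Inside lldb: 'clrthreads' lists managed threads; 'clrstack -all' prints
# every managed stack -- look for threads waiting on each other in a cycle.
```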
#### Networking Issues

Networking issues (HTTP 5xx errors from downstream services, request timeouts, connection failures, DNS resolution failures, TLS handshake failures, connection pool exhaustion) require both a thread-time trace and networking event providers. The thread-time trace shows where threads are blocked (slow downstream calls, thread starvation), while the networking events show the request lifecycle — which requests failed, what status codes came back, how long DNS resolution and TLS handshakes took, and how long requests waited for a connection from the pool.

For .NET Framework, `PerfView /ThreadTime` already collects the relevant networking events (from the System.Net ETW provider) — no additional providers are needed.

For modern .NET, you must explicitly enable the `System.Net.*` EventSource providers:

| Provider | What it covers |
|---|---|
| `System.Net.Http` | HttpClient/SocketsHttpHandler — request lifecycle, HTTP status codes, connection pool |
| `System.Net.NameResolution` | DNS lookups (start/stop, duration) |
| `System.Net.Security` | TLS/SSL handshakes (SslStream) |
| `System.Net.Sockets` | Low-level socket connect/disconnect |

Key events from `System.Net.Http`: `RequestStart` (scheme, host, port, path), `RequestStop` (statusCode — -1 if no response was received), `RequestFailed` (exception message for timeouts, connection refused, etc.), `RequestLeftQueue` (time waiting for a connection from the pool — indicates connection pool exhaustion), `ConnectionEstablished`, `ConnectionClosed`.

Collect a thread-time trace with networking providers enabled (modern .NET only — .NET Framework needs only `PerfView /ThreadTime`):

- Windows (PerfView): Use `PerfView /ThreadTime collect /BufferSizeMB:1024 /CircularMB:2048 /Providers:*System.Net.Http,*System.Net.NameResolution,*System.Net.Security,*System.Net.Sockets`. For .NET Framework, omit the `/Providers` flag — `/ThreadTime` already includes the networking events. The thread-time trace shows where threads are blocked while the networking events show what requests are failing and why.
- Linux (dotnet-trace): `dotnet-trace` captures thread time data by default, but specifying `--providers` overrides the defaults so you must also include `--profile`: `dotnet-trace collect -p <PID> --profile dotnet-common,dotnet-sampled-thread-time --providers System.Net.Http,System.Net.NameResolution,System.Net.Security,System.Net.Sockets`.
- Linux .NET 10+ with root: Use `dotnet-trace collect-linux --profile dotnet-common,cpu-sampling,thread-time --providers System.Net.Http,System.Net.NameResolution,System.Net.Security,System.Net.Sockets`.
- Containers: `dotnet-monitor` can capture traces with custom providers via its REST API.
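The Linux command above is long enough to be worth assembling in pieces. A sketch with a placeholder PID; the profile names are taken from the guidance above, and the profile is restated explicitly because `--providers` overrides the defaults:

```shell
# Thread-time trace with all four System.Net.* providers enabled.
PID=1234                                   # placeholder: verify with dotnet-trace ps
NET_PROVIDERS="System.Net.Http,System.Net.NameResolution,System.Net.Security,System.Net.Sockets"
NET_TRACE_CMD="dotnet-trace collect -p $PID --profile dotnet-common,dotnet-sampled-thread-time --providers $NET_PROVIDERS"
echo "$NET_TRACE_CMD"
```

If the symptom is clearly HTTP-level (e.g. 5xx status codes), `System.Net.Http` alone keeps overhead lower.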
#### Assembly Loading Issues

For modern .NET, assembly loading issues (`FileNotFoundException`, `FileLoadException`, `ReflectionTypeLoadException`, version conflicts, duplicate assembly loads across AssemblyLoadContexts) require collecting assembly loader binder events from the `Microsoft-Windows-DotNETRuntime` provider with the Loader keyword (0x4). These events trace every step of the runtime's assembly resolution algorithm — which paths were probed, which AssemblyLoadContext handled the load, whether the load succeeded or failed, and why. For .NET Framework, the same provider and keyword work for ETW-based collection; additionally, the Fusion Log Viewer (`fuslogvw.exe`) can diagnose assembly binding failures without requiring a trace.

The provider specification is `Microsoft-Windows-DotNETRuntime:0x4:4` (provider name, AssemblyLoader keyword, Informational verbosity).

- Windows (PerfView): A default PerfView trace already includes binder events - simply run `PerfView collect` with no extra providers. For a smaller trace file, use `PerfView collect /ClrEvents:Default-Profile`, which removes the most verbose default events while keeping the events necessary for diagnosing assembly loading issues.
- Linux / cross-platform (dotnet-trace): Use `dotnet-trace collect --clrevents assemblyloader -- <path-to-built-exe>` to launch and trace the process, or `dotnet-trace collect --clrevents assemblyloader -p <PID>` to attach to a running process.
- Linux .NET 10+ with root: Use `dotnet-trace collect-linux --clrevents assemblyloader`.
- Containers: `dotnet-monitor` can capture traces with the loader provider via its REST API.

For short-lived processes that fail on startup (common with assembly loading issues), prefer the `dotnet-trace` launch form (`-- <path-to-built-exe>`) over attaching by PID, since the process may exit before you can attach.
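The launch form for a startup failure can be sketched as follows; the app path is a placeholder:

```shell
# Launch-and-trace form: dotnet-trace starts the process itself, so binder
# events are captured even if the process exits immediately.
APP=./bin/Release/net8.0/MyApp             # placeholder app path
LOADER_CMD="dotnet-trace collect --clrevents assemblyloader -- $APP"
echo "$LOADER_CMD"
```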
Explain the trade-offs when recommending a tool. For example:

- PerfView gives richer data but needs admin; runs on Windows including Windows containers.
- `dotnet-trace` works cross-platform without admin but captures less system-level detail.
- `perfcollect` captures native call stacks but needs admin/root.
- `dotnet-monitor` is the best option for containers/K8s when console access is not available, but requires sidecar deployment and authentication configuration.
### Step 3: Guide data collection

Provide the specific commands for the recommended tool. Load the appropriate reference file from the tool reference lookup table for detailed command-line examples.

Key guidance to include:

- Installation: How to install the tool if it is not already available (e.g. `dotnet tool install -g dotnet-trace`). When recommending multiple tools, provide installation and usage instructions for each one — do not mention a tool without showing how to install and use it.
- PID discovery (required before any `-p <PID>` command): Verify the target process first (for example: `dotnet-trace ps`, `curl <monitor-endpoint>/processes`, or `ps` inside a container). If the app is expected to be PID 1 in a container, still verify before collecting.
- Collection command: The exact command to run, including relevant providers, output format, and duration.
- Container considerations:
  - Collecting from inside the container: ensure the tool is installed in the image or use `kubectl cp` to copy it in.
  - Collecting from outside the container: use `dotnet-monitor` as a sidecar with a shared diagnostic port (Unix domain socket in `/tmp`).
  - Kubernetes: `dotnet-monitor` as a sidecar container, or `kubectl debug` for ephemeral debug containers.
- Long-running repros (Windows/PerfView): show how to use trigger arguments and circular buffer settings.
- Output location: Where the collected file will be saved and how to copy it off the target for analysis.
- Artifact handoff checklist: Include runtime version, OS/kernel, container image tag or build SHA, PID/process name, UTC collection start/end timestamps, exact command used, and final artifact path when handing traces to someone else for analysis.
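The handoff checklist can be captured mechanically next to the trace file. A sketch in which the artifact name, PID, and recorded command are placeholders:

```shell
# Write a manifest alongside the collected trace so the analyst has the
# runtime/OS/PID/command context without asking.
ARTIFACT=trace.nettrace                    # placeholder artifact path
{
  echo "runtime:   $(command -v dotnet >/dev/null 2>&1 && dotnet --version || echo unknown)"
  echo "os:        $(uname -sr)"
  echo "pid:       1234"                   # placeholder PID that was traced
  echo "utc-start: $(date -u +%Y-%m-%dT%H:%M:%SZ)"
  echo "command:   dotnet-trace collect -p 1234"
  echo "artifact:  $ARTIFACT"
} > "$ARTIFACT.manifest.txt"
cat "$ARTIFACT.manifest.txt"
```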
### Step 4: Recommend analysis approach

After data is collected, recommend the appropriate tool for analysis. Do not perform the analysis — just point the developer to the right tool and documentation.

| Collected Data | Analysis Tool | Notes |
|---|---|---|
| `.nettrace` file | PerfView (Windows), Speedscope (web) | PerfView gives the richest view on Windows |
| `.etl` / `.etl.zip` file | PerfView | ETW traces from PerfView or perfcollect |
| `trace.zip` from perfcollect | PerfView (Windows) | Copy the file to a Windows machine and open with PerfView |
## Validation
- The recommended tool is compatible with the developer's runtime, OS, and deployment topology
- The collection command runs without errors
- The output file is generated in the expected location
- The developer knows which analysis tool to use for the collected data
## Common Pitfalls

| Pitfall | Solution |
|---|---|
| Using `dotnet-trace` on .NET Framework | `dotnet-trace` only works with modern .NET (.NET Core 3.0+). Use PerfView for .NET Framework. |
| PerfView without admin privileges | PerfView requires admin for ETW tracing. Fall back to `dotnet-trace` if admin is not available. |
| `perfcollect` in container without `SYS_ADMIN` | Containers drop `SYS_ADMIN` by default. Run with `--privileged` or add the `SYS_ADMIN` capability, or fall back to `dotnet-trace`. |
| Huge trace files from long repros | On Windows, use PerfView triggers that fire on the symptom you want to capture (e.g., `/StopOnPerfCounter`, `/StopOnGCEvent`, `/StopOnException`) with `/CircularMB` and `/BufferSizeMB`. Never trigger on recovery — the circular buffer continuously overwrites old data, so the interesting behavior may be lost by the time collection stops. |
| Diagnostic port not accessible in container | Mount `/tmp` as a shared volume between the app container and sidecar for the diagnostic Unix domain socket. |
| Forgetting to install tools in container image | Add the tool install (e.g. `dotnet tool install -g dotnet-trace`) to your Dockerfile, or use `dotnet-monitor` as a sidecar to avoid modifying the app image. |
| Exposing `dotnet-monitor` with `--no-auth` in production | Keep auth enabled, bind to localhost, and use `kubectl port-forward` for access. Use `--no-auth` only for short-lived isolated debugging. |
| Collecting only CPU/thread-time trace for networking issues | CPU and thread-time traces alone do not show HTTP status codes, DNS timing, or connection pool behavior. Add the networking providers (`System.Net.Http`, `System.Net.NameResolution`, `System.Net.Security`, `System.Net.Sockets`) alongside the thread-time trace. |
| Enabling all networking providers when only one is needed | Each networking provider adds overhead. If the issue is clearly HTTP-level (5xx status codes), `System.Net.Http` alone may be sufficient. Add DNS, TLS, and socket providers when the root cause is unclear. |