# dotnet-skills · dotnet-sep
Use Sep for high-performance separated-value parsing and writing in .NET, including delimiter inference, explicit parser/writer options, and low-allocation row/column workflows.
## Install the skill

Source · clone the upstream repo:

```shell
git clone https://github.com/managedcode/dotnet-skills
```

Claude Code · install into `~/.claude/skills/`:

```shell
T=$(mktemp -d) && git clone --depth=1 https://github.com/managedcode/dotnet-skills "$T" && mkdir -p ~/.claude/skills && cp -r "$T/catalog/Libraries/Sep/skills/dotnet-sep" ~/.claude/skills/managedcode-dotnet-skills-dotnet-sep && rm -rf "$T"
```

Manifest: `catalog/Libraries/Sep/skills/dotnet-sep/SKILL.md`
# Sep for .NET separated values

## Trigger on

- delimited data needs are performance-sensitive and allocation-aware
- the project needs explicit control over separator inference, escaping, trimming, and header behavior
- reading/writing large or long-lived file pipelines in ML, ETL, or analytics workloads
- startup/perf tests require AOT/trimming-friendly CSV/TSV processing
## Install

- NuGet:

  ```shell
  dotnet add package Sep
  # or pin a specific version:
  dotnet add package Sep --version <version>
  ```

- XML package reference:

  ```xml
  <PackageReference Include="Sep" Version="x.y.z" />
  ```

- Verify baseline support by checking the package page.
- Source:
## Workflow

```mermaid
flowchart LR
    A["Input source: file/text/stream"] --> B["Sep.Reader or Sep.New(...).Reader"]
    B --> C["SepReaderOptions"]
    C --> D["Rows -> Cols -> Span/Parse"]
    D --> E["Transform and validate"]
    E --> F["SepWriter via SepWriterOptions"]
    F --> G["To file/text output"]
```
- Decide schema shape:
  - header present or no header
  - separator known (`,`, `;`, tab, custom) or inferred from the first row
  - row/column quoting rules
- Build the reader with `Sep.Reader(...)` and explicit options only where needed:
  - `Sep.Reader()` for a separator inferred from a header-like first row
  - `Sep.New(',').Reader(...)` for explicit separator mode
  - `Sep.Reader(o => o with { HasHeader = false })` if the header is absent
- Read rows and map columns as `ReadOnlySpan<char>` first; convert only when needed.
- For output, use `reader.Spec.Writer()` when you need the same separator/culture as the input.
- Control writer behavior with `Sep.Writer(...)` and `SepWriterOptions` (`WriteHeader`, `Escape`, `DisableColCountCheck`).
- Add async only where it brings value and your runtime is C# 13 / .NET 9+ for `await foreach` over async reader rows.
- Use `ParallelEnumerate` for CPU-heavy transformations only after benchmarking a single-threaded baseline.
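The explicit-separator, headerless branch of this workflow can be sketched as follows. This is a minimal sketch assuming the Sep NuGet package (namespace `nietras.SeparatedValues`); the sample data and column layout are illustrative, not part of Sep itself:

```csharp
using nietras.SeparatedValues; // namespace of the Sep package

var data = "1;widget;9.99\n2;gadget;4.50\n";

// Explicit ';' separator, no header: columns are accessed by index.
using var reader = Sep.New(';')
    .Reader(o => o with { HasHeader = false })
    .FromText(data);

var rows = new List<(int Id, string Name, double Price)>();
foreach (var row in reader)
{
    // Span-based Parse<T> in the hot path; ToString only where a string is needed.
    rows.Add((row[0].Parse<int>(), row[1].ToString(), row[2].Parse<double>()));
}
```

Swapping `Sep.New(';')` for plain `Sep.Reader()` switches back to separator inference from the first row.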
## Install and read patterns

```csharp
using var reader = Sep.Reader(o => o with { HasHeader = true, Unescape = true, Trim = SepTrim.All })
    .FromText(data);
foreach (var row in reader)
{
    var id = row["Id"].Parse<int>();
    var name = row[1].ToString();
    // process row
}
```
## Write patterns

```csharp
using var reader = Sep.Reader().FromFile("input.csv");
using var writer = reader.Spec.Writer().ToFile("output.csv");
foreach (var row in reader)
{
    using var writeRow = writer.NewRow(row);
    writeRow["Amount"].Format(row["Amount"].Parse<double>() * 1.2);
}
```
## Async reading and writing

```csharp
var text = "A;B\n1;hello\n";
using var reader = await Sep.Reader().FromTextAsync(text);
await using var writer = reader.Spec.Writer().ToText();
await foreach (var row in reader)
{
    await using var writeRow = writer.NewRow(row);
    var normalized = row["B"].ToString().ToUpperInvariant();
    writeRow["B"].Set(normalized);
}
```
## Common configuration patterns

- Header-driven read
  - default: `HasHeader = true`
  - query by name: `row["ColName"]`
- Headerless pipelines
  - set `HasHeader = false`
  - use index-based access: `row[0]`, `row[1]`
- Round-trip output
  - start the writer with `reader.Spec.Writer()` to preserve the inference and formatting contract
- Speed-first processing
  - keep the default buffer and culture unless profiling proves a need to tune
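A minimal standalone-writer sketch for these patterns, assuming the Sep package; the `;` separator and column names are illustrative, and `WriteHeader` is shown explicitly even though writing a header is the default:

```csharp
using nietras.SeparatedValues; // namespace of the Sep package

// Standalone writer with explicit options, detached from any reader.
using var writer = Sep.New(';')
    .Writer(o => o with { WriteHeader = true })
    .ToText();

// Each NewRow() scope emits one row; column names define the header.
using (var row = writer.NewRow())
{
    row["Id"].Set("1");
    row["Name"].Set("widget");
}

var csv = writer.ToString(); // header row followed by one data row
```

When the output must mirror an input file's separator and culture, prefer `reader.Spec.Writer()` over configuring a writer by hand.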
## Best practices

- Parse to primitive types with `Parse<T>` in hot paths to avoid extra allocations.
- Keep `ToString`/format conversions at the edges (presentation layers), not in inner loops.
- Set `Unescape`, `Trim`, and `DisableQuotesParsing` deliberately and test with realistic samples.
- For large transforms, isolate heavy CPU work after enumeration and then apply `ParallelEnumerate` where appropriate.
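A sketch of the `ParallelEnumerate` advice, assuming the Sep package; the column name and the doubling transform are illustrative, and the plain single-threaded `foreach` should be benchmarked first before adopting this:

```csharp
using nietras.SeparatedValues; // namespace of the Sep package
using System.Linq;

using var reader = Sep.Reader().FromText("Value\n1.5\n2.5\n");

// ParallelEnumerate applies the row selector across worker threads,
// so the selector should be CPU-heavy enough to amortize the overhead.
var doubled = reader
    .ParallelEnumerate(row => row["Value"].Parse<double>() * 2.0)
    .ToArray();
```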
## Limitations to check before production

- `SepReader.Row` and `SepWriter.Row` are `ref struct`s:
  - avoid patterns that store rows beyond their immediate scope
  - materialize values if you truly need random-access, async, or LINQ-style buffering
- `SepReader` row iteration is row-by-row by design; it is intentionally not the same as a classic collection model.
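Because rows are `ref struct`s, anything that must outlive the loop has to be copied out first. A minimal sketch, assuming the Sep package; the tuple shape and sample data are illustrative:

```csharp
using nietras.SeparatedValues; // namespace of the Sep package

var items = new List<(int Id, string Name)>();

using var reader = Sep.Reader().FromText("Id;Name\n1;widget\n2;gadget\n");
foreach (var row in reader)
{
    // Copy the column values out; the row itself must not escape this scope.
    items.Add((row["Id"].Parse<int>(), row["Name"].ToString()));
}

// items is a plain List<T>: safe to buffer, query with LINQ, or pass across awaits.
```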
## Deliver

- an installation and usage guide that is ready to copy into a .NET repo
- practical reader/writer configuration patterns
- clear notes on defaults, tradeoffs, and constraints
## Validate

- `dotnet add package Sep` installs correctly and the project compiles
- one file-read sample and one file-write sample execute successfully
- header/no-header and explicit-separator cases are covered
- at least one validation sample for quoting/unescaping or the async path exists, if required by the task
## Load References

- `references/overview.md` - official links and practical decision notes.