Claude-skill-registry libperf
Install
Source · Clone the upstream repo:

```shell
git clone https://github.com/majiayu000/claude-skill-registry
```

Claude Code · Install into ~/.claude/skills/:

```shell
T=$(mktemp -d) \
  && git clone --depth=1 https://github.com/majiayu000/claude-skill-registry "$T" \
  && mkdir -p ~/.claude/skills \
  && cp -r "$T/skills/data/libperf" ~/.claude/skills/majiayu000-claude-skill-registry-libperf \
  && rm -rf "$T"
```
Manifest: skills/data/libperf/SKILL.md
libperf Skill
When to Use
- Writing performance tests for critical code paths
- Benchmarking function execution time and memory
- Detecting performance regressions in CI
- Validating scaling characteristics (O(n) vs O(n²))
Key Concepts
- benchmark: runs a function repeatedly and collects timing/memory statistics.
- validateDuration / validateMemory: assert that benchmark results meet performance requirements.
- ScalingAnalyzer: analyzes how performance scales with input size.
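To make the benchmark concept concrete, here is a minimal, self-contained sketch of what a benchmark-style helper does. This is illustrative only, not the @copilot-ld/libperf implementation; the function name and the stats fields are assumptions for the example:

```javascript
// Conceptual sketch: run the function N times, record per-iteration
// wall time, and reduce the samples to summary statistics.
// Uses Node's global `performance` (Node >= 16); no dependencies.
async function benchmarkSketch(fn, { iterations = 100 } = {}) {
  const samples = [];
  for (let i = 0; i < iterations; i++) {
    const start = performance.now();
    await fn();
    samples.push(performance.now() - start);
  }
  const total = samples.reduce((a, b) => a + b, 0);
  return {
    iterations,
    mean: total / iterations,
    min: Math.min(...samples),
    max: Math.max(...samples),
  };
}

// Usage: time a trivial async workload for 10 iterations.
benchmarkSketch(async () => Math.sqrt(42), { iterations: 10 })
  .then((r) => console.log(`mean ${r.mean.toFixed(3)}ms over ${r.iterations} runs`));
```

A validation helper in the spirit of validateDuration would then simply compare one of these fields against a threshold and throw on failure.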
Usage Patterns
Pattern 1: Basic benchmark
```javascript
import { benchmark, validateDuration } from "@copilot-ld/libperf";

const result = await benchmark(
  async () => {
    await myFunction(input);
  },
  { iterations: 100 },
);

validateDuration(result, 50); // Assert max 50ms
```
Pattern 2: Scaling analysis
```javascript
import { ScalingAnalyzer } from "@copilot-ld/libperf";

const analyzer = new ScalingAnalyzer();
const scaling = await analyzer.analyze(fn, [100, 1000, 10000]);
// scaling.type: "linear" | "sublinear" | "superlinear"
```
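The skill doc does not say how the classification is computed; one plausible, self-contained approach (purely illustrative, not the actual ScalingAnalyzer internals, with made-up thresholds) is to fit a power-law exponent to measured runtimes:

```javascript
// Conceptual sketch: estimate t ~ n^k from timings at the smallest
// and largest input sizes, then bucket the exponent k.
// Thresholds and warm-up strategy are illustrative assumptions.
function classifyScaling(fn, sizes) {
  const times = sizes.map((n) => {
    fn(n); // warm-up pass so JIT effects skew the timing less
    const start = performance.now();
    fn(n);
    return Math.max(performance.now() - start, 1e-6); // avoid log(0)
  });
  const k =
    Math.log(times[times.length - 1] / times[0]) /
    Math.log(sizes[sizes.length - 1] / sizes[0]);
  if (k < 0.8) return "sublinear";
  if (k <= 1.3) return "linear";
  return "superlinear";
}

// Usage: a quadratic workload will usually classify as "superlinear".
const quadratic = (n) => {
  let acc = 0;
  for (let i = 0; i < n; i++) for (let j = 0; j < n; j++) acc += i ^ j;
  return acc;
};
console.log(classifyScaling(quadratic, [500, 2000, 8000]));
```

Single-shot timings are noisy, so a real analyzer would average many runs per size before fitting.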
Integration
Performance tests use the .perf.js extension and run via npm run test:perf.
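The entry does not show how test:perf is wired up; one typical setup (an assumption, not confirmed by the source) would point an npm script at the .perf.js files, e.g. using Node's built-in test runner:

```json
{
  "scripts": {
    "test:perf": "node --test \"**/*.perf.js\""
  }
}
```

Each .perf.js file would then combine the patterns above: benchmark the code path, then call validateDuration or validateMemory so a regression fails the CI run.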