claude-skill-registry · benchmark-functions
Measure function performance and compare implementations. Use when optimizing critical code paths.
install
source · Clone the upstream repo
git clone https://github.com/majiayu000/claude-skill-registry
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/majiayu000/claude-skill-registry "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/data/benchmark-functions" ~/.claude/skills/majiayu000-claude-skill-registry-benchmark-functions && rm -rf "$T"
manifest:
skills/data/benchmark-functions/SKILL.md
Benchmark Functions
Systematically measure function execution time, memory usage, and performance characteristics to identify optimization opportunities.
When to Use
- Comparing different algorithm implementations
- Measuring performance before/after optimization
- Profiling SIMD vs scalar implementations
- Establishing performance baselines for CI/CD
Quick Reference
# Python benchmarking with timeit
python3 -m timeit -s 'import module' 'module.function(args)' -n 1000 -r 5

# Mojo benchmarking with built-in timing
mojo run benchmark_script.mojo
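The same measurement can also be scripted through Python's `timeit` library API. A minimal sketch, where `module.function(args)` is the same placeholder as in the command above, not a real import:

```python
import timeit

# Library form of the CLI call above; replace stmt/setup with real code.
times = timeit.repeat(
    stmt="module.function(args)",
    setup="import module",
    number=1000,   # calls per run (CLI -n)
    repeat=5,      # number of runs (CLI -r)
)
# Each entry in `times` is the total seconds for one run of 1000 calls.
print(f"best of 5 runs: {min(times) / 1000 * 1e6:.2f} us per call")
```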
Workflow
- Set up benchmarks: Create timing harness with warm-up iterations
- Run measurements: Execute function multiple times, record timing
- Collect statistics: Calculate mean, median, std deviation
- Compare baselines: Compare against previous implementations
- Identify bottlenecks: Pinpoint functions needing optimization (a harness covering steps 1-4 is sketched below)
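A minimal Python sketch of steps 1-4, assuming a synchronous function under test; `benchmark` and `compare` are illustrative names, not part of this skill:

```python
import statistics
import time

def benchmark(fn, *args, warmup=10, iterations=1000):
    """Warm up, then record one wall-clock sample per call."""
    for _ in range(warmup):
        fn(*args)  # warm-up iterations prime caches and JITs
    samples = []
    for _ in range(iterations):
        start = time.perf_counter()
        fn(*args)
        samples.append(time.perf_counter() - start)
    return {
        "mean": statistics.mean(samples),
        "median": statistics.median(samples),
        "stdev": statistics.stdev(samples),
        "min": min(samples),
        "max": max(samples),
    }

def compare(baseline_fn, candidate_fn, *args):
    """Benchmark both implementations; return stats and median improvement %."""
    base = benchmark(baseline_fn, *args)
    cand = benchmark(candidate_fn, *args)
    improvement = (base["median"] - cand["median"]) / base["median"] * 100
    return base, cand, improvement
```

The comparison uses the median rather than the mean because the median is less sensitive to outlier samples caused by OS scheduling noise.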
Output Format
Benchmark report:
- Function name and parameters tested
- Execution time statistics (mean, median, min, max)
- Memory usage (if applicable)
- Comparison to baseline (improvement percentage)
- Iterations and sample size used
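A hypothetical formatter showing how the stats dictionary from the harness above could be rendered into this report; the field names and layout are illustrative:

```python
def format_report(name, stats, iterations, baseline_median=None):
    """Render the report fields listed above; times are in seconds."""
    lines = [
        f"Function: {name}",
        f"Iterations: {iterations}",
        f"Mean:   {stats['mean'] * 1e6:.2f} us",
        f"Median: {stats['median'] * 1e6:.2f} us",
        f"Min:    {stats['min'] * 1e6:.2f} us",
        f"Max:    {stats['max'] * 1e6:.2f} us",
    ]
    if baseline_median is not None:
        pct = (baseline_median - stats["median"]) / baseline_median * 100
        lines.append(f"Improvement vs baseline: {pct:+.1f}%")
    return "\n".join(lines)
```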
References
- See profile-code skill for detailed performance profiling
- See suggest-optimizations skill for improvement strategies
- See CLAUDE.md > Performance for Mojo optimization guidelines