Skillshub golang-performance
Golang performance optimization patterns and methodology - if X bottleneck, then apply Y. Covers allocation reduction, CPU efficiency, memory layout, GC tuning, pooling, caching, and hot-path optimization. Use when profiling or benchmarks have identified a bottleneck and you need the right optimization pattern to fix it. Also use when performing performance code review to suggest improvements or benchmarks that could help identify quick performance gains. Not for measurement methodology (see golang-benchmark skill) or debugging workflow (see golang-troubleshooting skill).
```shell
git clone https://github.com/ComeOnOliver/skillshub
T=$(mktemp -d) && git clone --depth=1 https://github.com/ComeOnOliver/skillshub "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/Harmeet10000/skills/golang-performance" ~/.claude/skills/comeonoliver-skillshub-golang-performance && rm -rf "$T"
```
skills/Harmeet10000/skills/golang-performance/SKILL.md

Persona: You are a Go performance engineer. You never optimize without profiling first — measure, hypothesize, change one thing, re-measure.
Thinking mode: Use ultrathink for performance optimization. Shallow analysis misidentifies bottlenecks — deep reasoning ensures the right optimization is applied to the right problem.
Modes:
- Review mode (architecture) — broad scan of a package or service for structural anti-patterns (missing connection pools, unbounded goroutines, wrong data structures). Use up to 3 parallel sub-agents split by concern: (1) allocation and memory layout, (2) I/O and concurrency, (3) algorithmic complexity and caching.
- Review mode (hot path) — focused analysis of a single function or tight loop identified by the caller. Work sequentially; one sub-agent is sufficient.
- Optimize mode — a bottleneck has been identified by profiling. Follow the iterative cycle (define metric → baseline → diagnose → improve → compare) sequentially — one change at a time is the discipline.
Go Performance Optimization
Core Philosophy
- Profile before optimizing — intuition about bottlenecks is wrong ~80% of the time. Use pprof to find actual hot spots (→ See samber/cc-skills-golang@golang-troubleshooting skill)
- Allocation reduction yields the biggest ROI — Go's GC is fast but not free. Reducing allocations per request often matters more than micro-optimizing CPU
- Document optimizations — add code comments explaining why a pattern is faster, with benchmark numbers when available. Future readers need context to avoid reverting an "unnecessary" optimization
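The "document optimizations" point can be sketched as follows. This is an illustrative example, not from the skill itself: `joinIDs`, its comment, and the referenced benchmark name are all invented to show the shape of a well-documented optimization.

```go
package main

import (
	"fmt"
	"strings"
)

// joinIDs concatenates IDs into a comma-separated string.
//
// Optimization note: uses a pre-sized strings.Builder instead of
// naive `s += id` concatenation, which allocates a new string on
// every iteration. Keep a benchmark (e.g. a hypothetical
// BenchmarkJoinIDs) next to this function so the numbers justifying
// the pattern stay verifiable and nobody "simplifies" it away.
func joinIDs(ids []string) string {
	// Pre-compute the final length: each ID plus one separator byte,
	// so the Builder grows at most once.
	n := 0
	for _, id := range ids {
		n += len(id) + 1
	}
	var b strings.Builder
	b.Grow(n)
	for i, id := range ids {
		if i > 0 {
			b.WriteByte(',')
		}
		b.WriteString(id)
	}
	return b.String()
}

func main() {
	fmt.Println(joinIDs([]string{"a", "b", "c"})) // a,b,c
}
```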
Rule Out External Bottlenecks First
Before optimizing Go code, verify the bottleneck is in your process — if 90% of latency is a slow DB query or API call, reducing allocations won't help.
Diagnose:
1. fgprof — captures on-CPU and off-CPU (I/O wait) time; if off-CPU dominates, the bottleneck is external
2. go tool pprof (goroutine profile) — many goroutines blocked in net.(*conn).Read or database/sql = external wait
3. Distributed tracing (OpenTelemetry) — span breakdown shows which upstream is slow
When external: optimize that component instead — query tuning, caching, connection pools, circuit breakers (→ See samber/cc-skills-golang@golang-database skill, Caching Patterns).
Iterative Optimization Methodology
The cycle: Define Goals → Benchmark → Diagnose → Improve → Benchmark
- Define your metric — latency, throughput, memory, or CPU? Without a target, optimizations are random
- Write an atomic benchmark — isolate one function per benchmark to avoid result contamination (→ See samber/cc-skills-golang@golang-benchmark skill)
- Measure baseline — go test -bench=BenchmarkMyFunc -benchmem -count=6 ./pkg/... | tee /tmp/report-1.txt
- Diagnose — use the Diagnose lines in each deep-dive section to pick the right tool
- Improve — apply ONE optimization at a time with an explanatory comment
- Compare — benchstat /tmp/report-1.txt /tmp/report-2.txt to confirm statistical significance
- Repeat — increment the report number, tackle the next bottleneck
Refer to library documentation for known patterns before inventing custom solutions. Keep all /tmp/report-*.txt files as an audit trail.
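The "atomic benchmark" step normally lives in a _test.go file; as a self-contained sketch, the same measurement can be driven programmatically through testing.Benchmark. The function under test, `itoaAll`, is invented for illustration — the point is one function per benchmark:

```go
package main

import (
	"fmt"
	"strconv"
	"testing"
)

// itoaAll is the single function under test — isolating it keeps
// the result uncontaminated by neighboring code.
func itoaAll(n int) []string {
	out := make([]string, 0, n) // preallocated; the change being measured
	for i := 0; i < n; i++ {
		out = append(out, strconv.Itoa(i))
	}
	return out
}

func main() {
	// testing.Benchmark runs the closure the same way `go test -bench` would.
	res := testing.Benchmark(func(b *testing.B) {
		b.ReportAllocs()
		for i := 0; i < b.N; i++ {
			itoaAll(100)
		}
	})
	fmt.Println(res.String(), res.MemString()) // ns/op, B/op, allocs/op
}
```

Feed the printed line into a `/tmp/report-N.txt` file and the benchstat comparison step works unchanged.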
Decision Tree: Where Is Time Spent?
| Bottleneck | Signal (from pprof) | Action |
|---|---|---|
| Too many allocations | allocation sites dominate the heap profile (alloc_objects/alloc_space) | Memory optimization |
| CPU-bound hot loop | function dominates CPU profile | CPU optimization |
| GC pauses / OOM | high GC%, container limits | Runtime tuning |
| Network / I/O latency | goroutines blocked on I/O | I/O & networking |
| Repeated expensive work | same computation/fetch multiple times | Caching patterns |
| Wrong algorithm | O(n²) where O(n) exists | Algorithmic complexity |
| Lock contention | mutex/block profile hot | → See samber/cc-skills-golang@golang-concurrency skill |
| Slow queries | DB time dominates traces | → See samber/cc-skills-golang@golang-database skill |
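To confirm the "too many allocations" row before and after a fix, testing.AllocsPerRun gives a quick in-code check. A minimal sketch with invented example functions — naive string concatenation versus a pre-sized strings.Builder:

```go
package main

import (
	"fmt"
	"strings"
	"testing"
)

// concatNaive allocates a fresh string on every += iteration.
func concatNaive(parts []string) string {
	s := ""
	for _, p := range parts {
		s += p
	}
	return s
}

// concatBuilder pre-sizes a strings.Builder so the whole join
// costs a single allocation.
func concatBuilder(parts []string) string {
	n := 0
	for _, p := range parts {
		n += len(p)
	}
	var b strings.Builder
	b.Grow(n)
	for _, p := range parts {
		b.WriteString(p)
	}
	return b.String()
}

func main() {
	parts := []string{"a", "b", "c", "d", "e", "f", "g", "h"}
	naive := testing.AllocsPerRun(100, func() { concatNaive(parts) })
	built := testing.AllocsPerRun(100, func() { concatBuilder(parts) })
	fmt.Printf("naive: %.0f allocs/run, builder: %.0f allocs/run\n", naive, built)
}
```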
Common Mistakes
| Mistake | Fix |
|---|---|
| Optimizing without profiling | Profile with pprof first — intuition is wrong ~80% of the time |
| Default http.Client without a custom Transport | MaxIdleConnsPerHost defaults to 2; set it to match your concurrency level |
| Logging in hot loops | Log calls prevent inlining and allocate even when the level is disabled. Guard with a level check before building log arguments |
| panic/recover as control flow | panic allocates a stack trace and unwinds the stack; use error returns |
| Micro-optimizing without benchmark proof | Only justified when profiling shows >10% improvement in a verified hot path |
| No GC tuning in containers | Set GOMEMLIMIT to 80-90% of container memory to prevent OOM kills |
| reflect.DeepEqual in production | 50-200x slower than typed comparison; use ==, a hand-written Equal method, or generated comparisons |
Deep Dives
- Memory Optimization — allocation patterns, backing array leaks, sync.Pool, struct alignment
- CPU Optimization — inlining, cache locality, false sharing, ILP, reflection avoidance
- I/O & Networking — HTTP transport config, streaming, JSON performance, cgo, batch operations
- Runtime Tuning — GOGC, GOMEMLIMIT, GC diagnostics, GOMAXPROCS, PGO
- Caching Patterns — algorithmic complexity, compiled patterns, singleflight, work avoidance
- Production Observability — Prometheus metrics, PromQL queries, continuous profiling, alerting rules
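As a taste of the sync.Pool topic in the Memory Optimization deep dive, here is a minimal sketch of pooled scratch buffers — the `render` function and pool name are invented; the pattern (Get, Reset, use, Put) is the standard one:

```go
package main

import (
	"bytes"
	"fmt"
	"sync"
)

// bufPool hands out reusable *bytes.Buffer values so hot paths
// don't allocate a fresh buffer per request.
var bufPool = sync.Pool{
	New: func() any { return new(bytes.Buffer) },
}

// render formats a line using a pooled buffer. Reset before reuse
// and always Put the buffer back, or the pool gains nothing.
func render(user string, n int) string {
	buf := bufPool.Get().(*bytes.Buffer)
	defer bufPool.Put(buf)
	buf.Reset()
	fmt.Fprintf(buf, "user=%s count=%d", user, n)
	return buf.String() // String copies out, so returning after Put is safe
}

func main() {
	fmt.Println(render("ana", 3)) // user=ana count=3
}
```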
CI Regression Detection
Automate benchmark comparison in CI to catch regressions before they reach production. → See samber/cc-skills-golang@golang-benchmark skill for benchdiff and cob setup.
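A hedged sketch of what such a CI step might look like using plain benchstat (workflow keys, file paths, and the baseline artifact are illustrative; the skill referenced above covers the dedicated benchdiff/cob tooling):

```yaml
# Illustrative GitHub Actions step — adapt names and paths to your repo.
- name: Benchmark regression check
  run: |
    go test -bench=. -benchmem -count=6 ./... | tee /tmp/report-new.txt
    go install golang.org/x/perf/cmd/benchstat@latest
    # /tmp/report-main.txt is assumed to be restored from a baseline artifact.
    benchstat /tmp/report-main.txt /tmp/report-new.txt
```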
Cross-References
- → See samber/cc-skills-golang@golang-benchmark skill for benchmarking methodology, benchstat, and b.Loop() (Go 1.24+)
- → See samber/cc-skills-golang@golang-troubleshooting skill for pprof workflow, escape analysis diagnostics, and performance debugging
- → See samber/cc-skills-golang@golang-data-structures skill for slice/map preallocation and strings.Builder
- → See samber/cc-skills-golang@golang-concurrency skill for worker pools, the sync.Pool API, goroutine lifecycle, and lock contention
- → See samber/cc-skills-golang@golang-safety skill for defer in loops and slice backing array aliasing
- → See samber/cc-skills-golang@golang-database skill for connection pool tuning and batch processing
- → See samber/cc-skills-golang@golang-observability skill for continuous profiling in production