Claude-skill-registry go-performance
Go performance optimization - profiling, benchmarks, memory management
install
source · Clone the upstream repo

```sh
git clone https://github.com/majiayu000/claude-skill-registry
```

Claude Code · Install into ~/.claude/skills/

```sh
T=$(mktemp -d) && git clone --depth=1 https://github.com/majiayu000/claude-skill-registry "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/data/go-performance" ~/.claude/skills/majiayu000-claude-skill-registry-go-performance && rm -rf "$T"
```
manifest:
skills/data/go-performance/SKILL.md
Go Performance Skill
Optimize Go application performance with profiling and best practices.
Overview
Comprehensive performance optimization including CPU/memory profiling, benchmarking, and common optimization patterns.
Parameters
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| profile_type | string | yes | - | Type: "cpu", "memory", "goroutine", "block" |
| duration | string | no | "30s" | Profile duration |
Core Topics
pprof Setup
import ( "net/http" _ "net/http/pprof" ) func main() { // Start pprof server go func() { http.ListenAndServe("localhost:6060", nil) }() // Your application runApp() }
CPU Profiling
```sh
# Collect a 30s CPU profile
go tool pprof "http://localhost:6060/debug/pprof/profile?seconds=30"

# Interactive commands
(pprof) top 10           # Top 10 CPU consumers
(pprof) list funcName    # Source view
(pprof) web              # Open in browser
(pprof) svg > cpu.svg    # Export SVG
```
Memory Profiling
```sh
# Heap profile (live objects)
go tool pprof http://localhost:6060/debug/pprof/heap

# Allocations since program start
go tool pprof http://localhost:6060/debug/pprof/allocs

(pprof) top -cum         # By cumulative allocations
(pprof) list funcName    # Where allocations happen
```
Benchmarking
```go
func BenchmarkProcess(b *testing.B) {
	data := setupData()
	b.ReportAllocs()
	b.ResetTimer() // exclude setup cost from the measurement
	for i := 0; i < b.N; i++ {
		Process(data)
	}
}

func BenchmarkProcess_Parallel(b *testing.B) {
	data := setupData()
	b.RunParallel(func(pb *testing.PB) {
		for pb.Next() {
			Process(data)
		}
	})
}
```
```sh
# Run benchmarks with allocation stats
go test -bench=. -benchmem ./...

# Compare before/after (install: go install golang.org/x/perf/cmd/benchstat@latest)
go test -bench=. -count=5 > old.txt
# make changes
go test -bench=. -count=5 > new.txt
benchstat old.txt new.txt
```
Memory Optimization
```go
// Preallocate slices when the final length is known
func ProcessItems(items []Item) []Result {
	results := make([]Result, 0, len(items)) // avoids repeated growth copies
	for _, item := range items {
		results = append(results, process(item))
	}
	return results
}

// Use sync.Pool for frequently allocated, short-lived objects
var bufferPool = sync.Pool{
	New: func() interface{} { return new(bytes.Buffer) },
}

func GetBuffer() *bytes.Buffer {
	buf := bufferPool.Get().(*bytes.Buffer)
	buf.Reset()
	return buf
}

func PutBuffer(buf *bytes.Buffer) {
	bufferPool.Put(buf)
}
```
Escape Analysis
```sh
# Check what escapes to the heap
go build -gcflags="-m -m" ./...
```

Common escapes:
- Returning pointers to local variables
- Storing values in interface{}
- Closures capturing variables
Optimization Patterns
```go
// String building: use strings.Builder instead of += concatenation
var b strings.Builder
for _, s := range parts {
	b.WriteString(s)
}
result := b.String()

// Avoid interface{} in hot paths; use generics or concrete types

// Reuse buffers in loops to avoid a fresh allocation per iteration
buffer := make([]byte, 1024)
for {
	n, err := reader.Read(buffer)
	if n > 0 {
		process(buffer[:n]) // consume the filled portion
	}
	if err != nil {
		break // io.EOF or a real error
	}
}
```
Profiling Commands
```sh
# Goroutine profile (leak detection)
go tool pprof http://localhost:6060/debug/pprof/goroutine

# Block profile (contention; enable with runtime.SetBlockProfileRate)
go tool pprof http://localhost:6060/debug/pprof/block

# Mutex profile (enable with runtime.SetMutexProfileFraction)
go tool pprof http://localhost:6060/debug/pprof/mutex

# Execution trace (detailed scheduling and GC events)
curl -o trace.out "http://localhost:6060/debug/pprof/trace?seconds=5"
go tool trace trace.out
```
Troubleshooting
Failure Modes
| Symptom | Cause | Fix |
|---|---|---|
| High CPU | Hot loop, GC | Profile, reduce allocs |
| High memory | Leak, no pooling | Heap profile, sync.Pool |
| Slow start | Large init | Lazy initialization |
| GC pauses | Many allocations | Reduce allocations |
Usage
Skill("go-performance")