# Skillshub golang-grpc

Provides gRPC usage guidelines, protobuf organization, and production-ready patterns for Golang microservices. Use when implementing, reviewing, or debugging gRPC servers/clients, writing proto files, setting up interceptors, handling gRPC errors with status codes, configuring TLS/mTLS, testing with bufconn, or working with streaming RPCs.

```sh
git clone https://github.com/ComeOnOliver/skillshub
T=$(mktemp -d) && git clone --depth=1 https://github.com/ComeOnOliver/skillshub "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/Harmeet10000/skills/golang-grpc" ~/.claude/skills/comeonoliver-skillshub-golang-grpc && rm -rf "$T"
```

`skills/Harmeet10000/skills/golang-grpc/SKILL.md`

Persona: You are a Go distributed systems engineer. You design gRPC services for correctness and operability — proper status codes, deadlines, interceptors, and graceful shutdown matter as much as the happy path.
Modes:
- Build mode — implementing a new gRPC server or client from scratch.
- Review mode — auditing existing gRPC code for correctness, security, and operability issues.
# Go gRPC Best Practices
Treat gRPC as a pure transport layer — keep it separate from business logic. The official Go implementation is `google.golang.org/grpc`.

This skill is not exhaustive. Refer to library documentation and code examples for more information. Context7 can help as a discoverability platform.
## Quick Reference

| Concern | Package / Tool |
|---|---|
| Service definition | `.proto` files compiled with `protoc` or `buf` |
| Code generation | `protoc-gen-go`, `protoc-gen-go-grpc` |
| Error handling | `google.golang.org/grpc/status` with `google.golang.org/grpc/codes` |
| Rich error details | `google.golang.org/genproto/googleapis/rpc/errdetails` |
| Interceptors | `grpc.ChainUnaryInterceptor`, `grpc.ChainStreamInterceptor` |
| Middleware ecosystem | `github.com/grpc-ecosystem/go-grpc-middleware` |
| Testing | `google.golang.org/grpc/test/bufconn` |
| TLS / mTLS | `google.golang.org/grpc/credentials` |
| Health checks | `google.golang.org/grpc/health/grpc_health_v1` |
## Proto File Organization

Organize by domain with versioned directories (`proto/user/v1/`). Always use Request/Response wrapper messages — bare types like `string` cannot have fields added later. Generate with `buf generate` or `protoc`.

Proto & code generation reference
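A minimal sketch of this layout; the package path, `go_package` option, and message names below are illustrative, not prescribed:

```protobuf
// proto/user/v1/user.proto (illustrative path and names)
syntax = "proto3";

package user.v1;

option go_package = "example.com/gen/user/v1;userv1";

service UserService {
  rpc GetUser(GetUserRequest) returns (GetUserResponse);
}

// Wrapper messages: new fields can be added later without breaking callers.
message GetUserRequest {
  string user_id = 1;
}

message GetUserResponse {
  User user = 1;
}

message User {
  string id = 1;
  string email = 2;
}
```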
## Server Implementation

- Implement the health check service (`grpc_health_v1`) — Kubernetes probes need it to determine readiness
- Use interceptors for cross-cutting concerns (logging, auth, recovery) — keeps business logic clean
- Use `GracefulStop()` with a timeout fallback to `Stop()` — drains in-flight RPCs while preventing hangs
- Disable reflection in production — it exposes your full API surface
```go
srv := grpc.NewServer(
	grpc.ChainUnaryInterceptor(loggingInterceptor, recoveryInterceptor),
)
pb.RegisterUserServiceServer(srv, svc)
healthpb.RegisterHealthServer(srv, health.NewServer())
go srv.Serve(lis)

// On shutdown signal:
stopped := make(chan struct{})
go func() { srv.GracefulStop(); close(stopped) }()
select {
case <-stopped:
case <-time.After(15 * time.Second):
	srv.Stop()
}
```
## Interceptor Pattern

```go
func loggingInterceptor(ctx context.Context, req any, info *grpc.UnaryServerInfo, handler grpc.UnaryHandler) (any, error) {
	start := time.Now()
	resp, err := handler(ctx, req)
	log.Printf("method=%s duration=%s code=%s", info.FullMethod, time.Since(start), status.Code(err))
	return resp, err
}
```
## Client Implementation

- Reuse connections — gRPC multiplexes RPCs on a single HTTP/2 connection; one-per-request wastes TCP/TLS handshakes
- Set deadlines on every call (`context.WithTimeout`) — without one, a slow upstream hangs goroutines indefinitely
- Use `round_robin` with headless Kubernetes services via the `dns:///` scheme
- Pass metadata (auth tokens, trace IDs) via `metadata.NewOutgoingContext`
```go
conn, err := grpc.NewClient("dns:///user-service:50051",
	grpc.WithTransportCredentials(creds),
	grpc.WithDefaultServiceConfig(`{
		"loadBalancingPolicy": "round_robin",
		"methodConfig": [{
			"name": [{"service": ""}],
			"timeout": "5s",
			"retryPolicy": {
				"maxAttempts": 3,
				"initialBackoff": "0.1s",
				"maxBackoff": "1s",
				"backoffMultiplier": 2,
				"retryableStatusCodes": ["UNAVAILABLE"]
			}
		}]
	}`),
)
client := pb.NewUserServiceClient(conn)
```
## Error Handling

Always return gRPC errors using `status.Error` with a specific code — a raw error becomes `codes.Unknown`, telling the client nothing actionable. Clients use codes to decide retry vs fail-fast vs degrade.
| Code | When to Use |
|---|---|
| `codes.InvalidArgument` | Malformed input (missing field, bad format) |
| `codes.NotFound` | Entity does not exist |
| `codes.AlreadyExists` | Create failed, entity exists |
| `codes.PermissionDenied` | Caller lacks permission |
| `codes.Unauthenticated` | Missing or invalid token |
| `codes.FailedPrecondition` | System not in required state |
| `codes.ResourceExhausted` | Rate limit or quota exceeded |
| `codes.Unavailable` | Transient issue, safe to retry |
| `codes.Internal` | Unexpected bug |
| `codes.DeadlineExceeded` | Timeout |
```go
// ✗ Bad — caller gets codes.Unknown, can't decide whether to retry
return nil, fmt.Errorf("user not found")

// ✓ Good — specific code lets clients act appropriately
if errors.Is(err, ErrNotFound) {
	return nil, status.Errorf(codes.NotFound, "user %q not found", req.UserId)
}
return nil, status.Errorf(codes.Internal, "lookup failed: %v", err)
```
For field-level validation errors, attach `errdetails.BadRequest` via `status.WithDetails`.
## Streaming
| Pattern | Use Case |
|---|---|
| Server streaming | Server sends a sequence (log tailing, result sets) |
| Client streaming | Client sends a sequence, server responds once (file upload, batch) |
| Bidirectional | Both send independently (chat, real-time sync) |
Prefer streaming over large single messages — avoids per-message size limits and lowers memory pressure.
```go
func (s *server) ListUsers(req *pb.ListUsersRequest, stream pb.UserService_ListUsersServer) error {
	for _, u := range users {
		if err := stream.Send(u); err != nil {
			return err
		}
	}
	return nil
}
```
## Testing

Use `bufconn` for in-memory connections that exercise the full gRPC stack (serialization, interceptors, metadata) without network overhead. Always test that error scenarios return the expected gRPC status codes.
## Security

- TLS MUST be enabled in production — credentials travel in metadata
- For service-to-service auth, use mTLS or delegate to a service mesh (Istio, Linkerd)
- For user auth, implement `credentials.PerRPCCredentials` and validate tokens in an auth interceptor
- Reflection SHOULD be disabled in production to prevent API discovery
## Performance

| Setting | Purpose | Typical Value |
|---|---|---|
| `keepalive.ClientParameters.Time` | Ping interval for idle connections | 30s |
| `keepalive.ClientParameters.Timeout` | Ping ack timeout | 10s |
| `grpc.MaxCallRecvMsgSize` | Override 4 MB default for large payloads | 16 MB |
| Connection pooling | Multiple conns for high-load streaming | 4 connections |
Most services do not need connection pooling — profile before adding complexity.
## Common Mistakes

| Mistake | Fix |
|---|---|
| Returning raw `error` | Becomes `codes.Unknown` — client can't decide whether to retry. Use `status.Error` with a specific code |
| No deadline on client calls | Slow upstream hangs indefinitely. Always `context.WithTimeout` |
| New connection per request | Wastes TCP/TLS handshakes. Create once, reuse — HTTP/2 multiplexes RPCs |
| Reflection enabled in production | Lets attackers enumerate every method. Enable only in dev/staging |
| `codes.Internal` for all errors | Wrong codes break client retry logic. `codes.Unavailable` triggers retry; `codes.Internal` does not |
| Bare types as RPC arguments | Can't add fields to `string`. Wrapper messages allow backwards-compatible evolution |
| Missing health check service | Kubernetes can't determine readiness, kills pods during deployments |
| Ignoring context cancellation | Long operations continue after caller gave up. Check `ctx.Err()` |
## Cross-References

- → See `samber/cc-skills-golang@golang-context` skill for deadline and cancellation patterns
- → See `samber/cc-skills-golang@golang-error-handling` skill for gRPC error to Go error mapping
- → See `samber/cc-skills-golang@golang-observability` skill for gRPC interceptors (logging, tracing, metrics)
- → See `samber/cc-skills-golang@golang-testing` skill for gRPC testing with bufconn