Claude-skill-registry application-metrics
Guide for instrumenting applications with metrics. Use when adding metrics or observability instrumentation to an application.
install
source · Clone the upstream repo
git clone https://github.com/majiayu000/claude-skill-registry
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/majiayu000/claude-skill-registry "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/data/application-metrics" ~/.claude/skills/majiayu000-claude-skill-registry-application-metrics && rm -rf "$T"
manifest: skills/data/application-metrics/SKILL.md
Application Metrics Instrumentation
Practical patterns for adding observability to applications.
Five Metric Types
| Type | Purpose | Example |
|---|---|---|
| Operational Counters | Track discrete events (success/failure) | `requests_total` |
| Resource Utilization | Current capacity usage (gauges) | `pool.active_connections` |
| Performance/Latency | Speed with explicit units | `queries.duration_ms` |
| Data Volume | Information flow rates | `ingest.bytes_total` |
| Business Logic | Domain-specific value | `orders.placed_total` |
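The five types can be sketched with a minimal in-process registry. This is plain Python with no metrics library assumed; the `Metrics` class and all metric names are illustrative, not part of any real API:

```python
class Metrics:
    """Toy in-process registry illustrating the five metric types."""
    def __init__(self):
        self.counters = {}   # monotonically increasing event counts
        self.gauges = {}     # point-in-time values (can go up or down)
        self.timings = {}    # recorded durations, units in the name

    def inc(self, name, n=1):
        self.counters[name] = self.counters.get(name, 0) + n

    def set_gauge(self, name, value):
        self.gauges[name] = value

    def observe_ms(self, name, ms):
        self.timings.setdefault(name, []).append(ms)

m = Metrics()
m.inc("myapp.api.users.requests_total")             # operational counter
m.set_gauge("myapp.db.pool.active", 7)              # resource utilization
m.observe_ms("myapp.db.queries.duration_ms", 12.5)  # performance/latency
m.inc("myapp.ingest.bytes_total", 4096)             # data volume
m.inc("myapp.orders.placed_total")                  # business logic
```

In a real service the same five calls would go to a metrics client (Prometheus, StatsD, etc.), but the type distinctions are identical.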
Naming Convention
<system>.<component>.<operation>.<metric_type>
Examples:
- `myapp.api.users.requests_total`
- `myapp.db.queries.duration_ms`
- `myapp.cache.items.hit_total`
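A small helper can enforce the convention at the call site (the function name `metric_name` is hypothetical):

```python
def metric_name(system, component, operation, metric_type):
    """Build a dotted metric name: <system>.<component>.<operation>.<metric_type>."""
    parts = (system, component, operation, metric_type)
    if not all(p and "." not in p for p in parts):
        raise ValueError("each segment must be non-empty and dot-free")
    return ".".join(parts)

print(metric_name("myapp", "api", "users", "requests_total"))
# myapp.api.users.requests_total
```

Validating segments up front keeps one stray dot from silently creating a new metric hierarchy.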
Component Checklists
API Endpoints
- Request count by endpoint and method
- Response time (p50, p95, p99)
- Error rate by status code
- Authentication failures
- Request/response payload sizes
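The endpoint checklist above can be folded into one wrapper around a handler. A sketch using plain dicts (store raw durations and compute p50/p95/p99 at export time; handler shape and names are illustrative):

```python
import time
from collections import defaultdict

request_count = defaultdict(int)    # (endpoint, method) -> count
error_count = defaultdict(int)      # (endpoint, status) -> count
durations_ms = defaultdict(list)    # endpoint -> latencies for percentile calc

def instrumented(endpoint, method, handler):
    """Wrap a handler, recording request count, latency, and errors by status."""
    def wrapper(*args, **kwargs):
        request_count[(endpoint, method)] += 1
        start = time.perf_counter()
        try:
            status, body = handler(*args, **kwargs)
        except Exception:
            error_count[(endpoint, 500)] += 1  # unhandled failure path
            raise
        finally:
            durations_ms[endpoint].append((time.perf_counter() - start) * 1000)
        if status >= 400:
            error_count[(endpoint, status)] += 1
        return status, body
    return wrapper

get_user = instrumented("/users", "GET", lambda uid: (200, {"id": uid}))
get_user(42)
```

Note the failure path is instrumented in both the `except` and the `status >= 400` branch, matching the anti-pattern guidance below about never counting only successes.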
Database
- Connection pool (active, idle, waiting)
- Query duration by operation type
- Slow query count (threshold-based)
- Error count by type (timeout, constraint, connection)
- Transaction commit/rollback rates
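Query duration and threshold-based slow-query counting fit naturally in a context manager wrapped around each database call. A sketch (the 100 ms threshold is an assumption to tune per workload):

```python
import time
from contextlib import contextmanager

query_duration_ms = {}   # operation -> list of durations
slow_query_count = {}    # operation -> count over threshold
SLOW_THRESHOLD_MS = 100  # assumed threshold; tune per workload

@contextmanager
def timed_query(operation):
    """Time a DB operation; count it as slow if it exceeds the threshold."""
    start = time.perf_counter()
    try:
        yield
    finally:
        ms = (time.perf_counter() - start) * 1000
        query_duration_ms.setdefault(operation, []).append(ms)
        if ms > SLOW_THRESHOLD_MS:
            slow_query_count[operation] = slow_query_count.get(operation, 0) + 1

with timed_query("select"):
    pass  # stand-in for cursor.execute(...)
```

Because the timing lives in `finally`, a query that raises still records its duration, so error paths stay visible in the latency data.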
Message Queues
- Messages produced/consumed per topic
- Queue depth (current backlog)
- Processing latency (end-to-end)
- Consumer lag
- Dead letter queue size
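Consumer lag, for instance, is the gap between the newest produced offset and the consumer's last committed offset, per partition. A minimal sketch (offset sources are broker-specific; the function name is illustrative):

```python
def consumer_lag(latest_offsets, committed_offsets):
    """Per-partition lag: produced head offset minus committed consumer offset."""
    return {p: latest_offsets[p] - committed_offsets.get(p, 0)
            for p in latest_offsets}

lag = consumer_lag({0: 1500, 1: 980}, {0: 1490, 1: 980})
# partition 0 is 10 messages behind; partition 1 is caught up
```

Exporting this as a gauge per partition makes a stalled consumer visible as steadily climbing lag even while queue depth looks normal.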
Caching
- Hit/miss ratio
- Eviction count and reason
- Cache size (entries and bytes)
- TTL expiration rate
- Connection pool status
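Hit/miss tracking belongs inside the cache wrapper itself so every access is counted. A dict-backed sketch (eviction policy here is insertion order, purely for illustration):

```python
class InstrumentedCache:
    """Dict-backed cache that tracks hits, misses, and evictions."""
    def __init__(self, max_entries=2):
        self.data, self.hits, self.misses, self.evictions = {}, 0, 0, 0
        self.max_entries = max_entries

    def get(self, key):
        if key in self.data:
            self.hits += 1
            return self.data[key]
        self.misses += 1
        return None

    def put(self, key, value):
        if len(self.data) >= self.max_entries and key not in self.data:
            self.data.pop(next(iter(self.data)))  # evict oldest insertion
            self.evictions += 1
        self.data[key] = value

    @property
    def hit_ratio(self):
        total = self.hits + self.misses
        return self.hits / total if total else 0.0

c = InstrumentedCache()
c.put("a", 1)
c.get("a")   # hit
c.get("b")   # miss
```

Export hits and misses as separate counters and derive the ratio at query time; a precomputed ratio loses the volume information needed to judge its significance.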
Locks/Synchronization
- Acquisition time
- Contention count (failed acquisitions)
- Hold duration
- Timeout count
- Deadlock occurrences
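Acquisition wait, hold duration, and timeouts can all be captured by wrapping the lock in a context manager. A sketch around `threading.Lock` (the metrics dict and names are illustrative):

```python
import threading
import time
from contextlib import contextmanager

lock_metrics = {"acquire_ms": [], "hold_ms": [], "timeouts": 0}

@contextmanager
def timed_lock(lock, timeout=1.0):
    """Measure acquisition wait, hold duration, and timeouts for a lock."""
    start = time.perf_counter()
    acquired = lock.acquire(timeout=timeout)
    waited = (time.perf_counter() - start) * 1000
    if not acquired:
        lock_metrics["timeouts"] += 1
        raise TimeoutError("lock acquisition timed out")
    lock_metrics["acquire_ms"].append(waited)
    held_start = time.perf_counter()
    try:
        yield
    finally:
        lock_metrics["hold_ms"].append((time.perf_counter() - held_start) * 1000)
        lock.release()

lock = threading.Lock()
with timed_lock(lock):
    pass  # critical section
```

Rising acquisition time with flat hold time points at contention; rising hold time points at slow work inside the critical section.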
Anti-patterns to Avoid
- Unbounded label cardinality - Never use user IDs, session tokens, or request IDs as labels
- Missing failure paths - Always instrument errors alongside successes
- No heartbeat metric - Add a constant gauge (e.g., `app.up = 1`) to verify instrumentation works
- Inconsistent naming - Stick to one convention across the codebase
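The heartbeat and cardinality rules combine naturally: emit a constant `app.up = 1` gauge, and bucket any unbounded value (such as a status code family) before using it as a label. A sketch with plain dicts, no metrics library assumed:

```python
gauges, counters = {}, {}

def set_gauge(name, value):
    gauges[name] = value

def inc(name, labels):
    """Increment a counter keyed by name plus a bounded label set."""
    key = (name, tuple(sorted(labels.items())))
    counters[key] = counters.get(key, 0) + 1

# Heartbeat: if this gauge stops reporting, the pipeline itself is broken.
set_gauge("app.up", 1)

# Bound label cardinality: record the status class, never a request ID.
inc("myapp.api.requests_total", {"status_class": "2xx"})
inc("myapp.api.requests_total", {"status_class": "2xx"})
```

With user IDs or request IDs as labels, every unique value would mint a new time series; a handful of status classes keeps the series count fixed.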
Full Reference
For detailed examples, patterns, and rationale, fetch the complete guide: https://pierrezemb.fr/posts/practical-guide-to-application-metrics/