Claude-code-plugins-plus-skills grammarly-observability

install
source · Clone the upstream repo
git clone https://github.com/jeremylongshore/claude-code-plugins-plus-skills
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/jeremylongshore/claude-code-plugins-plus-skills "$T" && mkdir -p ~/.claude/skills && cp -r "$T/plugins/saas-packs/grammarly-pack/skills/grammarly-observability" ~/.claude/skills/jeremylongshore-claude-code-plugins-plus-skills-grammarly-observability && rm -rf "$T"
manifest: plugins/saas-packs/grammarly-pack/skills/grammarly-observability/SKILL.md
source content

Grammarly Observability

Overview

Grammarly API integrations process user text through scoring, AI rewriting, and plagiarism endpoints, where latency and accuracy directly affect user experience. Monitor text-check response times, suggestion quality signals, API error rates, and token consumption against rate limits. Catching degradation early prevents users from seeing stale suggestions or silent failures in real-time editing flows.

Key Metrics

| Metric | Type | Target | Alert Threshold |
|---|---|---|---|
| Text check latency p95 | Histogram | < 300ms | > 800ms |
| API error rate | Gauge | < 1% | > 5% |
| Suggestion acceptance rate | Gauge | > 40% | < 20% (quality signal) |
| Token usage (daily) | Counter | < 80% quota | > 90% quota |
| Plagiarism check latency | Histogram | < 2s | > 5s |
| AI rewrite throughput | Counter | Stable | Drop > 30% |
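The acceptance-rate row is worth deriving explicitly, since it is a quality signal rather than a direct API measurement. A minimal sketch, assuming hypothetical client-side counters for suggestions shown and accepted:

```typescript
// Hypothetical counters: how many suggestions were surfaced vs. accepted.
function acceptanceRate(shown: number, accepted: number): number {
  if (shown === 0) return 1; // nothing surfaced, so nothing was rejected
  return accepted / shown;
}

// Thresholds from the table above: target > 40%, alert below 20%.
function acceptanceStatus(rate: number): 'healthy' | 'watch' | 'quality_signal' {
  if (rate >= 0.4) return 'healthy';
  if (rate >= 0.2) return 'watch';
  return 'quality_signal';
}
```

Emit the rate as the `grammarly.suggestion.acceptance` gauge on whatever cadence your client telemetry batches at.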

Instrumentation

```typescript
// `metrics` is assumed to be a StatsD/Datadog-style client already in scope.
async function trackGrammarlyCall(api: 'score' | 'ai' | 'plagiarism', textLen: number, fn: () => Promise<any>) {
  const start = Date.now();
  try {
    const result = await fn();
    metrics.histogram('grammarly.api.latency', Date.now() - start, { api });
    metrics.increment('grammarly.api.calls', { api });
    metrics.gauge('grammarly.text.length', textLen, { api });
    return result;
  } catch (err: any) {
    metrics.increment('grammarly.api.errors', { api, status: err?.status });
    throw err;
  }
}
```
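To make the wrapper concrete, here is a usage sketch. The metrics sink and `fakeScore` are stand-ins for a real client, and the wrapper is repeated so the snippet runs on its own:

```typescript
// Illustrative metrics sink that just records which series were touched.
const recorded: string[] = [];
const metrics = {
  histogram: (name: string, _v: number, _t: object) => { recorded.push(name); },
  increment: (name: string, _t: object) => { recorded.push(name); },
  gauge: (name: string, _v: number, _t: object) => { recorded.push(name); },
};

// Same wrapper as above, repeated so this sketch is self-contained.
async function trackGrammarlyCall(
  api: 'score' | 'ai' | 'plagiarism',
  textLen: number,
  fn: () => Promise<any>,
) {
  const start = Date.now();
  try {
    const result = await fn();
    metrics.histogram('grammarly.api.latency', Date.now() - start, { api });
    metrics.increment('grammarly.api.calls', { api });
    metrics.gauge('grammarly.text.length', textLen, { api });
    return result;
  } catch (err: any) {
    metrics.increment('grammarly.api.errors', { api, status: err?.status });
    throw err;
  }
}

// Usage: wrap the real SDK call; `fakeScore` stands in for it here.
const fakeScore = async () => ({ score: 92 });
trackGrammarlyCall('score', 120, fakeScore);
```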

Health Check Dashboard

```typescript
async function grammarlyHealth(): Promise<Record<string, string>> {
  const latencyP95 = await metrics.query('grammarly.api.latency', 'p95', '5m');
  const errorRate = await metrics.query('grammarly.api.error_rate', 'avg', '5m');
  const quotaUsed = await grammarlyAdmin.getQuotaUsage();
  return {
    api_latency: latencyP95 < 300 ? 'healthy' : 'slow',
    error_rate: errorRate < 0.01 ? 'healthy' : 'degraded',
    quota: quotaUsed < 0.8 ? 'healthy' : 'at_risk',
  };
}
```
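A small helper (an assumption on top of the skill, not part of it) can collapse that per-signal map into a single status for a liveness endpoint:

```typescript
// Overall status is 'ok' only when every signal reports 'healthy'.
function overallStatus(health: Record<string, string>): 'ok' | 'degraded' {
  return Object.values(health).every((s) => s === 'healthy') ? 'ok' : 'degraded';
}
```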

Alerting Rules

```typescript
const alerts = [
  { metric: 'grammarly.api.latency_p95', condition: '> 800ms', window: '10m', severity: 'warning' },
  { metric: 'grammarly.api.error_rate', condition: '> 0.05', window: '5m', severity: 'critical' },
  { metric: 'grammarly.quota.daily_pct', condition: '> 0.90', window: '1h', severity: 'warning' },
  { metric: 'grammarly.ai.throughput', condition: 'drop > 30%', window: '15m', severity: 'critical' },
];
```
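One way to act on rules of this shape is a small evaluator. This sketch handles only the simple `>` threshold conditions; the relative-drop rule needs baseline tracking and is out of scope here:

```typescript
type AlertRule = { metric: string; condition: string; window: string; severity: string };

// Parse conditions like '> 800ms' or '> 0.05' and compare the current value.
// Conditions that don't start with '>' (e.g. 'drop > 30%') never fire here.
function isFiring(rule: AlertRule, currentValue: number): boolean {
  const m = rule.condition.match(/^>\s*([\d.]+)/);
  return m !== null && currentValue > parseFloat(m[1]);
}
```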

Structured Logging

```typescript
function logGrammarlyEvent(api: string, data: Record<string, any>) {
  console.log(JSON.stringify({
    service: 'grammarly', api,
    duration_ms: data.latency, status: data.status,
    text_length: data.textLen, suggestion_count: data.suggestions,
    // Never log user text content — only metadata
    timestamp: new Date().toISOString(),
  }));
}
```
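To enforce the metadata-only rule defensively, a sanitizer can strip any field that might carry user text before the log call. The field-name list is an assumption about your payload shape, not part of the skill:

```typescript
// Field names that could carry user text (illustrative list).
const TEXT_FIELDS = new Set(['text', 'content', 'body', 'suggestionText']);

// Return a copy of the payload with all text-bearing fields removed.
function stripUserText(data: Record<string, unknown>): Record<string, unknown> {
  const safe: Record<string, unknown> = {};
  for (const [key, value] of Object.entries(data)) {
    if (!TEXT_FIELDS.has(key)) safe[key] = value;
  }
  return safe;
}
```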

Error Handling

| Signal | Meaning | Action |
|---|---|---|
| 429 rate limit | Token quota exhausted | Back off, check daily usage, request limit increase |
| Latency spike on /score | Grammarly service degradation | Check status page, enable local cache fallback |
| Suggestion count drops to 0 | API schema change or auth failure | Verify API key, check response format |
| Plagiarism timeout > 5s | Large document or service overload | Chunk text, retry with exponential backoff |
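The back-off actions in the table can be sketched as a retry helper. The retryable statuses, attempt count, and delay schedule here are illustrative choices, not values from the Grammarly API:

```typescript
// Retry a call with exponential backoff on 429s and timeouts.
async function withBackoff<T>(
  fn: () => Promise<T>,
  maxAttempts = 4,
  baseDelayMs = 500,
): Promise<T> {
  let delay = baseDelayMs;
  for (let attempt = 1; ; attempt++) {
    try {
      return await fn();
    } catch (err: any) {
      // Retry rate limits and timeouts; rethrow everything else immediately.
      const retryable = err?.status === 429 || err?.code === 'ETIMEDOUT';
      if (!retryable || attempt >= maxAttempts) throw err;
      await new Promise((resolve) => setTimeout(resolve, delay));
      delay *= 2; // exponential backoff
    }
  }
}
```

For plagiarism timeouts specifically, pair this with chunking the document before retrying, per the table above.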

Next Steps

See grammarly-incident-runbook.