Claude-code-plugins grammarly-cost-tuning

install
source · Clone the upstream repo
git clone https://github.com/jeremylongshore/claude-code-plugins-plus-skills
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/jeremylongshore/claude-code-plugins-plus-skills "$T" && mkdir -p ~/.claude/skills && cp -r "$T/plugins/saas-packs/grammarly-pack/skills/grammarly-cost-tuning" ~/.claude/skills/jeremylongshore-claude-code-plugins-grammarly-cost-tuning && rm -rf "$T"
manifest: plugins/saas-packs/grammarly-pack/skills/grammarly-cost-tuning/SKILL.md
source content

Grammarly Cost Tuning

Overview

Grammarly enterprise pricing is per-seat, with API costs driven by text-check request volume and document length. Each grammar check, tone analysis, and plagiarism scan consumes API quota in proportion to word count. For organizations processing thousands of documents daily, unchecked API usage, especially on long documents or duplicate content, creates substantial cost overruns. Validation gates, result caching, and sample-based scoring can reduce API spend by 40-60% without sacrificing quality coverage.

Cost Breakdown

| Component | Cost Driver | Optimization |
| --- | --- | --- |
| Seat licenses | Per-user/month enterprise pricing | Audit active seats quarterly; remove inactive users |
| Text check requests | Per-call API quota on grammar/tone checks | Cache results for identical text; deduplicate requests |
| Document length | API cost scales with word count per request | Chunk documents over 10K words; skip boilerplate sections |
| Plagiarism scans | Higher-cost endpoint for originality checks | Run only on final drafts, not every revision |
| AI rewrite suggestions | Premium feature with per-request cost | Batch rewrites; limit to flagged passages only |
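
Because API cost scales with word count, the chunking recommendation in the table above can be implemented as a pre-submission step. A minimal sketch, assuming a hypothetical checkFn that wraps whatever Grammarly client call you use; the 10K-word segment size follows the table above:

// Split an oversized document into ~10K-word segments before submission.
function chunkDocument(text: string, maxWords = 10_000): string[] {
  const words = text.split(/\s+/);
  const chunks: string[] = [];
  for (let i = 0; i < words.length; i += maxWords) {
    chunks.push(words.slice(i, i + maxWords).join(' '));
  }
  return chunks;
}

async function checkLargeDocument(
  text: string,
  checkFn: (t: string) => Promise<unknown> // hypothetical wrapper around the text-check call
): Promise<unknown[]> {
  // Process segments sequentially to avoid burst quota usage and timeouts.
  const results: unknown[] = [];
  for (const chunk of chunkDocument(text)) {
    results.push(await checkFn(chunk));
  }
  return results;
}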

API Call Reduction

class GrammarlyCostGate {
  private resultCache = new Map<string, any>();

  shouldCheck(text: string): boolean {
    const words = text.split(/\s+/).length;
    return words >= 30 && words <= 50_000; // Skip too-short; chunk too-long
  }

  async checkWithCache(text: string, checkFn: (t: string) => Promise<any>): Promise<any> {
    const hash = this.hashText(text);
    if (this.resultCache.has(hash)) return this.resultCache.get(hash);
    const result = await checkFn(text);
    this.resultCache.set(hash, result);
    return result;
  }

  sampleDocuments(documents: string[], rate = 0.2): string[] {
    // For bulk content audits, score a representative sample
    return documents.filter(() => Math.random() < rate);
  }

  private hashText(text: string): string {
    // Cheap fingerprint (prefix + length), not a true hash; swap in SHA-256 if collisions matter
    return text.slice(0, 200) + '|' + text.length;
  }
}
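
A sketch of how the gate might wrap a real check call; callGrammarlyCheck is a hypothetical stand-in for whichever Grammarly client wrapper you use:

const gate = new GrammarlyCostGate();

// Hypothetical wrapper around the Grammarly text-check request (stubbed here).
async function callGrammarlyCheck(text: string): Promise<any> {
  return { text, issues: [] };
}

async function checkDocument(text: string): Promise<any | null> {
  if (!gate.shouldCheck(text)) return null;             // skip too-short or too-long text
  return gate.checkWithCache(text, callGrammarlyCheck); // identical text hits the cache
}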

Usage Monitoring

class GrammarlyUsageMonitor {
  private daily = { score: 0, ai: 0, plagiarism: 0 };
  private budgets = { score: 5000, ai: 1000, plagiarism: 200 };

  record(type: 'score' | 'ai' | 'plagiarism'): void {
    this.daily[type]++;
    const utilization = (this.daily[type] / this.budgets[type]) * 100;
    if (utilization > 80) {
      console.warn(`Grammarly ${type} budget ${utilization.toFixed(0)}% used: ${this.daily[type]}/${this.budgets[type]}`);
    }
  }

  getReport(): Record<string, { used: number; budget: number }> {
    return Object.fromEntries(
      Object.keys(this.daily).map(k => [k, {
        used: this.daily[k as keyof typeof this.daily],
        budget: this.budgets[k as keyof typeof this.budgets]
      }])
    );
  }
}
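
A minimal sketch of wiring the monitor into the gated check path; gate and callGrammarlyCheck are the hypothetical pieces from the earlier sketch:

const monitor = new GrammarlyUsageMonitor();

async function monitoredCheck(text: string): Promise<any | null> {
  if (!gate.shouldCheck(text)) return null; // reuse the gate from the earlier sketch
  monitor.record('score');                  // count against the daily score budget
  return gate.checkWithCache(text, callGrammarlyCheck);
}

// End-of-day summary, e.g. { score: { used: 4200, budget: 5000 }, ... }
console.log(monitor.getReport());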

Cost Optimization Checklist

  • Cache grammar check results for identical text
  • Validate text length before sending (30-50K word range)
  • Use sample-based scoring for bulk content audits (20% sample)
  • Run plagiarism checks only on final drafts
  • Chunk documents over 10K words into smaller segments
  • Skip boilerplate sections (headers, footers, legal text), as sketched below
  • Limit AI rewrite suggestions to flagged passages only
  • Set per-endpoint daily budget alerts at 80% threshold
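
One way to implement the boilerplate-skipping item above, assuming recurring headers, footers, and legal lines can be matched by simple patterns; the patterns here are illustrative and should be tuned to your own templates:

// Strip sections that never need grammar checking before counting words or submitting.
const BOILERPLATE_PATTERNS: RegExp[] = [
  /^page \d+ of \d+$/gim,     // repeated page headers/footers
  /^confidential[^\n]*$/gim,  // legal/confidentiality lines
  /^copyright[^\n]*$/gim,     // copyright notices
];

function stripBoilerplate(text: string): string {
  return BOILERPLATE_PATTERNS
    .reduce((cleaned, pattern) => cleaned.replace(pattern, ''), text)
    .replace(/\n{3,}/g, '\n\n') // collapse gaps left by removed lines
    .trim();
}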

Error Handling

| Issue | Cause | Fix |
| --- | --- | --- |
| API quota exhausted mid-day | No usage monitoring or budget gates | Implement daily budget tracking with early warning |
| Duplicate checks on same content | No result caching | Hash-based cache for identical text submissions |
| Timeout on large documents | Single 50K+ word submission | Chunk into 10K-word segments and process sequentially |
| Plagiarism costs spiking | Running originality check on every save | Restrict to final draft submissions only |
| Stale cache returning outdated scores | Algorithm updates not reflected | TTL of 7 days on cached results; invalidate on version change |
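
The stale-cache fix above pairs a time-to-live with a version tag. A minimal sketch extending the caching idea from GrammarlyCostGate; the 7-day TTL and the engineVersion field are assumptions taken from the table above:

// Cache entries carry a timestamp and the engine version they were produced with.
interface CachedResult<T> {
  value: T;
  storedAt: number;       // epoch milliseconds when the result was cached
  engineVersion: string;  // invalidate when Grammarly's scoring changes
}

const CACHE_TTL_MS = 7 * 24 * 60 * 60 * 1000; // 7 days, per the fix above

function isFresh<T>(entry: CachedResult<T>, currentVersion: string): boolean {
  return (
    entry.engineVersion === currentVersion &&
    Date.now() - entry.storedAt < CACHE_TTL_MS
  );
}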

Resources

Next Steps

See grammarly-performance-tuning.