# claude-seo/seo-drift
Clone the repository, or install just this skill into `~/.claude/skills`:

```
git clone https://github.com/AgriciDaniel/claude-seo

T=$(mktemp -d) && git clone --depth=1 https://github.com/AgriciDaniel/claude-seo "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/seo-drift" ~/.claude/skills/agricidaniel-claude-seo-seo-drift && rm -rf "$T"
```
`skills/seo-drift/SKILL.md`

# SEO Drift Monitor (April 2026)
Git for your SEO. Capture baselines, detect regressions, track changes over time.
## Commands
| Command | Purpose |
|---|---|
| `baseline` | Capture current SEO state as a "known good" snapshot |
| `compare` | Compare current page state to stored baseline |
| `history` | Show change history and past comparisons |
## What It Captures
Every baseline records these SEO-critical elements:
| Element | Notes |
|---|---|
| Title tag | |
| Meta description | |
| Canonical URL | |
| Robots directives | |
| H1 headings | array |
| H2 headings | array |
| H3 headings | array |
| JSON-LD schema | array |
| Open Graph tags | dict |
| Core Web Vitals | dict |
| HTTP status code | |
| HTML content hash | SHA-256, computed |
| Schema content hash | SHA-256, computed |
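A baseline snapshot covering these elements might be assembled roughly as follows. This is an illustrative sketch only: the key names (`title`, `json_ld`, `og`, and so on) and the `build_snapshot` helper are assumptions, not the skill's actual schema; only the SHA-256 hashing of the HTML body and schema content is described above.

```python
import hashlib
import json

def sha256_hex(data: str) -> str:
    """SHA-256 hex digest, as used for the HTML and schema content hashes."""
    return hashlib.sha256(data.encode("utf-8")).hexdigest()

def build_snapshot(url: str, html: str, parsed: dict) -> dict:
    """Assemble one baseline record. `parsed` is assumed to hold elements
    already extracted from the page (titles, headings, schema, etc.)."""
    # Hash the JSON-LD separately so schema-only changes are detectable
    schema_blob = json.dumps(parsed.get("json_ld", []), sort_keys=True)
    return {
        "url": url,
        "title": parsed.get("title"),
        "meta_description": parsed.get("meta_description"),
        "canonical": parsed.get("canonical"),
        "robots": parsed.get("robots"),
        "h1": parsed.get("h1", []),       # heading fields are arrays
        "h2": parsed.get("h2", []),
        "h3": parsed.get("h3", []),
        "json_ld": parsed.get("json_ld", []),
        "og": parsed.get("og", {}),       # dict of Open Graph tags
        "cwv": parsed.get("cwv", {}),     # dict of Core Web Vitals
        "status_code": parsed.get("status_code"),
        "html_hash": sha256_hex(html),
        "schema_hash": sha256_hex(schema_blob),
    }
```

Storing two separate hashes lets a comparison distinguish "anything in the HTML changed" from "the structured data specifically changed".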
## How Comparison Works
The comparison engine applies 17 rules across 3 severity levels. Load `references/comparison-rules.md` for the full rule set with thresholds, recommended actions, and cross-skill references.
### Severity Levels
| Level | Meaning | Response Time |
|---|---|---|
| CRITICAL | SEO-breaking change, likely traffic loss | Immediate |
| WARNING | Potential impact, needs investigation | Within 1 week |
| INFO | Awareness only, may be intentional | Review at convenience |
## Storage
All data is stored locally in SQLite:

```
~/.cache/claude-seo/drift/baselines.db
```
### Tables
- `baselines`: Captured snapshots with all SEO elements
- `comparisons`: Diff results with triggered rules and severities
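A minimal sketch of what the two tables could look like; the column names here are assumptions, not the skill's actual schema. Per the Security section, all queries use parameterized `?` placeholders rather than string interpolation:

```python
import sqlite3

# Real path: ~/.cache/claude-seo/drift/baselines.db (auto-created on first use)
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE IF NOT EXISTS baselines (
    id          INTEGER PRIMARY KEY AUTOINCREMENT,
    url         TEXT NOT NULL,
    captured_at TEXT NOT NULL,
    snapshot    TEXT NOT NULL   -- JSON blob of all captured SEO elements
);
CREATE TABLE IF NOT EXISTS comparisons (
    id          INTEGER PRIMARY KEY AUTOINCREMENT,
    baseline_id INTEGER NOT NULL REFERENCES baselines(id),
    compared_at TEXT NOT NULL,
    findings    TEXT NOT NULL   -- JSON: triggered rules with severities
);
""")

# Parameterized insert -- never string interpolation
conn.execute(
    "INSERT INTO baselines (url, captured_at, snapshot) VALUES (?, ?, ?)",
    ("https://example.com", "2026-04-01T00:00:00Z", "{}"),
)
```

Keeping the full snapshot as one JSON blob keeps the schema stable even if the set of captured elements grows.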
URL normalization ensures consistent matching: lowercase scheme/host, strip default ports (80/443), sort query parameters, remove UTM parameters, strip trailing slashes.
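The normalization rules above can be sketched with the standard library. This is a minimal illustration under the stated rules; the skill's actual implementation may differ in details such as fragment handling:

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

DEFAULT_PORTS = {"http": 80, "https": 443}

def normalize_url(url: str) -> str:
    parts = urlsplit(url)
    scheme = parts.scheme.lower()
    host = (parts.hostname or "").lower()
    # Strip the scheme's default port (80/443); keep any other port
    netloc = host
    if parts.port is not None and parts.port != DEFAULT_PORTS.get(scheme):
        netloc = f"{host}:{parts.port}"
    # Remove UTM parameters, then sort what remains
    query = urlencode(sorted(
        (k, v) for k, v in parse_qsl(parts.query, keep_blank_values=True)
        if not k.lower().startswith("utm_")
    ))
    # Strip trailing slashes from the path
    path = parts.path.rstrip("/")
    return urlunsplit((scheme, netloc, path, query, parts.fragment))
```

For example, `normalize_url("HTTPS://Example.com:443/Page/?b=2&a=1&utm_source=x")` returns `"https://example.com/Page?a=1&b=2"`, so a baseline and a later comparison of the "same" URL hit the same stored rows.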
## Command: `baseline`
Captures the current state of a page and stores it.
Steps:

- Validate URL (SSRF protection via `google_auth.validate_url()`)
- Fetch page via `scripts/fetch_page.py`
- Parse HTML via `scripts/parse_html.py`
- Optionally fetch CWV via `scripts/pagespeed_check.py` (use `--skip-cwv` to skip)
- Hash HTML body and schema content (SHA-256)
- Store snapshot in SQLite
Execution:
```
python scripts/drift_baseline.py <url>
python scripts/drift_baseline.py <url> --skip-cwv
```
Output: JSON with baseline ID, timestamp, URL, and summary of captured elements.
## Command: `compare`
Fetches the current page state and diffs it against the most recent baseline.
Steps:
- Validate URL
- Load most recent baseline from SQLite (or a specific `--baseline-id`)
- Fetch and parse current page state
- Run all 17 comparison rules
- Classify findings by severity
- Store comparison result
- Output JSON diff report
Execution:
```
python scripts/drift_compare.py <url>
python scripts/drift_compare.py <url> --baseline-id 5
python scripts/drift_compare.py <url> --skip-cwv
```
Output: JSON with all triggered rules, old/new values, severity, and actions.
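To illustrate the shape of such a diff report, here is a tiny rule-evaluation sketch. The rule names, severities, and thresholds below are invented for illustration; the real 17 rules live in `references/comparison-rules.md`:

```python
def compare_fields(old: dict, new: dict) -> list[dict]:
    """Illustrative subset of drift rules -- NOT the skill's actual rule set."""
    findings = []

    def add(rule: str, severity: str, field: str) -> None:
        findings.append({
            "rule": rule, "severity": severity, "field": field,
            "old": old.get(field), "new": new.get(field),
        })

    # A newly added noindex directive is SEO-breaking
    if "noindex" in (new.get("robots") or "") and "noindex" not in (old.get("robots") or ""):
        add("noindex-added", "CRITICAL", "robots")
    # A changed title may be intentional but needs investigation
    if old.get("title") != new.get("title"):
        add("title-changed", "WARNING", "title")
    # Reworked H2s are usually just worth a heads-up
    if old.get("h2") != new.get("h2"):
        add("h2-changed", "INFO", "h2")
    return findings
```

Each finding carries the old and new values alongside its severity, which is what makes the JSON report actionable without re-fetching the baseline.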
After comparison, offer to generate an HTML report:
```
python scripts/drift_report.py <comparison_json_file> --output drift-report.html
```
## Command: `history`
Shows all baselines and comparisons for a URL.
Execution:
```
python scripts/drift_history.py <url>
python scripts/drift_history.py <url> --limit 10
```
Output: JSON array of baselines (newest first) with timestamps and comparison summaries.
## Cross-Skill Integration
When drift is detected, recommend the appropriate specialized skill:
| Finding | Recommendation |
|---|---|
| Schema removed or modified | Full schema validation |
| CWV regression | Performance audit |
| Title or meta description changed | Content analysis |
| Canonical changed or removed | Indexability check |
| Noindex added | Crawlability audit |
| H1/heading structure changed | E-E-A-T review |
| OG tags removed | Social sharing analysis |
| Status code changed to error | Full diagnostics |
## Error Handling
| Scenario | Action |
|---|---|
| URL unreachable | Report the fetch error. Do not guess state. Suggest the user verify the URL. |
| No baseline exists for URL | Inform the user and suggest running `baseline` first. |
| SSRF blocked (private IP) | Report the rejection. Never bypass. |
| SQLite database missing | Auto-create on first use. No error. |
| CWV fetch fails (no API key) | Leave CWV fields empty. Skip CWV rules during comparison. |
| Page returns 4xx/5xx | Still capture as a baseline (the status code IS a tracked field). |
| Multiple baselines exist | Use the most recent unless one is specified. |
## Security
- All URL fetching goes through `scripts/fetch_page.py`, which enforces SSRF protection (blocks private IPs, loopback, reserved ranges, GCP metadata endpoints)
- No curl, no subprocess HTTP calls -- only the project's validated fetch pipeline
- All SQLite queries use parameterized placeholders (`?`), never string interpolation
- TLS always verified -- no `verify=False` anywhere in the pipeline
## Typical Workflows
### Pre/Post Deployment Check
```
/seo drift baseline https://example.com   # Before deploy
# ... deploy happens ...
/seo drift compare https://example.com    # After deploy
```
### Ongoing Monitoring
```
/seo drift baseline https://example.com   # Initial capture
# ... weeks later ...
/seo drift compare https://example.com    # Check for drift
/seo drift history https://example.com    # Review all changes
```
### Investigating a Traffic Drop
```
/seo drift compare https://example.com    # What changed?
/seo drift history https://example.com    # When did it change?
```