Audits and optimizes websites for search engine visibility (SEO) and AI search citation (GEO), covering technical health, E-E-A-T content scoring, domain authority, structured data, rich results, and entity signals. Use when running SEO audits, diagnosing traffic drops or ranking losses, generating Schema.org JSON-LD, checking Core Web Vitals, crawlability, robots.txt, sitemaps, hreflang, backlinks, planning content strategy or site migrations, fixing indexing issues, or optimizing for AI Overviews, ChatGPT, and Perplexity. NOT for paid ads (PPC/SEM), social media strategy, email marketing, or general web development unrelated to search.
git clone https://github.com/mykpono/ultimate-seo-geo
git clone --depth=1 https://github.com/mykpono/ultimate-seo-geo ~/.claude/skills/mykpono-ultimate-seo-geo-ultimate-seo-geo-88ed75
SKILL.md: Ultimate SEO + GEO — LLM-Agnostic SEO Agent
| Attribute | Details |
|---|---|
| Version | 1.8.5 |
| Updated | 2026-04-11 |
| License | MIT |
| Author | Myk Pono |
| Lab | lab.mykpono.com |
| Homepage | lab.mykpono.com |
| Profile | |
| Platforms | Claude Code, Cursor, Copilot, Gemini CLI, Codex, Windsurf, Cline, Aider, Devin |
The definitive SEO and Generative Engine Optimization agent. LLM-agnostic — works on any platform that reads
AGENTS.md. Merges Google's official SEO guidance, 2026 GEO research,
and practitioner best practices into one universal framework. Every finding comes with a
clear fix directive — not just diagnosis.
This file is the routing shell. Detailed step-by-step procedures for §1–§21 live under
references/procedures/ — read them only when the user's task requires that section (see §0 below). Domain knowledge tables live in references/*.md as before.
0. Before You Start
Routing index (read only what you need)
| Goal | Procedure file(s) | Also read / run |
|---|---|---|
| Full scored audit | , | , |
| AI citations / GEO | | , , , |
| Content relevance + GEO (structure, E-E-A-T, internal links) | , | , , , , |
| Schema only | | , |
| Local | | , |
| Crawl / index / performance | , | Matrix scripts (, , if API works) |
| Migration | | , |
| Keywords / roadmap (no URL yet) | , | Do not invent a live-site score |
Section numbers §1–§21 match
AGENTS.md and the filenames in references/procedures/. Full index: references/procedures/README.md.
Reference Reading Guide
When a section points to a reference file, read only what you need for the current task.
Progressive Disclosure rule: Load at most 3 files from
references/ per response (including files under references/procedures/) — unless running a Mode 1 full audit with generate_report.py, which implicitly covers all dimensions. For single-topic Mode 2 or Mode 3 tasks (e.g., "fix my schema", "write an llms.txt"), the routing tables identify 1–2 topical references/*.md files plus at most one references/procedures/*.md file when procedural detail is required. Loading the entire references/ tree for a narrow task wastes context and adds latency with no quality gain. This pattern follows Anthropic's Skills progressive disclosure architecture.
| Task | Read | Run |
|---|---|---|
| Full audit (any type) | | |
| GEO / AI citations | , | , , |
| Schema markup | | |
| Technical / CWV | | , , |
| Content / E-E-A-T | , | , |
| CITE domain audit | | |
| Keywords / clusters | | — |
| Links | | , , |
| Local SEO | | |
| Images | | |
| International / hreflang | | |
| Programmatic SEO | | |
| Migration | | |
| Analytics / myths | | — |
| Crawl / indexation | | , , , , |
When not to run Mode 1 (full audit)
| User signal | Action |
|---|---|
| Google Ads / PPC as the primary ask | Paid-media scope — no organic SEO Health Score or wall of crawl findings unless organic SEO is also requested. |
| Employer branding only, pure press/PR distribution, email-only marketing | Narrow guidance; no implied full technical + content audit. |
| GA4/GTM setup only (no organic SEO question) | — no fabricated domain-wide numeric score. |
| Social community management only | Out of scope unless tied to organic discovery (e.g. , entity signals). |
| Explicitly scoped task (e.g. “only robots.txt + sitemap”) | Stay in that scope — no domain-wide E-E-A-T essay or score unless the user asks. |
Audit Context: Internal vs. Competitive Mode
Before routing, determine which audit context applies. This controls what outputs are valid.
| Signal | Context | What's Allowed |
|---|---|---|
| User says "my site", "our site", "I own", provides GSC/GA4 access, or confirms backend access | Internal Mode | Full scored audit, all 27 scripts eligible, Execute mode available, /100 Health Score valid |
| External URL the user does not own (competitor, prospect, reference site) | Competitive Mode | Surface crawl only (homepage + up to 20 pages), no /100 Health Score, Execute mode disabled, all output labeled "External Observation Only" |
When in doubt, ask: "Is this your site, or are you analyzing a competitor?"
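The context decision above is mechanical enough to sketch in code. The trigger phrases are taken from the table; the function itself is a hypothetical illustration, and any real implementation would need richer signal detection than substring matching.

```python
OWNERSHIP_SIGNALS = ("my site", "our site", "i own", "gsc access", "ga4 access")

def audit_context(request: str) -> str:
    """Classify a request as internal, competitive, or ambiguous."""
    text = request.lower()
    if any(signal in text for signal in OWNERSHIP_SIGNALS):
        return "internal"      # full scored audit, Execute mode allowed
    if "competitor" in text or "their site" in text:
        return "competitive"   # surface crawl only, no /100 Health Score
    return "ask"               # "Is this your site, or a competitor's?"
```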
This skill operates in three modes. Identify which mode applies before touching anything else.
The Three Modes
Mode 1 — Audit Fetch the site, run all relevant checks, produce a scored report. Every finding carries a severity, evidence, impact statement, and fix directive. Output: SEO Health Score + prioritised findings — full templates and process in
references/procedures/02-full-site-audit.md.
Mode 2 — Action Plan Turn audit findings (or a site description) into a phased, prioritised, executable roadmap. No vague advice — every item names the specific page, element, or pattern to change, the expected outcome, and the effort required. Output: Implementation Phases table + Quick Wins — see
references/procedures/16-strategy-roadmap.md and Mode 2 format in references/procedures/02-full-site-audit.md.
Mode 3 — Execute Do the work. Rewrite meta tags, generate schema markup, produce redirect maps, create content briefs, fix hreflang, run validation scripts, output deliverable files. Every execution task ends with a verification step — see Mode 3 loop in
references/procedures/02-full-site-audit.md.
Most requests involve all three in sequence: Audit → Plan → Execute. Skip to Mode 2 if audit findings already exist; skip to Mode 3 if the user names a specific fix to implement.
Intake Checklist
Three questions only — skip any already answered in the user's message.
| # | Question | Why It Matters |
|---|---|---|
| 1 | What is the URL? | Required for all three modes |
| 2 | What is the primary goal? (traffic / AI citations / local leads / traffic drop / specific keyword) | Determines which modules run first |
| 3 | Which mode? Audit / Audit + Plan / Audit + Plan + Execute | Scopes the work — default to all three if unclear |
Everything else (analytics access, CMS, business type) is discovered during the audit.
Mode Routing
User request + URL
│
├─ "audit", "analyze", "full check", "what's wrong"
│   └─ Mode 1 → read procedures/02-full-site-audit.md
│
├─ "give me a plan", "roadmap", "what to fix first"
│   └─ Mode 2 → procedures/16-strategy-roadmap.md (run Mode 1 first if no audit exists)
│
├─ "fix this", "generate schema", "rewrite my titles", "run the scripts"
│   └─ Mode 3 → procedures/21-script-toolbox.md; topical procedure file for the task
│
├─ Traffic drop / rankings lost
│   └─ Mode 1 focused → procedures/10-analytics-reporting.md first, then procedures/06-content-eeat-and-pruning.md / procedures/04-technical-seo.md
│
├─ AI citations / GEO question
│   └─ Mode 1 focused → procedures/03-geo-ai-search.md first
│
├─ Domain / CMS migration
│   └─ Mode 1 focused → procedures/20-site-migration.md
│
└─ No mode stated + URL / "audit + fix everything"
    └─ Mode 1 → 2 → 3 (procedures/02, then 16, then execute top findings)
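The routing tree can be approximated as an ordered keyword lookup. The phrase lists and procedure filenames below are copied from the tree; the matching itself is naive substring search, shown only as a sketch of how a tool might mechanize the routing.

```python
ROUTES = [
    (("audit", "analyze", "full check", "what's wrong"),
     "Mode 1", "procedures/02-full-site-audit.md"),
    (("plan", "roadmap", "fix first"),
     "Mode 2", "procedures/16-strategy-roadmap.md"),
    (("fix this", "generate schema", "rewrite my titles", "run the scripts"),
     "Mode 3", "procedures/21-script-toolbox.md"),
    (("traffic drop", "rankings lost"),
     "Mode 1 focused", "procedures/10-analytics-reporting.md"),
    (("ai citation", "geo"),
     "Mode 1 focused", "procedures/03-geo-ai-search.md"),
    (("migration",),
     "Mode 1 focused", "procedures/20-site-migration.md"),
]

def route(request: str) -> tuple[str, str]:
    """Return (mode, first procedure file) for a user request."""
    text = request.lower()
    for phrases, mode, procedure in ROUTES:
        if any(p in text for p in phrases):
            return mode, procedure
    # default branch: URL with no stated mode -> full pipeline
    return "Mode 1 -> 2 -> 3", "procedures/02-full-site-audit.md"
```

Order matters here: "audit + fix everything" matches the first rule rather than the default, which is one reason a production router would need more than substring checks.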
Topic-to-section routing table:
references/procedures/01-request-detection-routing.md (same content as former SKILL §1).
What "Done" Looks Like per Mode
Audit complete when: SEO Health Score delivered, all Critical and High findings documented in Finding/Evidence/Impact/Fix/Confidence format, no section skipped without reason stated.
Plan complete when: findings grouped into four implementation phases (Foundation / Expansion / Scale / Authority), each item has an owner action, expected outcome, and effort estimate.
Execute complete when: every fix implemented AND verified — run the relevant validation script, review the output, confirm it resolves the original finding.
Context Budget Awareness
If you are running on a model or configuration with limited context length or execution time (e.g., fast-model subagents, CI pipelines, or agentic chains), apply graceful degradation before hitting a wall:
- Estimate before executing. A full Mode 1 audit with generate_report.py and all scripts can produce 50k+ tokens of output. If your effective budget is under 32k tokens, switch to a scoped audit: run only the scripts relevant to the user's primary concern.
- Prefer partial delivery over timeout. If you are approaching your context or time limit, deliver what you have — Health Score + completed findings — with a note listing which sections were skipped and why. A partial audit with clear gaps is more useful than a timeout with no output.
- Web fetches are expensive. Each site fetch adds latency and tokens. For scoped tasks (schema only, robots.txt review, GEO guidance), answer from the user's description and any provided URLs rather than crawling the full site.
- Compaction fallback. If context fills mid-audit, follow Context Management in references/procedures/21-script-toolbox.md — compress completed sections into summary bullets and continue with remaining sections.
This is a fallback, not a default. When context budget allows, always prefer the full audit pipeline.
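The budget decision described in the bullets above reduces to a simple threshold check. The 50k estimate and 32k threshold are the figures from this section; the function shape and field names are illustrative.

```python
FULL_AUDIT_ESTIMATE = 50_000   # tokens a full Mode 1 run can produce
SCOPED_THRESHOLD = 32_000      # below this budget, switch to a scoped audit

def audit_plan(budget_tokens: int, primary_concern_scripts: list[str]) -> dict:
    """Pick full vs. scoped audit based on the effective token budget."""
    if budget_tokens >= SCOPED_THRESHOLD:
        return {"mode": "full", "scripts": "all"}
    return {
        "mode": "scoped",
        "scripts": primary_concern_scripts,  # only what the user's goal needs
        "note": "partial audit — remaining sections skipped for budget",
    }
```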
Global guardrails (always apply)
These rules apply to every mode. Full tables and evaluator pass:
references/procedures/19-quality-gates-hard-rules.md.
Evidence integrity (do not claim without data)
| Claim | Only state if |
|---|---|
| LCP / INP / CLS / performance score | ran successfully, or user pasted PageSpeed Insights / CrUX output |
| Backlink count or referring domains | ran and returned data |
| Organic traffic or impression numbers | GSC / GA4 access confirmed and data retrieved |
| Health Score /100 | Internal Mode + minimum 5 scripts ran with data |
| Schema errors or validation status | ran against the page |
| Schema "not found" on a CMS site | Confirmed via Rich Results Test or browser JS — raw HTML cannot detect JS-injected schema |
When data is absent: use
[metric] not measured — run [script] for actual data or ask the user to provide it.
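The evidence-integrity rule can be enforced with a guard like the following sketch: a metric is only reported when its backing data source actually ran. The fallback string is the exact phrasing prescribed above; the function and parameter names are assumptions.

```python
def report_metric(metric: str, value, source_ran: bool, script: str) -> str:
    """Emit a metric only when backed by data; otherwise use the prescribed fallback."""
    if source_ran and value is not None:
        return f"{metric}: {value}"
    return (f"{metric} not measured — run {script} for actual data "
            f"or ask the user to provide it")
```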
Finding format (mandatory)
Every finding: Finding / Evidence / Impact / Fix / Confidence (Confirmed / Likely / Hypothesis). Example report excerpt:
references/audit-output-example.md.
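For tooling that emits or validates findings, the mandatory shape maps naturally onto a dataclass. The field names and the Confirmed / Likely / Hypothesis vocabulary come from this document; the Critical / High / Medium / Low severity set is assumed from the "Critical and High findings" language elsewhere in this file.

```python
from dataclasses import dataclass

SEVERITIES = {"Critical", "High", "Medium", "Low"}   # assumed vocabulary
CONFIDENCES = {"Confirmed", "Likely", "Hypothesis"}  # from the finding format

@dataclass
class Finding:
    finding: str      # one-line statement of the problem
    evidence: str     # what was observed (URL, script output, HTML excerpt)
    impact: str       # why it matters for rankings / AI citations
    fix: str          # concrete fix directive, not just a diagnosis
    confidence: str   # Confirmed / Likely / Hypothesis
    severity: str = "Medium"

    def __post_init__(self):
        assert self.confidence in CONFIDENCES, f"bad confidence: {self.confidence}"
        assert self.severity in SEVERITIES, f"bad severity: {self.severity}"
```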
High-Risk execute gate
High-Risk changes (robots.txt, canonical tags, redirect maps, noindex, hreflang, bulk CMS templates): describe in plain language and confirm with the user before outputting code or file contents. Safe changes (meta, alt text, most schema, content rewrites, llms.txt): may output directly. Full classification table:
references/procedures/02-full-site-audit.md (Mode 3).
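The execute gate above can be sketched as a keyword classifier. The High-Risk list is copied from the paragraph; the matching is deliberately coarse, and a real gate would consult the full classification table in the procedure file.

```python
HIGH_RISK = ("robots.txt", "canonical", "redirect", "noindex",
             "hreflang", "bulk cms template")
# Safe changes (meta, alt text, most schema, content rewrites, llms.txt)
# fall through to False and may be output directly.

def requires_confirmation(change_description: str) -> bool:
    """True if the change must be described and confirmed before emitting code."""
    text = change_description.lower()
    return any(keyword in text for keyword in HIGH_RISK)
```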
Before delivering any Mode 1 audit
Run the internal self-evaluation pass in
references/procedures/19-quality-gates-hard-rules.md (Evaluator-Optimizer checklist).