Awesome-omni-skills ai-seo-v2

AI SEO workflow skill. Use this skill when the user needs to optimize content for AI search and LLM citations across AI Overviews, ChatGPT, Perplexity, Claude, Gemini, and similar systems; when improving AI visibility, answer engine optimization, or citation readiness; and when the operator should preserve the upstream workflow, copied support files, and provenance before merging or handing off.

install
source · Clone the upstream repo
git clone https://github.com/diegosouzapw/awesome-omni-skills
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/diegosouzapw/awesome-omni-skills "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/ai-seo-v2" ~/.claude/skills/diegosouzapw-awesome-omni-skills-ai-seo-v2 && rm -rf "$T"
manifest: skills/ai-seo-v2/SKILL.md
source content

AI SEO

Overview

This public intake copy packages `plugins/antigravity-awesome-skills-claude/skills/ai-seo` from https://github.com/sickn33/antigravity-awesome-skills into the native Omni Skills editorial shape without hiding its origin.

Use it when the operator needs the upstream workflow, support files, and repository context to stay intact while the public validator and private enhancer continue their normal downstream flow.

This intake keeps the copied upstream files intact and uses `metadata.json` plus `ORIGIN.md` as the provenance anchor for review.

AI SEO

You are an expert in AI search optimization — the practice of making content discoverable, extractable, and citable by AI systems including Google AI Overviews, ChatGPT, Perplexity, Claude, Gemini, and Copilot. Your goal is to help users get their content cited as a source in AI-generated answers.

Imported source sections that did not map cleanly to the public headings are still preserved below or in the support files. Notable imported sections: Before Starting, How AI Search Works, AI Visibility Audit, Optimization Strategy, Content Types That Get Cited Most, Monitoring AI Visibility.

When to Use This Skill

Use this section as the trigger filter. It should make the activation boundary explicit before the operator loads files, runs commands, or opens a pull request.

  • Use when optimizing content to be cited by LLMs and AI search systems.
  • Use when the user asks about AI SEO, AEO, GEO, LLM visibility, or AI citations.
  • Use when traditional SEO alone is not the full question and AI-specific discoverability matters.
  • Use when the request clearly matches the imported source intent: Optimize content for AI search and LLM citations across AI Overviews, ChatGPT, Perplexity, Claude, Gemini, and similar systems. Use when improving AI visibility, answer engine optimization, or citation readiness.
  • Use when the operator should preserve upstream workflow detail instead of rewriting the process from scratch.
  • Use when provenance needs to stay visible in the answer, PR, or review packet.

Operating Table

| Situation | Start here | Why it matters |
| --- | --- | --- |
| First-time use | `metadata.json` | Confirms repository, branch, commit, and imported path before touching the copied workflow |
| Provenance review | `ORIGIN.md` | Gives reviewers a plain-language audit trail for the imported source |
| Workflow execution | `references/content-patterns.md` | Starts with the smallest copied file that materially changes execution |
| Supporting context | `references/platform-ranking-factors.md` | Adds the next most relevant copied source file without loading the entire package |
| Handoff decision | Related Skills section | Helps the operator switch to a stronger native skill when the task drifts |

Workflow

This workflow is intentionally editorial and operational at the same time. It keeps the imported source useful to the operator while still satisfying the public intake standards that feed the downstream enhancer flow.

  1. Confirm the user goal, the scope of the imported workflow, and whether this skill is still the right router for the task.
  2. Read the overview and provenance files before loading any copied upstream support files.
  3. Load only the references, examples, prompts, or scripts that materially change the outcome for the current request.
  4. Execute the upstream workflow while keeping provenance and source boundaries explicit in the working notes.
  5. Validate the result against the upstream expectations and the evidence you can point to in the copied files.
  6. Escalate or hand off to a related skill when the work moves out of this imported workflow's center of gravity.
  7. Before merge or closure, record what was used, what changed, and what the reviewer still needs to verify.

Imported Workflow Notes

Imported: Before Starting

Check for product marketing context first: if `.agents/product-marketing-context.md` exists (or `.claude/product-marketing-context.md` in older setups), read it before asking questions. Use that context and only ask for information not already covered or specific to this task.

Gather this context (ask if not provided):

1. Current AI Visibility

  • Do you know if your brand appears in AI-generated answers today?
  • Have you checked ChatGPT, Perplexity, or Google AI Overviews for your key queries?
  • What queries matter most to your business?

2. Content & Domain

  • What type of content do you produce? (Blog, docs, comparisons, product pages)
  • What's your domain authority / traditional SEO strength?
  • Do you have existing structured data (schema markup)?

3. Goals

  • Get cited as a source in AI answers?
  • Appear in Google AI Overviews for specific queries?
  • Compete with specific brands already getting cited?
  • Optimize existing content or create new AI-optimized content?

4. Competitive Landscape

  • Who are your top competitors in AI search results?
  • Are they being cited where you're not?

Examples

Example 1: Ask for the upstream workflow directly

Use @ai-seo-v2 to handle <task>. Start from the copied upstream workflow, load only the files that change the outcome, and keep provenance visible in the answer.

Explanation: This is the safest starting point when the operator needs the imported workflow, but not the entire repository.

Example 2: Ask for a provenance-grounded review

Review @ai-seo-v2 against metadata.json and ORIGIN.md, then explain which copied upstream files you would load first and why.

Explanation: Use this before review or troubleshooting when you need a precise, auditable explanation of origin and file selection.

Example 3: Narrow the copied support files before execution

Use @ai-seo-v2 for <task>. Load only the copied references, examples, or scripts that change the outcome, and name the files explicitly before proceeding.

Explanation: This keeps the skill aligned with progressive disclosure instead of loading the whole copied package by default.

Example 4: Build a reviewer packet

Review @ai-seo-v2 using the copied upstream files plus provenance, then summarize any gaps before merge.

Explanation: This is useful when the PR is waiting for human review and you want a repeatable audit packet.

Best Practices

Treat the generated public skill as a reviewable packaging layer around the upstream repository. The goal is to keep provenance explicit and load only the copied source material that materially improves execution.

  • Keep the imported skill grounded in the upstream repository; do not invent steps that the source material cannot support.
  • Prefer the smallest useful set of support files so the workflow stays auditable and fast to review.
  • Keep provenance, source commit, and imported file paths visible in notes and PR descriptions.
  • Point directly at the copied upstream files that justify the workflow instead of relying on generic review boilerplate.
  • Treat generated examples as scaffolding; adapt them to the concrete task before execution.
  • Route to a stronger native skill when architecture, debugging, design, or security concerns become dominant.

Troubleshooting

Problem: The operator skipped the imported context and answered too generically

Symptoms: The result ignores the upstream workflow in `plugins/antigravity-awesome-skills-claude/skills/ai-seo`, fails to mention provenance, or does not use any copied source files at all. Solution: Re-open `metadata.json`, `ORIGIN.md`, and the most relevant copied upstream files. Load only the files that materially change the answer, then restate the provenance before continuing.

Problem: The imported workflow feels incomplete during review

Symptoms: Reviewers can see the generated `SKILL.md`, but they cannot quickly tell which references, examples, or scripts matter for the current task. Solution: Point at the exact copied references, examples, scripts, or assets that justify the path you took. If the gap is still real, record it in the PR instead of hiding it.

Problem: The task drifted into a different specialization

Symptoms: The imported skill starts in the right place, but the work turns into debugging, architecture, design, security, or release orchestration that a native skill handles better. Solution: Use the related skills section to hand off deliberately. Keep the imported provenance visible so the next skill inherits the right context instead of starting blind.

Imported Troubleshooting Notes

Imported: Common Mistakes

  • Ignoring AI search entirely — ~45% of Google searches now show AI Overviews, and ChatGPT/Perplexity are growing fast
  • Treating AI SEO as separate from SEO — Good traditional SEO is the foundation; AI SEO adds structure and authority on top
  • Writing for AI, not humans — If content reads like it was written to game an algorithm, it won't get cited or convert
  • No freshness signals — Undated content loses to dated content because AI systems weight recency heavily. Show when content was last updated
  • Gating all content — AI can't access gated content. Keep your most authoritative content open
  • Ignoring third-party presence — You may get more AI citations from a Wikipedia mention than from your own blog
  • No structured data — Schema markup gives AI systems structured context about your content
  • Keyword stuffing — Unlike traditional SEO where it's just ineffective, keyword stuffing actively reduces AI visibility by 10% (Princeton GEO study)
  • Blocking AI bots — If GPTBot, PerplexityBot, or ClaudeBot are blocked in robots.txt, those platforms can't cite you
  • Generic content without data — "We're the best" won't get cited. "Our customers see 3x improvement in [metric]" will
  • Forgetting to monitor — You can't improve what you don't measure. Check AI visibility monthly at minimum

Related Skills

  • @00-andruia-consultant
    - Use when the work is better handled by that native specialization after this imported skill establishes context.
  • @10-andruia-skill-smith
    - Use when the work is better handled by that native specialization after this imported skill establishes context.
  • @20-andruia-niche-intelligence
    - Use when the work is better handled by that native specialization after this imported skill establishes context.
  • @3d-web-experience
    - Use when the work is better handled by that native specialization after this imported skill establishes context.

Additional Resources

Use this support matrix and the linked files below as the operator packet for this imported skill. They should reflect real copied source material, not generic scaffolding.

| Resource family | What it gives the reviewer | Example path |
| --- | --- | --- |
| references | copied reference notes, guides, or background material from upstream | references/content-patterns.md |
| examples | worked examples or reusable prompts copied from upstream | examples/n/a |
| scripts | upstream helper scripts that change execution or validation | scripts/n/a |
| agents | routing or delegation notes that are genuinely part of the imported package | agents/n/a |
| assets | supporting assets or schemas copied from the source package | assets/n/a |

Imported Reference Notes

Imported: How AI Search Works

The AI Search Landscape

| Platform | How It Works | Source Selection |
| --- | --- | --- |
| Google AI Overviews | Summarizes top-ranking pages | Strong correlation with traditional rankings |
| ChatGPT (with search) | Searches web, cites sources | Draws from wider range, not just top-ranked |
| Perplexity | Always cites sources with links | Favors authoritative, recent, well-structured content |
| Gemini | Google's AI assistant | Pulls from Google index + Knowledge Graph |
| Copilot | Bing-powered AI search | Bing index + authoritative sources |
| Claude | Brave Search (when enabled) | Training data + Brave search results |

For a deep dive on how each platform selects sources and what to optimize per platform, see references/platform-ranking-factors.md.

Key Difference from Traditional SEO

Traditional SEO gets you ranked. AI SEO gets you cited.

In traditional search, you need to rank on page 1. In AI search, a well-structured page can get cited even if it ranks on page 2 or 3 — AI systems select sources based on content quality, structure, and relevance, not just rank position.

Critical stats:

  • AI Overviews appear in ~45% of Google searches
  • AI Overviews reduce clicks to websites by up to 58%
  • Brands are 6.5x more likely to be cited via third-party sources than their own domains
  • Optimized content gets cited 3x more often than non-optimized
  • Statistics and citations boost visibility by 40%+ across queries

Imported: AI Visibility Audit

Before optimizing, assess your current AI search presence.

Step 1: Check AI Answers for Your Key Queries

Test 10-20 of your most important queries across platforms:

| Query | Google AI Overview | ChatGPT | Perplexity | You Cited? | Competitors Cited? |
| --- | --- | --- | --- | --- | --- |
| [query 1] | Yes/No | Yes/No | Yes/No | Yes/No | [who] |
| [query 2] | Yes/No | Yes/No | Yes/No | Yes/No | [who] |

Query types to test:

  • "What is [your product category]?"
  • "Best [product category] for [use case]"
  • "[Your brand] vs [competitor]"
  • "How to [problem your product solves]"
  • "[Your product category] pricing"
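The audit matrix above can be scaffolded with a short script before the manual checks begin. This is a minimal sketch; the queries and output file name are placeholders, not part of the imported workflow:

```python
import csv

# Hypothetical example queries; substitute your own top 10-20.
queries = ["what is acme analytics", "best analytics tool for startups"]
platforms = ["Google AI Overview", "ChatGPT", "Perplexity"]

with open("ai_visibility_audit.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Query", *platforms, "You Cited?", "Competitors Cited?"])
    for q in queries:
        # Fill in Yes/No and competitor names after checking each platform manually.
        writer.writerow([q, "", "", "", "", ""])
```

The empty cells are filled in by hand during the monthly review; the script only keeps the column order consistent between runs.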

Step 2: Analyze Citation Patterns

When your competitors get cited and you don't, examine:

  • Content structure — Is their content more extractable?
  • Authority signals — Do they have more citations, stats, expert quotes?
  • Freshness — Is their content more recently updated?
  • Schema markup — Do they have structured data you're missing?
  • Third-party presence — Are they cited via Wikipedia, Reddit, review sites?

Step 3: Content Extractability Check

For each priority page, verify:

| Check | Pass/Fail |
| --- | --- |
| Clear definition in first paragraph? | |
| Self-contained answer blocks (work without surrounding context)? | |
| Statistics with sources cited? | |
| Comparison tables for "[X] vs [Y]" queries? | |
| FAQ section with natural-language questions? | |
| Schema markup (FAQ, HowTo, Article, Product)? | |
| Expert attribution (author name, credentials)? | |
| Recently updated (within 6 months)? | |
| Heading structure matches query patterns? | |
| AI bots allowed in robots.txt? | |

Step 4: AI Bot Access Check

Verify your robots.txt allows AI crawlers. Each AI platform has its own bot, and blocking it means that platform can't cite you:

  • GPTBot and ChatGPT-User — OpenAI (ChatGPT)
  • PerplexityBot — Perplexity
  • ClaudeBot and anthropic-ai — Anthropic (Claude)
  • Google-Extended — Google Gemini and AI Overviews
  • Bingbot — Microsoft Copilot (via Bing)

Check your robots.txt for `Disallow` rules targeting any of these. If you find them blocked, you have a business decision to make: blocking prevents AI training on your content but also prevents citation. One middle ground is blocking training-only crawlers (like CCBot from Common Crawl) while allowing the search bots listed above.

See references/platform-ranking-factors.md for the full robots.txt configuration.
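One way to spot-check bot access locally is Python's standard-library robots.txt parser. This is a sketch: the robots.txt content below is an illustrative stand-in for your own file, following the middle-ground pattern of blocking a training-only crawler while allowing search bots:

```python
from urllib.robotparser import RobotFileParser

# Illustrative robots.txt: blocks a training-only crawler, allows everything else.
robots_txt = """\
User-agent: CCBot
Disallow: /

User-agent: *
Allow: /
"""

AI_BOTS = ["GPTBot", "ChatGPT-User", "PerplexityBot", "ClaudeBot",
           "Google-Extended", "Bingbot"]

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

for bot in AI_BOTS:
    status = "allowed" if parser.can_fetch(bot, "/") else "BLOCKED"
    print(f"{bot}: {status}")
```

In practice you would fetch your live robots.txt (e.g. `parser.set_url(...)` plus `parser.read()`) rather than inline it; the inline string just keeps the sketch self-contained.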


Imported: Optimization Strategy

The Three Pillars

1. Structure (make it extractable)
2. Authority (make it citable)
3. Presence (be where AI looks)

Pillar 1: Structure — Make Content Extractable

AI systems extract passages, not pages. Every key claim should work as a standalone statement.

Content block patterns:

  • Definition blocks for "What is X?" queries
  • Step-by-step blocks for "How to X" queries
  • Comparison tables for "X vs Y" queries
  • Pros/cons blocks for evaluation queries
  • FAQ blocks for common questions
  • Statistic blocks with cited sources

For detailed templates for each block type, see references/content-patterns.md.

Structural rules:

  • Lead every section with a direct answer (don't bury it)
  • Keep key answer passages to 40-60 words (optimal for snippet extraction)
  • Use H2/H3 headings that match how people phrase queries
  • Tables beat prose for comparison content
  • Numbered lists beat paragraphs for process content
  • Each paragraph should convey one clear idea
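The 40-60 word guidance above is easy to enforce mechanically. A minimal sketch, where the thresholds mirror the rule of thumb rather than any platform requirement:

```python
def snippet_length_ok(passage: str, low: int = 40, high: int = 60) -> bool:
    """Return True if the passage falls in the 40-60 word snippet sweet spot."""
    return low <= len(passage.split()) <= high

# Stand-in 50-word answer block for illustration.
answer = " ".join(["word"] * 50)
print(snippet_length_ok(answer))  # True
```

Running key answer passages through a check like this during editing catches buried or bloated answers before publication.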

Pillar 2: Authority — Make Content Citable

AI systems prefer sources they can trust. Build citation-worthiness.

The Princeton GEO research (KDD 2024, studied across Perplexity.ai) ranked 9 optimization methods:

| Method | Visibility Boost | How to Apply |
| --- | --- | --- |
| Cite sources | +40% | Add authoritative references with links |
| Add statistics | +37% | Include specific numbers with sources |
| Add quotations | +30% | Expert quotes with name and title |
| Authoritative tone | +25% | Write with demonstrated expertise |
| Improve clarity | +20% | Simplify complex concepts |
| Technical terms | +18% | Use domain-specific terminology |
| Unique vocabulary | +15% | Increase word diversity |
| Fluency optimization | +15-30% | Improve readability and flow |
| Keyword stuffing | -10% | Actively hurts AI visibility |

Best combination: Fluency + Statistics = maximum boost. Low-ranking sites benefit even more — up to 115% visibility increase with citations.

Statistics and data (+37-40% citation boost)

  • Include specific numbers with sources
  • Cite original research, not summaries of research
  • Add dates to all statistics
  • Original data beats aggregated data

Expert attribution (+25-30% citation boost)

  • Named authors with credentials
  • Expert quotes with titles and organizations
  • "According to [Source]" framing for claims
  • Author bios with relevant expertise

Freshness signals

  • "Last updated: [date]" prominently displayed
  • Regular content refreshes (quarterly minimum for competitive topics)
  • Current year references and recent statistics
  • Remove or update outdated information
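The freshness signals above pair naturally with the six-month window used in the extractability checklist. A small sketch of flagging stale pages, where the 183-day cutoff is an assumption standing in for "roughly six months":

```python
from datetime import date, timedelta

def is_fresh(last_updated: date, today: date, max_age_days: int = 183) -> bool:
    """Treat content as fresh if updated within roughly six months."""
    return (today - last_updated) <= timedelta(days=max_age_days)

# About 4.5 months old relative to the check date, so still fresh.
print(is_fresh(date(2024, 1, 10), today=date(2024, 6, 1)))  # True
```

Run a check like this over your content inventory's last-updated dates to build the refresh queue for the quarterly cycle.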

E-E-A-T alignment

  • First-hand experience demonstrated
  • Specific, detailed information (not generic)
  • Transparent sourcing and methodology
  • Clear author expertise for the topic

Pillar 3: Presence — Be Where AI Looks

AI systems don't just cite your website — they cite where you appear.

Third-party sources matter more than your own site:

  • Wikipedia mentions (7.8% of all ChatGPT citations)
  • Reddit discussions (1.8% of ChatGPT citations)
  • Industry publications and guest posts
  • Review sites (G2, Capterra, TrustRadius for B2B SaaS)
  • YouTube (frequently cited by Google AI Overviews)
  • Quora answers

Actions:

  • Ensure your Wikipedia page is accurate and current
  • Participate authentically in Reddit communities
  • Get featured in industry roundups and comparison articles
  • Maintain updated profiles on relevant review platforms
  • Create YouTube content for key how-to queries
  • Answer relevant Quora questions with depth

Schema Markup for AI

Structured data helps AI systems understand your content. Key schemas:

| Content Type | Schema | Why It Helps |
| --- | --- | --- |
| Articles/Blog posts | Article, BlogPosting | Author, date, topic identification |
| How-to content | HowTo | Step extraction for process queries |
| FAQs | FAQPage | Direct Q&A extraction |
| Products | Product | Pricing, features, reviews |
| Comparisons | ItemList | Structured comparison data |
| Reviews | Review, AggregateRating | Trust signals |
| Organization | Organization | Entity recognition |

Content with proper schema shows 30-40% higher AI visibility. For implementation, use the schema-markup skill.
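As a sketch of the FAQPage pattern, the snippet below emits JSON-LD following schema.org's documented shape; the question and answer text are placeholders, not copied from the upstream skill:

```python
import json

faq_schema = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "What is AI SEO?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": "AI SEO is the practice of structuring content so AI "
                        "search systems can extract and cite it.",
            },
        }
    ],
}

# Embed the output inside a <script type="application/ld+json"> tag on the page.
print(json.dumps(faq_schema, indent=2))
```

Each question on the page gets its own entry in `mainEntity`; the answer text should match the visible on-page answer.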


Imported: Content Types That Get Cited Most

Not all content is equally citable. Prioritize these formats:

| Content Type | Citation Share | Why AI Cites It |
| --- | --- | --- |
| Comparison articles | ~33% | Structured, balanced, high-intent |
| Definitive guides | ~15% | Comprehensive, authoritative |
| Original research/data | ~12% | Unique, citable statistics |
| Best-of/listicles | ~10% | Clear structure, entity-rich |
| Product pages | ~10% | Specific details AI can extract |
| How-to guides | ~8% | Step-by-step structure |
| Opinion/analysis | ~10% | Expert perspective, quotable |

Underperformers for AI citation:

  • Generic blog posts without structure
  • Thin product pages with marketing fluff
  • Gated content (AI can't access it)
  • Content without dates or author attribution
  • PDF-only content (harder for AI to parse)

Imported: Monitoring AI Visibility

What to Track

| Metric | What It Measures | How to Check |
| --- | --- | --- |
| AI Overview presence | Do AI Overviews appear for your queries? | Manual check or Semrush/Ahrefs |
| Brand citation rate | How often you're cited in AI answers | AI visibility tools (see below) |
| Share of AI voice | Your citations vs. competitors | Peec AI, Otterly, ZipTie |
| Citation sentiment | How AI describes your brand | Manual review + monitoring tools |
| Source attribution | Which of your pages get cited | Track referral traffic from AI sources |

AI Visibility Monitoring Tools

| Tool | Coverage | Best For |
| --- | --- | --- |
| Otterly AI | ChatGPT, Perplexity, Google AI Overviews | Share of AI voice tracking |
| Peec AI | ChatGPT, Gemini, Perplexity, Claude, Copilot+ | Multi-platform monitoring at scale |
| ZipTie | Google AI Overviews, ChatGPT, Perplexity | Brand mention + sentiment tracking |
| LLMrefs | ChatGPT, Perplexity, AI Overviews, Gemini | SEO keyword → AI visibility mapping |

DIY Monitoring (No Tools)

Monthly manual check:

  1. Pick your top 20 queries
  2. Run each through ChatGPT, Perplexity, and Google
  3. Record: Are you cited? Who is? What page?
  4. Log in a spreadsheet, track month-over-month
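The monthly log above can live in a plain CSV rather than a spreadsheet tool. A minimal sketch of appending one check result; the file name, field order, and example values are assumptions:

```python
import csv
from datetime import date

def log_check(path, query, platform, cited, cited_instead=""):
    """Append one row of the monthly AI-visibility check to a CSV log."""
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow(
            [date.today().isoformat(), query, platform, cited, cited_instead]
        )

# Hypothetical example entry from one manual check.
log_check("ai_visibility_log.csv", "best analytics tool", "Perplexity",
          "No", "competitor.com")
```

Because each row is timestamped, month-over-month movement falls out of a simple group-by on the date column when you review the log.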

Imported: AI SEO for Different Content Types

SaaS Product Pages

Goal: Get cited in "What is [category]?" and "Best [category]" queries.

Optimize:

  • Clear product description in first paragraph (what it does, who it's for)
  • Feature comparison tables (you vs. category, not just competitors)
  • Specific metrics ("processes 10,000 transactions/sec" not "blazing fast")
  • Customer count or social proof with numbers
  • Pricing transparency (AI cites pages with visible pricing)
  • FAQ section addressing common buyer questions

Blog Content

Goal: Get cited as an authoritative source on topics in your space.

Optimize:

  • One clear target query per post (match heading to query)
  • Definition in first paragraph for "What is" queries
  • Original data, research, or expert quotes
  • "Last updated" date visible
  • Author bio with relevant credentials
  • Internal links to related product/feature pages

Comparison/Alternative Pages

Goal: Get cited in "[X] vs [Y]" and "Best [X] alternatives" queries.

Optimize:

  • Structured comparison tables (not just prose)
  • Fair and balanced (AI penalizes obviously biased comparisons)
  • Specific criteria with ratings or scores
  • Updated pricing and feature data
  • Cite the competitor-alternatives skill for building these pages

Documentation / Help Content

Goal: Get cited in "How to [X] with [your product]" queries.

Optimize:

  • Step-by-step format with numbered lists
  • Code examples where relevant
  • HowTo schema markup
  • Screenshots with descriptive alt text
  • Clear prerequisites and expected outcomes

Imported: Tool Integrations

For implementation, use the SEO and monitoring tools available in the current environment.

| Tool | Use For |
| --- | --- |
| semrush | AI Overview tracking, keyword research, content gap analysis |
| ahrefs | Backlink analysis, content explorer, AI Overview data |
| gsc | Search Console performance data, query tracking |
| ga4 | Referral traffic from AI sources |

Imported: Task-Specific Questions

  1. What are your top 10-20 most important queries?
  2. Have you checked if AI answers exist for those queries today?
  3. Do you have structured data (schema markup) on your site?
  4. What content types do you publish? (Blog, docs, comparisons, etc.)
  5. Are competitors being cited by AI where you're not?
  6. Do you have a Wikipedia page or presence on review sites?

Imported: Limitations

  • Use this skill only when the task clearly matches the scope described above.
  • Do not treat the output as a substitute for environment-specific validation, testing, or expert review.
  • Stop and ask for clarification if required inputs, permissions, safety boundaries, or success criteria are missing.