# claude-seo-skills / seo-gsc-indexing

```bash
git clone https://github.com/lionkiii/claude-seo-skills
```

To install just this skill:

```bash
T=$(mktemp -d) && git clone --depth=1 https://github.com/lionkiii/claude-seo-skills "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/seo-gsc-indexing" ~/.claude/skills/lionkiii-claude-seo-skills-seo-gsc-indexing && rm -rf "$T"
```
`skills/seo-gsc-indexing/SKILL.md`

# GSC Index Issues — Indexing Coverage Report

@skills/seo/references/mcp-degradation.md @skills/seo/references/gsc-api-reference.md
Checks indexing status for up to 20 pages using the GSC URL Inspection API. **IMPORTANT:** `inspect_url` is rate-limited — this skill caps all calls at 20 to avoid rate-limit hangs. Prioritizes pages most likely to have indexing issues.
## MCP Check

Before calling any GSC tool, verify the MCP is connected:

- Use ToolSearch with query `+google-search-console`
- If tools are returned — note the actual tool name prefix and proceed to Inputs
- If no tools are returned — display the GSC MCP error template from `references/mcp-degradation.md` and stop:
```
## Google Search Console MCP Not Available

The `/seo gsc index-issues` command requires the GSC MCP, which is not currently connected.

**What you can do:**
- Use `/seo technical <url>` for crawlability and indexability analysis (no live data)
- Use `/seo audit <url>` for a full static SEO audit

**To connect GSC MCP:**
- Install and configure a Google Search Console MCP server (see README for setup)
- Add it to ~/.claude/mcp.json at user scope (NOT project scope)
- Verify GSC property access before running commands (domain vs URL prefix format)
- See references/gsc-api-reference.md for property format details
```
## Inputs

- `site`: The GSC property URL. Accept both formats:
  - Domain property: `sc-domain:example.com`
  - URL prefix: `https://example.com` or `https://www.example.com`
  - If the user provides a bare domain (no prefix), call `list_sites` to identify the correct property format registered in GSC (see the sketch after this list).
- `urls` (optional): Specific URLs the user wants to inspect. If provided, inspect these directly (still capped at 20 total calls).
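As a rough illustration of that matching step, here is a minimal sketch, assuming the `list_sites` output has been saved to a hypothetical `sites.json` containing a flat JSON array of registered property strings (e.g. `["sc-domain:example.com", "https://www.example.com/"]`):

```bash
# Sketch (hypothetical file and shape): find the registered property
# that corresponds to a bare domain the user typed.
domain="example.com"
jq -r --arg d "$domain" '
  .[] | select(
    . == "sc-domain:" + $d
    or . == ("https://" + $d + "/")
    or . == ("https://www." + $d + "/")
  )' sites.json
```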
## Date Calculation

Use Bash to calculate dates for the initial search analytics query:

```bash
endDate=$(date -v-3d +%Y-%m-%d)
startDate=$(date -v-31d +%Y-%m-%d)
echo "endDate: $endDate | startDate: $startDate"
```
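Note that the `-v` offset flag is BSD/macOS `date` syntax. On Linux (GNU coreutils), the equivalent would be:

```bash
# GNU date equivalent of the BSD commands above.
endDate=$(date -d '3 days ago' +%Y-%m-%d)
startDate=$(date -d '31 days ago' +%Y-%m-%d)
echo "endDate: $endDate | startDate: $startDate"
```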
## Execution

**RATE LIMIT RULE:** Never call `inspect_url` more than 20 times in a single run.

### Path A — User provided specific URLs

If the user provides specific URLs to inspect:

- Accept up to 20 URLs (if the user provides more, inspect the first 20 and note the cap — see the sketch after this list)
- For each URL, call `inspect_url` with:
  - `siteUrl`: the site property
  - `inspectionUrl`: the specific URL
- Collect all results, proceed to Output
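A minimal sketch of enforcing the cap, assuming the user's URLs have been collected into a hypothetical newline-separated `urls.txt`:

```bash
# Sketch: keep at most 20 URLs and note the cap if more were given.
total=$(wc -l < urls.txt)
head -n 20 urls.txt > to_inspect.txt
if [ "$total" -gt 20 ]; then
  echo "Note: inspecting the first 20 of $total URLs (rate-limit cap)."
fi
```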
### Path B — No specific URLs (general scan)

1. **Get pages GSC knows about:** Call `query_search_analytics` with:
   - `siteUrl`: the site property
   - `startDate`: calculated startDate
   - `endDate`: calculated endDate
   - `dimensions`: `["page"]`
   - `rowLimit`: 100
2. **Select pages to inspect (cap at 20):** Strategy — prioritize pages most likely to have indexing issues (see the sketch after this list):
   - Sort pages by impressions ascending (low impressions = potential indexing issues)
   - Take the bottom 20 pages by impressions
   - These are the pages GSC sees but that get little visibility
3. **Inspect each selected page:** For each of the up to 20 selected pages, call `inspect_url` with:
   - `siteUrl`: the site property
   - `inspectionUrl`: the page URL

   Stop after 20 calls regardless of remaining pages.
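As a sketch of the selection step, assuming the `query_search_analytics` response rows have been saved to a hypothetical `rows.json` (an array of rows, each with a `keys` array holding the page URL and an `impressions` count):

```bash
# Sketch: pick the 20 lowest-impression pages from the analytics rows.
jq -r 'sort_by(.impressions) | .[:20] | .[].keys[0]' rows.json > to_inspect.txt
```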
**Post-processing:** Group results by coverage state (a sketch follows this list):

- Indexed (`coverageState` contains "Indexed" or `indexingState` = "INDEXED")
- Excluded (`coverageState` contains "Excluded", "Blocked", "Noindex", or "Redirect")
- Error (`coverageState` contains "Error" or "Not found")
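A minimal sketch of that grouping as a shell function (the `classify` helper is hypothetical; pattern order mirrors the rule order above):

```bash
# Sketch: bucket one inspection result by its coverageState string.
classify() {
  case "$1" in
    *Indexed*)                                  echo "indexed"  ;;
    *Excluded*|*Blocked*|*Noindex*|*Redirect*)  echo "excluded" ;;
    *Error*|*"Not found"*)                      echo "error"    ;;
    *)                                          echo "other"    ;;
  esac
}
classify "Indexed, not submitted in sitemap"   # -> indexed
classify "Blocked by robots.txt"               # -> excluded
```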
## Output Format

```
## GSC Index Issues: [site property]

**Pages inspected:** [count] of [total pages in GSC]
**Rate limit cap:** 20 inspect_url calls maximum per run

### Indexing Status Summary

| Coverage State | Count |
|----------------|-------|
| Indexed | [n] |
| Excluded | [n] |
| Error | [n] |

### Indexed Pages

| URL | Indexing State | Last Crawl Date |
|-----|----------------|-----------------|
| [url] | [state] | [date] |

### Excluded Pages

| URL | Coverage State | Robots.txt State | Last Crawl Date |
|-----|----------------|------------------|-----------------|
| [url] | [state] | [state] | [date] |

### Error Pages

| URL | Coverage State | Page Fetch State | Last Crawl Date |
|-----|----------------|------------------|-----------------|
| [url] | [state] | [state] | [date] |

### Recommendations

- [For each excluded/error page: specific action based on coverage state]
```
Recommendation logic by coverage state (a lookup sketch follows this list):

- "Blocked by robots.txt": Check that robots.txt is not accidentally blocking this URL
- "Excluded by 'noindex' tag": Remove the noindex meta tag if the page should be indexed
- "Redirect": Verify the redirect target is indexed
- "Crawl anomaly" or fetch error: Check server response codes; the page may be returning 5xx
- "Duplicate without user-selected canonical": Add a canonical tag pointing to the preferred URL
- "Not found (404)": Fix the broken URL or add a redirect from the old URL
If the cap is reached, note: "Showing [20] of [total] pages. Run again with specific URLs to inspect additional pages."