# claude-github

```bash
git clone https://github.com/avalonreset/claude-github
```

Or install the skill directly into `~/.claude/skills` with the one-liner:

```bash
T=$(mktemp -d) && git clone --depth=1 https://github.com/avalonreset/claude-github "$T" && mkdir -p ~/.claude/skills && cp -r "$T/github" ~/.claude/skills/avalonreset-claude-github-github && rm -rf "$T"
```

## github/SKILL.md: GitHub -- Repository Optimization Suite
Comprehensive GitHub optimization across SEO, legal, community, and discoverability. Orchestrates 8 specialized sub-skills and 6 subagents. Data-first: every recommendation traces back to a data source.
## Quick Reference

| Command | What it does |
|---|---|
| `/github audit` | Full repo health audit with 0-100 scoring |
| `/github audit <owner/repo>` | Audit a specific remote repo |
| `/github empire` | Audit entire portfolio (empire-level) |
| `/github readme` | Generate or optimize README |
| `/github legal` | License, SECURITY.md, CITATION.cff, fork compliance |
| `/github meta` | Description, topics, settings, social preview |
| `/github seo` | Keyword research and content optimization strategy |
| `/github community` | Templates, CONTRIBUTING, CODE_OF_CONDUCT, devcontainer |
| `/github release` | CHANGELOG, badges, versioning, release strategy |
| `/github empire` | Multi-repo portfolio strategy, profile README |
| `dataforseo-mcp-server` | Live keyword/SERP data (requires DataForSEO account) |
## Prerequisites

Before any skill can operate, check these in order (a combined check is sketched after this list):

- Git installed? Run `git --version`. If missing, guide the user to install it.
- GitHub CLI installed? Run `gh --version`. If missing: `winget install GitHub.cli` (Windows) or `brew install gh` (macOS).
- Authenticated? Run `gh auth status`. If not logged in, guide through `gh auth login`.
- In a git repo? Run `git rev-parse --is-inside-work-tree`. If not, ask if they want to create one or point to an existing repo.
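A minimal combined check, sketched for a POSIX shell (the echo messages are illustrative, not fixed output):

```bash
# Run the four prerequisite checks in order; stop at the first failure.
command -v git >/dev/null 2>&1 || { echo "git not installed"; exit 1; }
command -v gh >/dev/null 2>&1 || { echo "gh not installed -- winget install GitHub.cli / brew install gh"; exit 1; }
gh auth status >/dev/null 2>&1 || { echo "not authenticated -- run: gh auth login"; exit 1; }
git rev-parse --is-inside-work-tree >/dev/null 2>&1 || { echo "not inside a git repo"; exit 1; }
echo "all prerequisites met"
```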
If a user has no GitHub account, direct them to https://github.com/signup and wait for them to complete signup before proceeding with `gh auth login`.
## API Credentials

Two recommended services power SEO research and banner generation. The install script walks users through setting both up during installation. If they skipped setup, Step 0 below will catch it and guide them before any skill runs.

### 1. DataForSEO (MCP server -- NOT .env)

DataForSEO provides live keyword and SERP data. It runs as an MCP server (`dataforseo-mcp-server`) configured via `~/.claude/settings.json`. The install script handles this automatically. When configured, tools like `dataforseo_labs_google_keyword_suggestions` are available directly in conversation. If the MCP server is not configured, SEO skills fall back to codebase analysis.
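For orientation only, an MCP server entry in `~/.claude/settings.json` generally looks like the sketch below. Every value here -- the command, args, and credential field names -- is a placeholder; the install script at `extensions/dataforseo/` writes the real entry, and the `dataforseo-mcp-server` documentation is authoritative. (Comments are annotation only; strip them in real JSON.)

```jsonc
{
  "mcpServers": {
    "dataforseo": {
      // Placeholder launch command -- the install script knows the real one.
      "command": "npx",
      "args": ["dataforseo-mcp-server"],
      // Placeholder credential field names -- check the server's docs.
      "env": {
        "DATAFORSEO_LOGIN": "<your-login>",
        "DATAFORSEO_PASSWORD": "<your-password>"
      }
    }
  }
}
```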
### 2. KIE.ai (REST API -- uses .env)

KIE.ai generates banner images for READMEs. It requires an API key in a `.env` file.

Standard `.env` locations, checked in order:

- Current working directory: `./.env`
- Skill root: `~/.claude/skills/github/.env`
- User home: `~/.env`

Loading credentials: before banner generation, load the `.env`:

```bash
# Use the first .env found, searching the standard locations in order.
for envfile in ./.env ~/.claude/skills/github/.env ~/.env; do
  if [ -f "$envfile" ]; then
    export $(grep -v '^#' "$envfile" | xargs) 2>/dev/null
    break
  fi
done
```
| Key | Required By | Purpose |
|---|---|---|
| `KIE_API_KEY` | `readme` (banner generation) | KIE.ai image generation |
| DataForSEO credentials | SEO data pass, `seo`, `meta`, `readme`, `empire` | Configured via MCP server, not .env |
## Shared Data Cache

Skills persist their outputs to `.github-audit/` so other skills can reuse data without re-gathering. The orchestrator writes `repo-context.json` after baseline gathering. Each sub-skill reads cached data before gathering and writes its own cache file after executing.
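The resulting layout, sketched (only the two named files are fixed; per-sub-skill file names come from the schema reference below):

```
.github-audit/
├── repo-context.json    # baseline repo data, written by the orchestrator (Step 2)
├── seo-data.json        # keyword opportunities and placements (Step 3.5)
└── <sub-skill>.json     # each sub-skill's own cache file
```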
Reference: read `~/.claude/skills/github/references/shared-data-cache.md` for JSON schemas, the dependency map, and freshness rules.

Orchestrator responsibilities: cache writes are embedded directly in Step 2 and Step 3.5 below -- look for the CACHE callouts in each step.
## Orchestration Logic

### Step 0: Setup Check (runs ONCE per session, before anything else)

Before routing to any sub-skill, check whether the user has the recommended services configured. This runs the FIRST time any `/github` command is used in a session. After showing the setup status once, do not repeat it.
Check 1 -- DataForSEO MCP: use ToolSearch to look for `dataforseo_labs_google_keyword_suggestions`.
Check 2 -- KIE.ai API key:

```bash
# Use the first .env that defines a non-empty KIE_API_KEY.
for envfile in ./.env ~/.claude/skills/github/.env ~/.env; do
  if [ -f "$envfile" ] && grep -q 'KIE_API_KEY=.' "$envfile" 2>/dev/null; then
    echo "KIE_CONFIGURED=true"
    break
  fi
done
```

(The pattern `KIE_API_KEY=.` requires at least one character after the `=`, so the key must be not just present but have a non-empty value.)
If BOTH are configured, show a one-liner and move on:

```
Services: DataForSEO [active] | KIE.ai [active] -- full power mode.
```

If one or both are missing, STOP and show the setup guide:

```markdown
## Setup Check

This suite works best with two recommended services. Here's your status:

- DataForSEO [not configured] -- powers live keyword research, SERP rankings, and AI visibility
- KIE.ai [not configured] -- powers AI-generated banner images for READMEs

### How to set up DataForSEO (5 minutes)

1. Create a free account at https://dataforseo.com (the free tier includes enough credits for hundreds of keyword analyses)
2. Go to https://app.dataforseo.com/api-access to find your login and password
3. From the claude-github directory you cloned during install, run:
   - macOS/Linux: bash extensions/dataforseo/install.sh
   - Windows: powershell -File extensions\dataforseo\install.ps1
   The installer will prompt for your credentials and configure the MCP server.

### How to set up KIE.ai (2 minutes)

1. Go to https://kie.ai/api-key and create a free account
2. Copy your API key
3. Paste it into ~/.claude/skills/github/.env:
   KIE_API_KEY=your_key_here

Want to set these up now, or continue without them?
(Skills still work without these services, but SEO recommendations will be less precise and banner generation won't be available.)
```
Wait for the user to respond before proceeding. If they say continue/skip/later, proceed normally. If they want to set things up, guide them through it.
### Step 1: Capture User Intent
On first interaction, ask conversationally what they want to accomplish. Map to one of:
| Intent | Optimization Priorities |
|---|---|
| Open Source Community | README (welcoming), community files (critical), topics (broad), SEO (discovery) |
| Professional Portfolio | README (impressive), badges (social proof), branding (consistent), pinned repos |
| Business / Brand | SEO (aggressive keywords), description (value prop), Pages, cross-linking |
| Internal to Public | Legal (thorough), SECURITY.md (critical), README (docs-heavy), CITATION.cff |
| Academic / Research | CITATION.cff (required), README (methodology), license (permissive), releases |
| Hobby / Learning | README (authentic), legal (simple MIT), community (lightweight), SEO (lower priority) |
If the user skips intent, fall back to repo-type defaults.
### Step 2: Gather Baseline Data

For any repo, collect:

- Repo metadata:

  ```bash
  gh repo view --json name,description,url,homepageUrl,repositoryTopics,visibility,defaultBranchRef,licenseInfo,stargazerCount,forkCount,watchers,primaryLanguage,createdAt,updatedAt
  ```

- File existence (see the sketch after this list): README.md, LICENSE, CONTRIBUTING.md, SECURITY.md, CITATION.cff, CODE_OF_CONDUCT.md, .github/ISSUE_TEMPLATE/, .github/PULL_REQUEST_TEMPLATE.md, .github/FUNDING.yml, .gitattributes, CHANGELOG.md
- Codebase scan for repo-type detection
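A minimal file-existence pass over the paths named above, as a sketch:

```bash
# Report presence of each baseline file; ISSUE_TEMPLATE is a directory.
for f in README.md LICENSE CONTRIBUTING.md SECURITY.md CITATION.cff \
         CODE_OF_CONDUCT.md CHANGELOG.md .gitattributes \
         .github/PULL_REQUEST_TEMPLATE.md .github/FUNDING.yml; do
  [ -e "$f" ] && echo "present: $f" || echo "missing: $f"
done
[ -d .github/ISSUE_TEMPLATE ] && echo "present: .github/ISSUE_TEMPLATE/" || echo "missing: .github/ISSUE_TEMPLATE/"
```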
CACHE: after gathering, write `.github-audit/repo-context.json` with all baseline data. Create the directory and gitignore entry first:

```bash
mkdir -p .github-audit
grep -qxF '.github-audit/' .gitignore 2>/dev/null || echo '.github-audit/' >> .gitignore
```
### Step 3: Detect Repo Type
| Type | Signals |
|---|---|
| Library/Package | package.json, setup.py, Cargo.toml, go.mod, pyproject.toml |
| CLI Tool | bin/, "usage:", commander, yargs, clap, argparse |
| Framework | middleware, routing, plugins, "getting started" |
| API/Service | /api, swagger, OpenAPI, endpoints |
| Application | docker-compose, Dockerfile, deployment configs |
| Skill/Plugin | SKILL.md, AGENTS.md, extension pattern |
| Documentation | mkdocs.yml, docusaurus.config.js, mostly .md files |
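A hedged sketch of how the signals might be checked (signals from the table above; the tie-breaking order is an assumption -- more specific types first):

```bash
# Check the most specific signals first; first match wins.
if [ -f SKILL.md ] || [ -f AGENTS.md ]; then
  echo "Skill/Plugin"
elif [ -f mkdocs.yml ] || [ -f docusaurus.config.js ]; then
  echo "Documentation"
elif [ -f docker-compose.yml ] || [ -f Dockerfile ]; then
  echo "Application"
elif [ -f package.json ] || [ -f setup.py ] || [ -f Cargo.toml ] || [ -f go.mod ] || [ -f pyproject.toml ]; then
  echo "Library/Package"
elif [ -d bin ]; then
  echo "CLI Tool"
else
  echo "Framework/API/unknown -- scan code for routing, middleware, or OpenAPI signals"
fi
```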
### Step 3.5: SEO Data Pass (DataForSEO Integration)
This step runs automatically when the DataForSEO MCP server is available. It discovers keyword opportunities that all downstream skills consume. If the MCP server is not configured, this step is skipped and skills use fallback methods.
How to check availability: use ToolSearch to look for `dataforseo_labs_google_keyword_suggestions`. If it exists, the MCP server is running. Do NOT waste an API call just to test availability.
When to run: before routing to ANY content-producing sub-skill (readme, meta, seo, empire). Skip it for legal, community, and release -- they don't need keyword data.
Important context: GitHub repos are NOT traditional websites. We don't scan a domain -- we discover what keywords people Google where a well-optimized GitHub repo could rank, then place those keywords in the README, description, and topics. See the github-seo skill for the full Keyword Opportunity Framework.
Cost and auto-run behavior:

- Each `keyword_suggestions` call costs ~3-5 cents
- Each `serp_organic_live_advanced` call costs ~5-8 cents
- Typical per-repo cost: 2 keyword calls + 1 SERP = ~10-15 cents
For single repos and small portfolios (1-5 repos): Just run it automatically. The cost is under 75 cents -- not worth interrupting the flow to ask permission. Show a brief note in the output: "Running DataForSEO keyword analysis (~10 cents)..."
For large portfolios (6+ repos): show the cost estimate and ask before proceeding, because 10+ repos can cost over a dollar:

```
DataForSEO SEO Pass: [N] repo(s) x ~15 cents = ~[estimated total]
Proceed? [y/n]
```
Actually STOP and wait for a response. Do NOT continue working while waiting.
If DataForSEO is NOT configured: Do not silently skip it. Encourage setup: "DataForSEO is not configured. SEO data will use GitHub search fallback only. For live keyword research, set it up in 5 minutes: https://dataforseo.com Then run the install script in extensions/dataforseo/."
`keyword_suggestions` returns volume + difficulty + intent INLINE -- no separate volume or difficulty calls are needed.
```
1. Generate 2-3 seed phrases from: repo name + description + primary language

   SEED QUALITY RULES (critical -- bad seeds waste API calls):
   - Keep seeds SHORT: 2-3 words max. "seo audit tool" not "gemini cli seo audit tool"
   - Use CATEGORY-LEVEL terms, not project-specific jargon
   - Think: "what would a developer Google to find this kind of project?"
   - Good seeds: "python web framework", "open source seo tools", "terminal emulator"
   - Bad seeds: "gemini cli seo tools", "lightweight WSGI microframework server"

   SEED FALLBACK: If a seed returns zero results, it was too specific. Broaden it
   by removing words: "gemini cli seo" → "seo tools" → "open source seo tools".
   Budget: max 4 keyword_suggestions calls total. If all 4 return nothing usable,
   fall back to codebase + GitHub search analysis (no DataForSEO).

2. Call: dataforseo_labs_google_keyword_suggestions (once per seed)
   Params: { "keyword": "python web framework", "location_name": "United States",
             "language_code": "en", "limit": 30 }
   → Returns ~30 candidates, each with volume, difficulty, and intent already included
   → Filter: drop anything under 50/mo volume, flag difficulty under 40 as Sweet Spot
   → NOTE: parameter is "keyword" (singular string), NOT "seed_keywords" (array)
   → NOTE: use "location_name": "United States", NOT "location_code": 2840

3. Call: serp_organic_live_advanced (on the best Sweet Spot candidate)
   Params: { "keyword": "python web framework", "location_name": "United States",
             "language_code": "en", "device": "desktop", "depth": 20 }
   → THE CRITICAL CHECK: scan for "github.com" in results
   → If github.com in top 10: keyword is GOLD (GitHub Viability = 1.0)
   → If github.com in 11-20: possible (0.5)
   → If no github.com at all: SKIP this keyword, try the next candidate
```
Store the results as SEO context. Format:
```markdown
## SEO Data (from DataForSEO)

### Keyword Opportunities (by Opportunity Score)

| Keyword | Volume/mo | Difficulty | GitHub in SERP? | Category |
|---------|-----------|------------|-----------------|------------|
| [term]  | X         | Y          | Yes (#N)        | Sweet Spot |
| [term]  | X         | Y          | Yes (#N)        | Worth It   |

### Placements

- Primary keyword: [term] → H1, description, first paragraph
- Secondary keywords: [terms] → H2 headings
- Topic keywords: [terms] → GitHub topics

### AI Visibility

- ChatGPT mentions: [yes/no]
- Competitors mentioned instead: [list]

Data source: DataForSEO MCP (live, [date])
```
If DataForSEO is NOT available, note:

```
SEO Data: fallback mode (codebase + GitHub search analysis).
```

Skills will then use their own fallback methods (see the github-seo skill).
CACHE: after the SEO pass, write `.github-audit/seo-data.json` with keyword opportunities, placements, and AI visibility findings.
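A hypothetical shape for that file -- the authoritative schema lives in `references/shared-data-cache.md`; every field name below is illustrative. (Comments are annotation only; real JSON has none.)

```jsonc
{
  "source": "dataforseo-mcp",          // or a fallback marker when the MCP server is absent
  "gathered_at": "[date]",
  "keywords": [
    { "term": "[term]", "volume": 0, "difficulty": 0,
      "github_serp_position": 0, "category": "Sweet Spot" }
  ],
  "placements": { "primary": "[term]", "secondary": [], "topics": [] },
  "ai_visibility": { "chatgpt_mentions": false, "competitors": [] }
}
```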
Cost: ~10-15 cents per repo (2 keyword calls + 1 SERP check). Warn user before portfolio analysis (multiply by repo count).
### Step 4: Route to Sub-Skill
Pass intent + repo type + baseline data + SEO data (if available) to the appropriate sub-skill.
Default behavior for a bare `/github <owner/repo>` (no sub-command): when the user provides a repo without specifying a sub-skill, run Steps 1-3.5 (intent, baseline, repo type, SEO data), then present a Quick Health Summary:

```markdown
## Quick Health Summary: owner/repo

**Repo type:** [detected type] | **Intent:** [captured or assumed intent]
**Stars:** X | **License:** MIT | **Last release:** vX.X.X (date)

### Top 3 Recommended Actions (by impact)

1. [Highest impact action] → use `/github [sub-skill]`
2. [Second action] → use `/github [sub-skill]`
3. [Third action] → use `/github [sub-skill]`

### SEO Snapshot (if DataForSEO available)

Primary keyword opportunity: "[keyword]" (X/mo, difficulty Y, GitHub at #Z)

Run `/github audit` for a full 0-100 score, or pick an action above to start.
```
This gives the user immediate value and a clear next step, instead of just showing a menu.
For commands explicitly matching a sub-skill (e.g., `/github readme`), route directly to that sub-skill.
## The GARE Pattern
Every skill follows Gather, Analyze, Recommend, Execute:
- Gather -- Collect data before making decisions
- Analyze -- Compare current state vs. ideal for repo type + intent
- Recommend -- Present data-backed recommendations with reasoning
- Execute -- Apply changes with user approval
Every recommendation must cite its source:
- "Based on DataForSEO keyword volume..." (live data)
- "Based on analysis of your codebase..." (repo scan)
- "Based on your repo type (CLI tool)..." (detection)
- "Based on GitHub's indexing rules..." (reference file)
- "Based on audit findings..." (prior audit run)
- "Based on your stated intent..." (user intent)
## UX Principles (applies to ALL skills)
Hold the user's hand through manual steps. Many GitHub actions have no API (profile photo upload, social preview, repo settings). When a skill requires manual action, provide:
- Clickable file links for generated files: `file:///[absolute-path]/assets/banner.webp`
- Direct URLs to the exact settings page: `https://github.com/{owner}/{repo}/settings` -- always substitute the actual owner/repo; never leave placeholders when you know the values
- Step-by-step instructions, numbered and specific: "Go to X, click Y, upload Z"
- Verification links when possible: "Test your social preview at https://www.opengraph.xyz"
The goal is zero guesswork for the user. If they need to upload a file, show them exactly where the file is AND exactly where to upload it. If they need to change a setting, link them to the exact page. Make the manual parts as close to automated as possible.
Generated image link rule (banners, avatars, social previews): whenever ANY image is generated or referenced for manual upload, ALWAYS output TWO clickable links:

- Local: `file:///[absolute-path]/image.webp` (for immediate access)
- Remote: `https://raw.githubusercontent.com/{owner}/{repo}/main/{path}` (after push)

Plus the relevant settings/upload URL. Never just mention a relative path like `assets/banner.webp` without the clickable link. This applies to:

- README banners (github-readme skill)
- Social preview images (github-meta skill)
- Profile avatars (github-empire skill)
- Any other generated image that requires manual upload
Image format pipeline: all AI-generated images (banners, avatars) follow the same flow: request PNG from KIE.ai (lossless source), then convert to WebP (quality 80) for delivery. WebP is ~30% smaller than JPEG at equivalent quality, and GitHub renders it natively. See the Image Format Pipeline section of `references/banner-generation.md` for details.
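A minimal conversion sketch, assuming the `cwebp` tool from libwebp is on PATH (the file paths are illustrative):

```bash
# Convert the lossless PNG source to WebP at quality 80 for delivery.
cwebp -q 80 assets/banner.png -o assets/banner.webp
```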
## Reference Files

Load on-demand as needed -- do NOT load all at startup.

Path resolution: reference files are installed at `~/.claude/skills/github/references/`. When a sub-skill says "Read github/references/foo.md", use the Read tool with the full path: `~/.claude/skills/github/references/foo.md`

- `references/license-guide.md` -- License types, compatibility, fork obligations
- `references/readme-framework.md` -- README structure, SEO patterns, headings
- `references/github-seo-guide.md` -- GitHub ranking factors, Google indexing rules
- `references/community-files-guide.md` -- Standard files, best practices, priority by intent
- `references/community-templates.md` -- YAML issue forms, PR template, devcontainer, dependabot
- `references/banner-generation.md` -- KIE.ai Nano Banana 2 API, prompt engineering, defaults
- `references/releases-guide.md` -- Semver, changelog format, badge URLs
- `references/repo-type-templates.md` -- Per-type defaults for all repo types
- `references/shared-data-cache.md` -- Cross-skill data persistence schemas and rules
## Scoring Methodology

### GitHub Health Score (0-100)
| Category | Weight |
|---|---|
| README Quality | 25% |
| Metadata & Discovery | 20% |
| Legal Compliance | 15% |
| Community Health | 15% |
| Release & Maintenance | 15% |
| SEO & Discoverability | 10% |
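The weights sum to 100%, so the overall score is a plain weighted average of the six category scores. A sketch with made-up inputs:

```bash
# Hypothetical per-category scores (0-100 each), weighted per the table above.
readme=80; meta=60; legal=100; community=50; release=70; seo=40
total=$(( (readme*25 + meta*20 + legal*15 + community*15 + release*15 + seo*10) / 100 ))
echo "GitHub Health Score: $total/100"   # 69/100 for these inputs
```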
### Priority Levels
- Critical: Blocks discoverability or creates legal risk (immediate fix)
- High: Significantly impacts professional appearance (fix within 1 week)
- Medium: Optimization opportunity (fix within 1 month)
- Low: Nice to have (backlog)
## Standard Operating Procedure (SOP)
The skill suite is designed to be run in a specific order. Each skill builds on the work of the previous one. Do not skip skills or run them out of order unless a category already scores 90+ (in which case, skip it).
The canonical workflow:

```
/github audit      Step 0: Diagnose (scores 6 categories, generates SOP)
        |
/github legal      Step 1: Foundation (license, compliance, fork obligations)
        |
/github community  Step 2: Infrastructure (templates, CoC, devcontainer)
        |
/github release    Step 3: Versioning (CHANGELOG, badges, catch-up releases)
        |
/github seo        Step 4: Research (keyword data for description + README)
        |
/github meta       Step 5: Settings (description, topics, features -- uses SEO data)
        |
/github readme     Step 6: Capstone (README optimization -- uses everything above)
        |
/github audit      Step 7: Measure (re-audit to verify improvement)
```
Why this order:
- Legal must come first because every other file references the license
- Community files must exist before releases reference them
- SEO keyword research must happen before meta and readme consume those keywords
- README is last because it references all other files and uses SEO keywords
- The final audit measures the delta and catches anything that slipped through
Each skill ends with a handoff pointing the user to the next step. The user should never finish a skill and wonder "what now?" -- the skill tells them.
Two-phase approach:

```
Phase 1: Per-Repo Optimization (repeat for each repo)
  /github audit -> legal -> community -> release -> seo -> meta -> readme -> audit

Phase 2: Portfolio Optimization (run once, after all repos are done)
  /github empire -> profile README, cross-linking, topic sync, branding, avatar
```
Empire operates at the portfolio level and assumes each repo is already in good shape. Running it before the individual repos are optimized means it will make recommendations based on incomplete data. Always finish Phase 1 on all repos first.
## Sub-Skills

- `github-audit` -- Full repo health audit with 0-100 scoring (Step 0 + Step 7)
- `github-legal` -- License, attribution, SECURITY.md, CITATION.cff (Step 1)
- `github-community` -- Community health files and templates (Step 2)
- `github-release` -- Release strategy, CHANGELOG, badges (Step 3)
- `github-seo` -- Keyword research and content optimization (Step 4)
- `github-meta` -- Description, topics, settings, social preview (Step 5)
- `github-readme` -- README generation and optimization (Step 6)
- `github-empire` -- Portfolio strategy, profile README, org profile (after all repos)
## Subagents

For parallel analysis during audits:

- `github-legal` -- Legal compliance scoring
- `github-community` -- Community health scoring
- `github-release` -- Release and maintenance scoring
- `github-seo` -- SEO and discoverability scoring
- `github-meta` -- Metadata and discovery scoring
- `github-readme` -- README quality scoring