Desktop github-deep-research

Conduct multi-round deep research on any GitHub repository. Use when users request comprehensive analysis, timeline reconstruction, competitive analysis, or in-depth investigation of a GitHub project. Produces structured markdown reports with executive summaries, chronological timelines, metrics analysis, and Mermaid diagrams. Triggers on GitHub repository URLs or open source project names.

install
source · Clone the upstream repo
git clone https://github.com/openyak/openyak
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/openyak/openyak "$T" && mkdir -p ~/.claude/skills && cp -r "$T/backend/app/data/skills/github-deep-research" ~/.claude/skills/openyak-desktop-github-deep-research && rm -rf "$T"
manifest: backend/app/data/skills/github-deep-research/SKILL.md
source content

GitHub Deep Research Skill

Multi-round research combining the GitHub API, web_search, and web_fetch to produce comprehensive markdown reports.

Research Workflow

  • Round 1: GitHub API
  • Round 2: Discovery
  • Round 3: Deep Investigation
  • Round 4: Deep Dive

Core Methodology

Query Strategy

Broad to Narrow: Start with the GitHub API, then run general queries, and refine based on findings.

Round 1: GitHub API
Round 2: "{topic} overview"
Round 3: "{topic} architecture", "{topic} vs alternatives"
Round 4: "{topic} issues", "{topic} roadmap", "site:github.com {topic}"
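The round templates above can be generated mechanically from a topic. A minimal sketch; the function name `build_queries` is illustrative, not part of the skill:

```python
def build_queries(topic: str) -> dict[str, list[str]]:
    """Return web_search query strings for rounds 2-4 (round 1 uses the API)."""
    return {
        "round2": [f"{topic} overview"],
        "round3": [f"{topic} architecture", f"{topic} vs alternatives"],
        "round4": [f"{topic} issues", f"{topic} roadmap", f"site:github.com {topic}"],
    }
```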

Source Prioritization:

  1. Official docs/repos (highest weight)
  2. Technical blogs (Medium, Dev.to)
  3. News articles (verified outlets)
  4. Community discussions (Reddit, HN)
  5. Social media (lowest weight, for sentiment)

Research Rounds

Round 1 - GitHub API

Execute the bundled `scripts/github_api.py` using the `bash` tool:

python scripts/github_api.py <owner> <repo> summary
python scripts/github_api.py <owner> <repo> readme
python scripts/github_api.py <owner> <repo> tree

The script path is relative to this skill's base directory (shown in the skill output).

Available commands (the last argument to `github_api.py`):

  • summary — comprehensive overview (stars, forks, languages, latest release)
  • info — basic repository metadata
  • readme — repository README content
  • tree — directory structure (depth 3)
  • languages — language breakdown by bytes
  • contributors — top contributors
  • commits — recent commit history
  • issues — open/closed issues
  • prs — pull requests
  • releases — release history

Environment: Set `GITHUB_TOKEN` for higher API rate limits (optional but recommended).
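As an assumption about what such a script does under the hood, here is a minimal sketch of building an (optionally token-authenticated) GitHub REST request. `repo_request` is a hypothetical name; the real `github_api.py` may differ, but the endpoint and header names follow the public GitHub REST API:

```python
import os
import urllib.request

def repo_request(owner: str, repo: str) -> urllib.request.Request:
    """Build a request for repository metadata, authenticated if possible."""
    req = urllib.request.Request(f"https://api.github.com/repos/{owner}/{repo}")
    req.add_header("Accept", "application/vnd.github+json")
    token = os.environ.get("GITHUB_TOKEN")
    if token:
        # Raises the rate limit from 60 to 5,000 requests/hour.
        req.add_header("Authorization", f"Bearer {token}")
    return req
```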

Round 2 - Discovery (3-5 web_search)

  • Get overview and identify key terms
  • Find official website/repo
  • Identify main players/competitors

Round 3 - Deep Investigation (5-10 web_search + web_fetch)

  • Technical architecture details
  • Timeline of key events
  • Community sentiment
  • Use web_fetch on valuable URLs for full content

Round 4 - Deep Dive

  • Analyze commit history for timeline
  • Review issues/PRs for feature evolution
  • Check contributor activity

Report Structure

Follow the template in `assets/report_template.md`:

  1. Metadata Block - Date, confidence level, subject
  2. Executive Summary - 2-3 sentence overview with key metrics
  3. Chronological Timeline - Phased breakdown with dates
  4. Key Analysis Sections - Topic-specific deep dives
  5. Metrics & Comparisons - Tables, growth charts
  6. Strengths & Weaknesses - Balanced assessment
  7. Sources - Categorized references
  8. Confidence Assessment - Claims by confidence level
  9. Methodology - Research approach used

Mermaid Diagrams

Include diagrams where helpful:

Timeline (Gantt):

gantt
    title Project Timeline
    dateFormat YYYY-MM-DD
    section Phase 1
    Development    :2025-01-01, 2025-03-01
    section Phase 2
    Launch         :2025-03-01, 2025-04-01

Architecture (Flowchart):

flowchart TD
    A[User] --> B[Coordinator]
    B --> C[Planner]
    C --> D[Research Team]
    D --> E[Reporter]

Comparison (Pie/Bar):

pie title Market Share
    "Project A" : 45
    "Project B" : 30
    "Others" : 25
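Diagrams like the pie chart above can be emitted directly from collected metrics. A small illustrative helper, not part of the skill:

```python
def mermaid_pie(title: str, shares: dict[str, int]) -> str:
    """Render a Mermaid pie-chart body from a {label: value} mapping."""
    lines = [f"pie title {title}"]
    lines += [f'    "{label}" : {value}' for label, value in shares.items()]
    return "\n".join(lines)
```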

Confidence Scoring

Assign confidence based on source quality:

| Confidence | Criteria |
| --- | --- |
| High (90%+) | Official docs, GitHub data, multiple corroborating sources |
| Medium (70-89%) | Single reliable source, recent articles |
| Low (50-69%) | Social media, unverified claims, outdated info |
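The bands above can also be applied programmatically. A toy sketch; the skill defines the bands in prose, so this helper and its name are assumptions, with the thresholds taken from the table:

```python
def confidence_band(score: int) -> str:
    """Map a 0-100 confidence score to the skill's reporting band."""
    if score >= 90:
        return "High"
    if score >= 70:
        return "Medium"
    if score >= 50:
        return "Low"
    return "Unrated"  # below 50%: do not report as a finding
```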

Output

Save the report as:

`research_{topic}_{YYYYMMDD}.md`
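A sketch of building that filename. The slug rule (lowercase, spaces to hyphens) is an assumption; only the `research_{topic}_{YYYYMMDD}.md` pattern comes from the skill:

```python
from datetime import date

def report_filename(topic: str) -> str:
    """Build the research_{topic}_{YYYYMMDD}.md output name for today."""
    slug = topic.lower().replace(" ", "-")  # assumed slug convention
    return f"research_{slug}_{date.today():%Y%m%d}.md"
```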

Formatting Rules

  • Chinese content: Use full-width punctuation
  • Technical terms: Provide Wiki/doc URL on first mention
  • Tables: Use for metrics, comparisons
  • Code blocks: For technical examples
  • Mermaid: For architecture, timelines, flows

Best Practices

  1. Start with official sources - Repo, docs, company blog
  2. Verify dates from commits/PRs - More reliable than articles
  3. Triangulate claims - 2+ independent sources
  4. Note conflicting info - Don't hide contradictions
  5. Distinguish fact vs opinion - Label speculation clearly
  6. Always include inline citations - Use the `[citation:Title](URL)` format immediately after each claim from external sources
  7. Extract URLs from search results - web_search returns {title, url, snippet} - always use the URL field
  8. Update as you go - Don't wait until end to synthesize
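Combining practices 6 and 7, a sketch of turning a web_search result into an inline citation; the helper name is illustrative:

```python
def cite(result: dict) -> str:
    """Format a web_search result {title, url, snippet} as an inline citation."""
    return f"[citation:{result['title']}]({result['url']})"
```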

Citation Examples

Good - With inline citations:

The project gained 10,000 stars within 3 months of launch [citation:GitHub Stats](https://github.com/owner/repo).
The architecture uses a multi-agent workflow [citation:Official Docs](https://docs.example.com).

Bad - Without citations:

The project gained 10,000 stars within 3 months of launch.
The architecture uses a multi-agent workflow.