skilllibrary · repo-evidence-gathering
Install
Source · Clone the upstream repo:
git clone https://github.com/merceralex397-collab/skilllibrary
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/merceralex397-collab/skilllibrary "$T" && mkdir -p ~/.claude/skills && cp -r "$T/06-agent-role-candidates/repo-evidence-gathering" ~/.claude/skills/merceralex397-collab-skilllibrary-repo-evidence-gathering && rm -rf "$T"
manifest: 06-agent-role-candidates/repo-evidence-gathering/SKILL.md
Purpose
Gather structured, factual evidence from a repository — file inventory, dependency graphs, test coverage signals, coding conventions, and tech-stack identifiers — so that downstream agents (planners, reviewers, architects) can make decisions grounded in reality rather than assumptions.
When to use
- A planning or review agent needs verified facts about the repo before acting.
- You need a tech-stack summary, dependency tree, or convention report for a new codebase.
- A ticket or architecture decision requires evidence about test coverage or code patterns.
- Onboarding a new contributor who needs a concise codebase orientation.
Do NOT use when
- The task requires modifying code — hand off to an implementer skill instead.
- Runtime performance profiling or load testing is needed (use a benchmarking skill).
- The question is about external services, not the repository itself.
- A security-specific audit is required — use `security-review` instead.
Operating procedure
- Run `find . -maxdepth 2 -type f | head -80` and `ls -la` at repo root to build an initial file-tree inventory.
- Identify the package manifest(s) — search for `package.json`, `Cargo.toml`, `pyproject.toml`, `go.mod`, `pom.xml`, `Gemfile` using `find . -maxdepth 3 -name '<manifest>'`.
- Extract direct dependencies from each manifest and list them in a table: `| Dependency | Version | Purpose (inferred) |`.
- Run `grep -r 'import\|require\|from ' --include='*.ts' --include='*.py' --include='*.go' -l | head -30` to map internal module relationships.
- Locate test directories by running `find . -type d \( -name '__tests__' -o -name 'test' -o -name 'tests' -o -name 'spec' \)` and count test files per directory. (The `\( … \)` grouping is required so `-type d` applies to every `-name` clause, not just the first.)
- Detect coding conventions: run `head -40` on 3–5 representative source files and note indent style, naming conventions (camelCase vs snake_case), import order, and comment patterns.
- Identify CI/CD configuration by checking for `.github/workflows/`, `Jenkinsfile`, `.gitlab-ci.yml`, `Makefile`, or `Taskfile.yml`.
- Check for documentation artifacts: `README.md`, `AGENTS.md`, `CONTRIBUTING.md`, and the `docs/` directory.
- Summarise all findings into the output format below, citing file paths and line numbers for every claim.
- Flag any gaps — missing tests, undocumented modules, stale dependencies — as open questions for downstream agents.
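As a minimal, self-contained sketch of the inventory steps above, the following builds a throwaway sample repo in a temp directory and runs the commands against it (the sample file names and temp-dir layout are illustrative, not part of the skill):

```shell
# Build a tiny illustrative repo so the commands have something to scan.
REPO="$(mktemp -d)"
mkdir -p "$REPO/src" "$REPO/tests" "$REPO/.github/workflows"
printf '{ "name": "demo" }\n' > "$REPO/package.json"
printf 'import { x } from "./x"\n' > "$REPO/src/index.ts"
touch "$REPO/tests/index.test.ts" "$REPO/.github/workflows/ci.yml"

# Step 1: shallow file inventory.
find "$REPO" -maxdepth 2 -type f | head -80

# Step 2: locate package manifests (one -name clause per known manifest).
find "$REPO" -maxdepth 3 \( -name package.json -o -name Cargo.toml \
  -o -name pyproject.toml -o -name go.mod -o -name pom.xml -o -name Gemfile \)

# Step 4: files containing import/require statements.
grep -rE 'import|require|from ' --include='*.ts' -l "$REPO" | head -30

# Step 5: test directories; \( \) makes -type d apply to every -name clause.
find "$REPO" -type d \( -name __tests__ -o -name test -o -name tests -o -name spec \)

rm -rf "$REPO"
```

In a real run the commands execute at the target repo root instead of a synthetic directory, but the flags are the same.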
Decision rules
- Every claim must cite at least one file path. No unsupported assertions.
- If a file is too large to read fully, sample the first 50 and last 20 lines rather than skipping it.
- Prefer breadth over depth: cover all top-level directories before diving deep into any one.
- When two conventions conflict (e.g., mixed indent styles), report both with file-path examples.
- Do not infer intent — report what the code does, not what it was probably meant to do.
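The large-file sampling rule above can be sketched as a small helper (`sample_file` is a hypothetical name, not defined by the skill):

```shell
# Sample a file per the decision rule: first 50 lines plus last 20 lines,
# with markers so the elided middle is explicit in the evidence report.
sample_file() {
  printf '== %s: first 50 lines ==\n' "$1"
  head -50 "$1"
  printf '== %s: last 20 lines ==\n' "$1"
  tail -20 "$1"
}
```

For files shorter than 70 lines the two windows overlap, which is harmless for evidence-gathering purposes.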
Output requirements
- Tech Stack Summary — languages, frameworks, runtimes with version evidence.
- Dependency Inventory — table of direct dependencies per manifest.
- File Structure Map — top-level directory tree with purpose annotations.
- Convention Report — indent, naming, import, and comment patterns observed.
- Test Coverage Signal — test directory locations, file counts, and runner configuration.
- CI/CD Overview — pipelines detected, triggers, and key steps.
- Gap List — missing docs, untested modules, stale configs.
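As one illustration of the Dependency Inventory format, a sketch that rewrites `package.json` dependencies into table rows (assumes `python3` is available for JSON parsing; the `deps_table` name is hypothetical, and the Purpose column is left for the agent to fill in):

```shell
# Emit "| Dependency | Version | Purpose (inferred) |" rows from package.json.
deps_table() {
  echo '| Dependency | Version | Purpose (inferred) |'
  echo '| --- | --- | --- |'
  python3 - "$1" <<'EOF'
import json, sys

with open(sys.argv[1]) as f:
    manifest = json.load(f)
for name, version in sorted(manifest.get("dependencies", {}).items()):
    print(f"| {name} | {version} |  |")
EOF
}
```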
References
- Project manifest files (`package.json`, `Cargo.toml`, `pyproject.toml`, etc.)
- CI/CD workflow files (`.github/workflows/*.yml`)
- `AGENTS.md` or `CONTRIBUTING.md` if present
Related skills
- `shell-inspection` — for runtime environment checks complementing repo evidence
- `security-review` — consumes evidence gathered here for vulnerability analysis
- `ticket-creator` — turns gap-list findings into actionable tickets
- `context-summarization` — condenses evidence reports for downstream agents
Failure handling
- If the repository is empty or has fewer than 3 files, report "insufficient codebase" and stop.
- If a manifest file is unparseable, quote the first 10 lines and flag it as corrupt.
- If file-system access is restricted, list which paths failed and continue with accessible ones.
- If conventions are contradictory across the repo, present both variants with counts rather than picking one.
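The first failure rule can be sketched as a guard function (the `check_codebase` name and the `.git` exclusion are assumptions, not part of the skill):

```shell
# Bail out early when the repository has too few files to analyse.
check_codebase() {
  n="$(find "${1:-.}" -type f -not -path '*/.git/*' | wc -l | tr -d ' ')"
  if [ "$n" -lt 3 ]; then
    echo "insufficient codebase"
    return 1
  fi
  echo "ok: $n files"
}
```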