Claude-skill-registry documentation-explanation
Efficient documentation discovery, interpretation, and explanation methodology for technical projects. Use when explaining system architecture, design rationale, implementation patterns, or integrating multiple documentation sources (text docs, code comments, diagrams, test descriptions). Provides strategies for navigating large documentation sets, extracting design intent, cross-referencing multiple sources, and synthesizing comprehensive explanations.
```shell
git clone https://github.com/majiayu000/claude-skill-registry
```

```shell
T=$(mktemp -d) && git clone --depth=1 https://github.com/majiayu000/claude-skill-registry "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/data/documentation-explanation" ~/.claude/skills/majiayu000-claude-skill-registry-documentation-explanation && rm -rf "$T"
```
`skills/data/documentation-explanation/SKILL.md`

Documentation Explanation Methodology
Universal framework for efficiently discovering, interpreting, and explaining technical documentation across text files, code comments, diagrams, and test descriptions.
When to Use This Skill
- Explaining system architecture from multiple documentation sources
- Extracting design rationale and decisions from specifications
- Understanding how components integrate by cross-referencing docs and code
- Navigating large documentation sets to answer specific questions
- Synthesizing information from text docs + code + diagrams + tests
- Identifying documentation gaps or inconsistencies
- Teaching users how a system works based on available documentation
Documentation Discovery Framework
Document Type Taxonomy
Technical projects contain multiple documentation types serving different purposes:
| Type | Purpose | Common Locations | Triggering Questions |
|---|---|---|---|
| Architecture Specs | System design, component hierarchy | `*_spec.md`, `*_architecture.md` | "How does X work?", "What's the overall structure?" |
| Implementation Guides | Coding patterns, best practices | `*_guide.md`, module headers | "How to implement Y?", "What pattern to use?" |
| API References | Interface definitions, contracts | `*_api.md`, interface files, header comments | "What parameters does X take?", "What does Y return?" |
| Test Plans | Verification strategy, coverage | `*_test_plan.md`, test files, assertion comments | "What tests exist?", "Is X verified?" |
| Debugging Guides | Known issues, troubleshooting | `*_debug.md`, `*_issues.md` | "Why is X failing?", "Known problems with Y?" |
| Integration Guides | Cross-module interfaces | Integration docs, glue logic | "How do X and Y connect?", "What's the data flow?" |
| Design Rationale | Why decisions were made | Embedded in specs, commit messages | "Why was X designed this way?", "What tradeoffs?" |
| Reference Docs | Standards, protocols, ISA | `reference/`, external links | "What does instruction X do?", "Protocol requirements?" |
Discovery Strategies
Text Documentation Search
Filename patterns reveal content:
- `*_spec.md`, `*_architecture.md` → High-level design
- `*_guide.md`, `*_implementation.md` → How-to information
- `*_verification.md`, `*_test_plan.md` → Testing strategy
- `*_reference.md`, `*_api.md` → Interface definitions
- `*_troubleshooting.md`, `*_debug.md`, `*_issues.md` → Problem-solving
- `README.md` → Entry point, overview
- `*_status.md`, `CHANGELOG.md` → Implementation progress
Directory structure indicates organization:
- `docs/` → Primary documentation
- `docs/cpu/`, `docs/modules/` → Component-specific docs
- `reference/` → External or upstream documentation
- Root-level `.md` files → Project-wide information
Content signatures identify purpose:
- "Design Intent", "Rationale", "Why this approach" → Design decisions
- "Mandatory", "Must", "Shall" → Requirements
- "Known Issue", "Workaround", "Bug" → Debugging information
- "Example", "Usage", "How to" → Implementation guidance
- Tables with addresses/opcodes → Reference material
- "Status: Complete", "TODO", "Pending" → Implementation progress
- "❌ Wrong / ✅ Correct" → Anti-patterns and best practices
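The content signatures above lend themselves to a simple scan. The following is a rough sketch (the regex patterns and purpose labels are illustrative, taken from the list above):

```python
import re

# Signature phrases → document purpose, mirroring the list above.
SIGNATURES = {
    r"design intent|rationale|why this approach": "Design decisions",
    r"\b(mandatory|must|shall)\b": "Requirements",
    r"known issue|workaround|\bbug\b": "Debugging information",
    r"\b(example|usage|how to)\b": "Implementation guidance",
    r"status: complete|\btodo\b|\bpending\b": "Implementation progress",
}

def detect_purposes(text: str) -> set[str]:
    """Return the purposes whose signature phrases appear in the text."""
    lowered = text.lower()
    return {
        purpose
        for pattern, purpose in SIGNATURES.items()
        if re.search(pattern, lowered)
    }
```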
Code-as-Documentation Search
Module/Class headers contain architectural information:
```systemverilog
// Module: vexriscv_decoder
// Purpose: Instruction decode stage with hazard detection
// Inputs: 32-bit instruction, pipeline control signals
// Outputs: ALU control, register addresses, immediate values
// Design rationale: Single-cycle decode with bypass forwarding
```
Interface definitions document contracts:
```systemverilog
interface axi4_lite_if(input logic clk);
  // Address Write Channel
  logic [31:0] awaddr;   // Write address
  logic        awvalid;  // Write address valid
  logic        awready;  // Write address ready (response from slave)
  // ...
endinterface
```
Function/Task signatures with docstrings:
```python
def analyze_pipeline_stall(trace_data: list, cycle: int) -> StallReason:
    """Determine why pipeline stalled at specific cycle.

    Args:
        trace_data: List of cycle-by-cycle execution trace
        cycle: Cycle number to analyze

    Returns:
        StallReason enum indicating hazard type (DATA/CONTROL/MEMORY)

    Algorithm:
        1. Check for RAW hazards (read-after-write)
        2. Check for control flow changes (branch/jump)
        3. Check for memory access conflicts
    """
```
Inline comments explain non-obvious logic:
```systemverilog
// Multi-cycle shift operation causes 8-cycle stall
// Must hold decode stage until shifter completes
assign decode_stall = shifter_busy && rs2_is_shift_count;
```
Assertion messages document expected behavior:
```systemverilog
property p_axi_awaddr_stable;
  @(posedge clk) (awvalid && !awready) |=> $stable(awaddr);
endproperty
assert property (p_axi_awaddr_stable)
  else $error("AXI Protocol Violation: awaddr changed while awvalid held and awready=0");
```
Test descriptions explain verification intent:
```systemverilog
// Test: Load-Use Hazard with Forwarding
// Scenario: LW x5, 0(x1) followed by ADD x6, x5, x7
// Expected: Pipeline stalls 1 cycle, then forwards MEM→EX
// Verification: Check stall signal asserted for exactly 1 cycle
```
Naming conventions as implicit documentation:
- `fetch_pc_valid` → PC output from fetch stage is valid
- `decode_rs1_hazard` → Register source 1 has data hazard
- `mem_stage_stall_req` → Memory stage requesting pipeline stall
- `csr_mtvec_base` → Machine trap vector base address (CSR)
Progressive Disclosure Approach
Don't read everything at once. Use progressive refinement:
- Overview first: Read high-level architecture docs, README files
- Narrow down: Identify which subsystem/module is relevant
- Read specifics: Load detailed module documentation
- Cross-reference: Validate understanding against code/tests
- Synthesize: Combine multiple sources into coherent explanation
Example flow for "How does hazard detection work?":
- Overview: `docs/architecture.md` → Identifies hazard unit module
- Narrow: `docs/cpu/hazard_unit.md` → Detailed hazard logic
- Code: Read `hazard_unit.sv` module header and key logic
- Tests: Check `tests/*_hazard_test.sv` for test scenarios
- Synthesize: Explain based on spec + implementation + verification
Multi-Source Synthesis Patterns
Combining Documentation Sources
Architecture question: "How does component X work?"
Sources to combine:
- Text spec (design intent, requirements)
- Module header (interface, I/O signals)
- Implementation (actual logic, algorithms)
- Diagrams (visual structure - see references/diagram-interpretation.md)
- Tests (expected behavior, edge cases)
- Known issues (limitations, gotchas)
Synthesis pattern:
```
[Component X] {purpose from spec}

Architecture:
- {Structure from spec + diagrams}
- {Key interfaces from code headers}

Implementation:
- {Algorithm/logic from spec + inline comments}
- {Data flow from code analysis}

Verification:
- {Test scenarios from test descriptions}
- {Edge cases from assertions}

Known limitations:
- {Issues from debugging docs}
```
Cross-Validation Techniques
Does code match specification?
Check for mismatches:
- Spec says "single-cycle operation" but code shows multi-cycle FSM
- Spec defines 5 pipeline stages but code implements 4
- Interface diagram shows signal `ready` but code uses `rdy`
- Test plan says "fully verified" but no tests exist in test directory
Identifying documentation gaps:
- Module implemented but no specification document
- Complex algorithm in code without explanation comments
- Test exists but no test plan documentation
- Interface signals without timing diagrams
Resolving inconsistencies:
- Code is typically source of truth for "what exists"
- Specs reveal "what was intended"
- Tests show "what was verified"
- Comments explain "why it works this way"
If conflict exists:
- Note the discrepancy in explanation
- Verify which source is more recent (git history, timestamps)
- Check if issue is documented in known_issues
- Recommend updating documentation
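When checking which source is more recent, the git history can be queried directly. A minimal sketch, assuming the docs live in a git repository (the helper name and its default argument are illustrative):

```python
import subprocess

def last_commit_date(path: str, repo_dir: str = ".") -> str:
    """Return the ISO committer date of the last commit touching `path`.

    Assumes `repo_dir` is inside a git working tree; raises on git errors.
    """
    result = subprocess.run(
        ["git", "log", "-1", "--format=%cI", "--", path],
        capture_output=True, text=True, check=True, cwd=repo_dir,
    )
    return result.stdout.strip()
```

Comparing the dates of, say, a spec file and the module it describes shows which one was touched last, and therefore which is more likely to reflect current behavior.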
Explanation Templates
"How does X work?" Pattern
Question structure: User wants to understand a component's operation
Response template:
```
[Component X] performs {high-level purpose from spec}.

Architecture: {Structure description from spec/diagrams}
Key components: {From architecture doc or block diagram}
Interface: {Input/output signals from code header or interface definition}

Operation:
{Algorithm/behavior from spec + code + inline comments}
{Timing/sequence from timing diagrams or sequence diagrams}

Example: {Concrete example from test descriptions or code comments}

Related components: {Cross-references to interacting modules}
```
"Why was X designed this way?" Pattern
Question structure: User wants design rationale
Response template:
```
[Design decision X] was chosen because {extract from design rationale sections}.

Alternatives considered: {From "Design Alternatives" sections or commit messages}

Tradeoffs:
- Advantages: {Performance, simplicity, compatibility, etc.}
- Disadvantages: {Limitations, complexity, resource cost}

Constraints: {Requirements that drove the decision - from specs}

Implementation status: {Current state from status docs - complete/partial/planned}
```
"How to integrate X with Y?" Pattern
Question structure: User wants to connect two components
Response template:
```
[Component X] connects to [Component Y] through {interface description}.

Interface signals: {Signal list from interface definitions or glue logic}
Data flow: {Sequence from X to Y, from sequence diagrams or code}
Configuration required: {Setup steps from integration guides or test setup code}

Example integration: {From existing integration code or test harnesses}

Common issues: {From debugging docs or known issues}
```
"What tests exist for X?" Pattern
Question structure: User wants verification status
Response template:
```
[Component X] verification coverage:

Test inventory: {List from test plan docs and actual test files}
Scenarios covered: {Test descriptions from test plan and test file headers}
Coverage gaps: {Unimplemented tests from test plan, or features without tests}
Assertions: {Assertion descriptions from assertion modules}
Known issues: {Bugs/limitations from debugging docs}
Test execution: {How to run tests - from test infrastructure docs}
```
"Known issues with X?" Pattern
Question structure: User debugging or planning work
Response template:
```
[Component X] known issues:

Issue inventory: {List from known_issues.md, bug_fixes_*.md, or issue tracking}
Symptoms: {Observable behavior from debugging docs}
Root causes: {Analysis from debugging docs or bug fix descriptions}
Workarounds: {Temporary solutions from debugging guides}
Status: {Fixed/open/wontfix from status docs or changelogs}
Related tests: {Tests that catch or demonstrate the issue}
```
Documentation Quality Assessment
When explaining from documentation, note quality indicators:
High-quality documentation signs:
- ✅ Design rationale explained
- ✅ Diagrams match code structure
- ✅ Interface contracts clearly defined
- ✅ Code has explanatory comments
- ✅ Tests document expected behavior
- ✅ Known limitations documented
- ✅ Examples provided
Documentation gaps to flag:
- ❌ Complex logic without comments
- ❌ Inconsistency between spec and code
- ❌ Modules without specifications
- ❌ Interfaces without timing diagrams
- ❌ Features without test coverage
- ❌ Bugs without troubleshooting guides
- ❌ No design rationale for unusual choices
Best Practices
Efficient Navigation
- Start broad, narrow down: Overview → subsystem → component → details
- Use file structure: Directory hierarchy reveals organization
- Follow cross-references: Docs link related information
- Check timestamps: Recent docs more likely accurate
- Validate with code: Code is source of truth for implementation
- Look for patterns: Similar modules have similar documentation
Interpretation Strategies
- Read abstracts first: Frontmatter, summaries, introductions
- Skim for structure: Headings, bullet lists, diagrams
- Focus on relevant sections: Don't read everything
- Extract key concepts: Design decisions, interfaces, algorithms
- Note assumptions: Requirements, constraints, dependencies
- Identify status: Complete, partial, planned, deprecated
Explanation Construction
- Match user's level: Adjust detail to question specificity
- Structure clearly: Headings, lists, logical flow
- Concrete examples: From tests, code, or diagrams
- Cite sources: Reference specific docs/files/lines
- Note gaps: Be honest about missing information
- Cross-reference: Link related components/concepts
- Visual aids: Reference diagrams (see references/diagram-interpretation.md)
Bilingual Content Handling
When documentation uses multiple languages:
- Identify primary language: Usually code/comments are English, design rationale may be native language
- Preserve technical terms: Keep signal names, module names as-is
- Translate for clarity: Convert design rationale to user's preferred language
- Note language mixing: Flag when critical info is in non-English sections
- Respect author intent: Some terms better left in original language
Example (Japanese design doc with English code):
```
Design Intent (設計意図): Pipeline hazard detection must complete in a single
cycle to avoid degrading clock frequency.

Implementation:
logic decode_hazard = (ex_rd == decode_rs1) && ex_regwrite_valid;
```
Output explanation: "The hazard unit performs single-cycle detection to maintain clock frequency. It checks if the decode stage's source register matches the execute stage's destination register while that destination is being written."
Auto-Generated Content Recognition
Some documentation is machine-generated:
Indicators:
- "AUTO-GENERATED - DO NOT EDIT" headers
- Consistent formatting (tables, lists)
- Timestamps in headers
- Scripts referenced (e.g., `generate_docs.py`)
Handling:
- Treat as reference (opcodes, register maps, API lists)
- Don't suggest edits to these files
- Point users to source data (JSON, schemas) for changes
- Note that manual docs may override auto-generated content
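A quick heuristic check for the indicators above might look like this (a sketch; the marker phrases and 500-character header window are assumptions, not a fixed convention):

```python
def is_auto_generated(text: str) -> bool:
    """Heuristic: look for assumed auto-generation markers in the file header."""
    head = text[:500].upper()  # markers usually appear near the top
    return "AUTO-GENERATED" in head or "DO NOT EDIT" in head
```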
Advanced Techniques
Documentation Mining
For large documentation sets, mine for specific information:
Keyword search patterns:
- Design decisions: `(rationale|why|because|chose|decision|tradeoff)`
- Requirements: `(must|shall|mandatory|required|critical)`
- Issues: `(bug|issue|problem|workaround|limitation)`
- Status: `(complete|implemented|pending|todo|wip)`
- Examples: `(example|usage|scenario|sample)`
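These keyword patterns can be applied line by line. A minimal sketch using the exact regexes above (the function name and category keys are illustrative):

```python
import re

# Mining categories → regexes, taken directly from the list above.
MINING_PATTERNS = {
    "design_decisions": r"\b(rationale|why|because|chose|decision|tradeoff)\b",
    "requirements": r"\b(must|shall|mandatory|required|critical)\b",
    "issues": r"\b(bug|issue|problem|workaround|limitation)\b",
    "status": r"\b(complete|implemented|pending|todo|wip)\b",
    "examples": r"\b(example|usage|scenario|sample)\b",
}

def mine_lines(doc_text: str) -> dict[str, list[str]]:
    """Group lines of a document by the mining category they match."""
    hits: dict[str, list[str]] = {k: [] for k in MINING_PATTERNS}
    for line in doc_text.splitlines():
        for category, pattern in MINING_PATTERNS.items():
            if re.search(pattern, line, re.IGNORECASE):
                hits[category].append(line.strip())
    return hits
```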
Cross-referencing:
- Find all mentions of component X across docs
- Build connection map (which components interact)
- Identify documentation clusters (related topics)
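The connection map can be built by scanning a docs directory for component names. A rough sketch, assuming markdown docs and a caller-supplied component list (directory layout and function name are assumptions):

```python
import re
from collections import defaultdict
from pathlib import Path

def build_mention_map(doc_dir: str, components: list[str]) -> dict[str, list[str]]:
    """Map each component name to the doc files that mention it."""
    mentions: dict[str, list[str]] = defaultdict(list)
    for path in Path(doc_dir).rglob("*.md"):
        text = path.read_text(encoding="utf-8", errors="ignore")
        for comp in components:
            # Whole-word match so "alu" doesn't hit "evaluation".
            if re.search(rf"\b{re.escape(comp)}\b", text):
                mentions[comp].append(str(path))
    return dict(mentions)
```

Components that share many docs likely interact; components mentioned nowhere are documentation gaps worth flagging.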
Version/Status Indicators
Track implementation progress from docs:
- ✅ / `[Done]` / `Status: Complete` → Implemented and verified
- ⏳ / `[WIP]` / `Status: In Progress` → Partial implementation
- ❌ / `[TODO]` / `Status: Planned` → Not yet implemented
- ⚠️ / `[Known Issue]` / `Status: Buggy` → Implemented but problematic
- 🔧 / `[Refactoring]` / `Status: Redesign` → Being reworked
Diagram Integration
Diagrams provide visual documentation - see detailed interpretation guide: references/diagram-interpretation.md
Quick diagram types:
- Block diagrams → Component structure and connectivity
- Sequence diagrams → Temporal behavior and message flow
- State machines → Control flow and state transitions
- Timing diagrams → Cycle-accurate signal behavior
- Flowcharts → Algorithmic logic and decision trees
Common Pitfalls
❌ Reading everything before answering: Too slow, wastes tokens
✅ Progressive disclosure: Start broad, narrow down as needed

❌ Treating specs as absolute truth: May be outdated or incorrect
✅ Cross-validate: Check spec against code and tests

❌ Ignoring code comments: Often contain critical context
✅ Mine code-as-documentation: Headers, comments, assertions, tests

❌ Missing diagram information: Diagrams often clearest explanation
✅ Interpret visuals: Extract structure and flow from diagrams

❌ Not noting documentation gaps: User needs complete picture
✅ Flag inconsistencies: Note missing docs, conflicts, uncertainties

❌ Over-explaining obvious concepts: Wastes time and tokens
✅ Match detail to question: Answer what was asked

❌ Single-source answers: Incomplete understanding
✅ Synthesize sources: Text + code + diagrams + tests = complete picture
Summary
Effective documentation explanation requires:
- Smart discovery: Use taxonomy and search patterns to find relevant docs quickly
- Multi-source synthesis: Combine text, code, diagrams, and tests
- Progressive disclosure: Start broad, narrow down as needed
- Cross-validation: Check consistency across sources
- Clear explanation: Structure responses with templates
- Quality assessment: Note gaps and inconsistencies
- Efficient navigation: Don't read everything, focus on relevance
For diagram interpretation strategies, see references/diagram-interpretation.md.