Skilllibrary skill-provenance
Document the origin, authorship, evidence basis, and encoded assumptions of a skill to establish trust and traceability. Use this when publishing a skill to a shared registry, importing from an external source, auditing library trustworthiness, or when a skill encodes non-obvious assumptions that need documentation. Do not use for evaluating skill quality (use skill-evaluation), tracking lifecycle states (use skill-lifecycle-management), or when git history alone provides sufficient provenance.
git clone https://github.com/merceralex397-collab/skilllibrary
T=$(mktemp -d) && git clone --depth=1 https://github.com/merceralex397-collab/skilllibrary "$T" && mkdir -p ~/.claude/skills && cp -r "$T/03-meta-skill-engineering/skill-provenance" ~/.claude/skills/merceralex397-collab-skilllibrary-skill-provenance && rm -rf "$T"
03-meta-skill-engineering/skill-provenance/SKILL.md

Purpose
Documents the origin, authorship, evidence basis, and assumptions of a skill. Provenance prevents skills from becoming mysterious prompt blobs: it answers "where did this come from, why was it written this way, and can we trust it?"
When to use this skill
Use when:
- User says "where did this come from?", "document origin", "add provenance"
- Creating official skill for sharing/publishing
- Auditing library for trust and traceability
- Skill imported from external source needs attribution
- Skill encodes non-obvious assumptions needing documentation
Do NOT use when:
- Skill is trivial and provenance would be overkill
- You just need git history (built in)
- You want skill evaluation (use skill-evaluation)
Operating procedure
- Document origin:
  - Source URL if adapted
  - "created" if written from scratch
  - If derived: % original vs adapted
- Record authorship:
  - Original author(s)
  - Adapter(s) if modified
  - Reviewer(s) if reviewed
  - Dates
- Document evidence basis:
  - What documentation informed the procedure?
  - What best practices were referenced?
  - What expert knowledge is encoded?
  - What failures were learned from?
- Catalog assumptions:
  - What must be true for the skill to work?
  - What environment/context is assumed?
  - What tool versions are targeted?
  - What conventions are expected?
- Assess trust level:
  - High: official, reviewed, tested, maintained
  - Medium: known source, some testing
  - Low: unknown source, unreviewed
  - Untrusted: external, not reviewed
- Record change history:
  - Major revisions with rationale
  - Breaking changes
  - Deprecation of approaches
- Update frontmatter
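The collection steps above can be sketched as a small scaffold script. This is an illustrative sketch, not part of the skill format: the function name, parameters, and section layout are assumptions modeled on the PROVENANCE.md template in this skill.

```python
from datetime import date

def scaffold_provenance(origin, author, assumptions, trust, rationale):
    """Build a PROVENANCE.md scaffold from collected fields (illustrative sketch)."""
    lines = [
        "# Provenance",
        "",
        "## Origin",
        f"- **Source**: {origin}",
        f"- **Author**: {author}",
        "",
        "## Assumptions",
    ]
    # One bullet per cataloged assumption
    lines += [f"- {a}" for a in assumptions]
    lines += [
        "",
        f"## Trust Level: {trust}",
        f"Rationale: {rationale}",
        "",
        "## Change History",
        "| Date | Change | Author | Why |",
        "|------|--------|--------|-----|",
        f"| {date.today().isoformat()} | Initial provenance record | {author} | New skill |",
    ]
    return "\n".join(lines) + "\n"

doc = scaffold_provenance(
    origin="created",
    author="example-author",
    assumptions=["Unix environment", "Node.js 18+"],
    trust="Medium",
    rationale="Known author, limited testing",
)
print(doc)
```

Writing the scaffold first and filling in gaps afterward makes missing fields (unknown author, undocumented assumptions) visible rather than silently omitted.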
Output defaults
In SKILL.md frontmatter:
metadata:
  provenance:
    origin: "https://..."
    adaptation: 30%
    trust: high
PROVENANCE.md:
# Provenance

## Origin
- **Source**: [URL or "created"]
- **Author**: [name]
- **Adapted by**: [name] on [date]

## Evidence Basis
- [URL] — informed steps 1-3
- [guide] — informed format

## Assumptions
- Unix environment
- Node.js 18+
- npm as package manager

## Trust Level: [High]
Rationale: [why]

## Change History
| Date | Change | Author | Why |
|------|--------|--------|-----|
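When auditing a library, the frontmatter defaults above can be checked mechanically. A minimal sketch, assuming the frontmatter has already been parsed into a dict; the function name and the exact required keys are illustrative assumptions based on the `origin` and `trust` fields shown above.

```python
REQUIRED = ("origin", "trust")
TRUST_LEVELS = {"high", "medium", "low", "untrusted"}

def check_provenance(metadata):
    """Return a list of problems found in a parsed frontmatter metadata dict."""
    problems = []
    prov = (metadata or {}).get("provenance", {})
    # Required provenance keys must be present
    for key in REQUIRED:
        if key not in prov:
            problems.append(f"missing provenance.{key}")
    # Trust must be one of the levels defined by this skill
    trust = str(prov.get("trust", "")).lower()
    if trust and trust not in TRUST_LEVELS:
        problems.append(f"unknown trust level: {trust}")
    return problems

print(check_provenance({"provenance": {"origin": "created", "trust": "high"}}))  # []
print(check_provenance({"provenance": {"trust": "solid"}}))
```

Running such a check across every SKILL.md in a registry surfaces skills whose provenance was never recorded, which is exactly the "mysterious prompt blob" case this skill targets.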
References
- https://docs.github.com/en/copilot/concepts/agents/about-agent-skills — Agent skill metadata structure
- https://developers.openai.com/codex/skills — Codex skill format
- Git history for change tracking
- Original source documentation when skill is adapted
Failure handling
- Source unknown: Mark "unknown", trust Low, recommend review
- Assumptions undocumentable: Flag risk
- Multiple sources: Document all, note precedence
- Author unreachable: Document what's known, mark gaps