Claude-skill-registry distill

Distill content with 5-level granularity, showing quality loss estimation before execution

install

  • source · Clone the upstream repo:
    git clone https://github.com/majiayu000/claude-skill-registry
  • Claude Code · Install into ~/.claude/skills/:
    T=$(mktemp -d) && git clone --depth=1 https://github.com/majiayu000/claude-skill-registry "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/data/distill" ~/.claude/skills/majiayu000-claude-skill-registry-distill && rm -rf "$T"
  • manifest: skills/data/distill/SKILL.md
source content

/distill - Content Distillation

Reduce content size while preserving critical information. Shows quality loss estimation before execution.

Out of scope: Conversation context (use built-in /compact)

Granularity Levels

| Level | Name      | Target Reduction | Preserves                             | May Remove                                             |
|-------|-----------|------------------|---------------------------------------|--------------------------------------------------------|
| 1     | essence   | 85-95%           | Identity only                         | Everything except core purpose                         |
| 2     | summary   | 70-80%           | + Behavior                            | Reasoning, examples, context                           |
| 3     | condensed | 45-60%           | + Reasoning                           | Verbose examples, redundant explanations               |
| 4     | detailed  | 25-40%           | + Context                             | Redundant phrasing, verbose wording                    |
| 5     | minimal   | 10-20%           | All facts, rules, examples, structure | Filler words, redundant phrasing, excessive formatting |
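As a rough sketch, the per-level token estimates shown later in the workflow can be derived from these target ranges. The function below is illustrative only; the level names and percentages come from the table above, but the structure and naming are assumptions, not the skill's actual implementation.

```python
# Illustrative sketch: map each granularity level to its target reduction
# range and estimate the token count remaining after distillation.
LEVELS = {
    1: ("essence",   (0.85, 0.95)),
    2: ("summary",   (0.70, 0.80)),
    3: ("condensed", (0.45, 0.60)),
    4: ("detailed",  (0.25, 0.40)),
    5: ("minimal",   (0.10, 0.20)),
}

def estimate_after(tokens: int, level: int) -> tuple[int, int]:
    """Return (min_tokens, max_tokens) expected to remain after distillation."""
    _, (lo, hi) = LEVELS[level]
    # The larger reduction (hi) leaves the fewer tokens, so it bounds the floor.
    return round(tokens * (1 - hi)), round(tokens * (1 - lo))

# e.g. estimate_after(4364, 1) -> (218, 655)
```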

Criticality Heuristics

Priority order for preservation (highest → lowest):

| Content Type | Criticality Order                                            |
|--------------|--------------------------------------------------------------|
| policy       | Rules > Priority > Rationale > Examples > Detection          |
| code         | Signatures > Logic > Types > Comments > Formatting           |
| memory       | Facts > Decisions > Reasoning > Timestamps > Verbose         |
| artifacts    | Requirements > Criteria > Rationale > Background > Examples  |
| default      | Critical > Important > Helpful > Context > Lossy             |
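One way to picture this ordering is as a lookup table: given a content type and a category of material, return its preservation rank. This is a hypothetical sketch; the skill's actual data structures are not specified, so the names here are assumptions.

```python
# Illustrative encoding of the criticality ordering above.
# Lower rank = preserve first; rank 0 is the last thing to remove.
CRITICALITY = {
    "policy":    ["Rules", "Priority", "Rationale", "Examples", "Detection"],
    "code":      ["Signatures", "Logic", "Types", "Comments", "Formatting"],
    "memory":    ["Facts", "Decisions", "Reasoning", "Timestamps", "Verbose"],
    "artifacts": ["Requirements", "Criteria", "Rationale", "Background", "Examples"],
    "default":   ["Critical", "Important", "Helpful", "Context", "Lossy"],
}

def preservation_rank(content_type: str, category: str) -> int:
    """Unknown content types fall back to the default ordering."""
    order = CRITICALITY.get(content_type, CRITICALITY["default"])
    return order.index(category)
```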

Workflow

1. Run Estimation Script

.claude/skills/distill/scripts/estimate_distill.py "$FILE" [content_type]

2. Present Results & Confirm

/distill global/policy/RULES.md

Tokens: ~4364

| Level        | After | Reduction | Critical Loss |
|--------------|-------|-----------|---------------|
| 1. essence   | ~654  | 85%       | ~35%          |
| 2. summary   | ~1527 | 65%       | ~15%          |
| 3. condensed | ~2400 | 45%       | ~5%           |
| 4. detailed  | ~3273 | 25%       | ~2%           |
| 5. minimal   | ~3709 | 15%       | ~0%           |

Select level [1-5/cancel]:

3. Execute Distillation

Remove ONLY what the "May Remove" column permits for the selected level.

Self-check: If removing content not in "May Remove" column → STOP, wrong level.

4. Verify & Report

After writing, measure actual tokens:

.claude/skills/distill/scripts/estimate_distill.py "$FILE" [content_type]

Report (compare against initial estimation, not generic range):

Before: 4434 tokens → Estimated: 2858 tokens (35% reduction)
Actual: 2621 tokens (41%) | Variance: +6%

If variance >10%, warn and offer git restore.
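The variance check above can be sketched in a few lines, assuming "variance" means the difference in percentage points between estimated and actual reduction (consistent with the example report). The function name and 10% threshold framing are assumptions drawn from the text, not the skill's actual code.

```python
# Hypothetical sketch of the step-4 variance check.
def check_variance(before: int, estimated: int, actual: int) -> float:
    """Return actual-minus-estimated reduction, in percentage points."""
    est_reduction = (before - estimated) / before * 100
    act_reduction = (before - actual) / before * 100
    variance = act_reduction - est_reduction
    if abs(variance) > 10:
        # Over-removal or under-removal: warn and offer to restore the file.
        print("Warning: variance exceeds 10% - consider git restore.")
    return variance

# With the example report above (before=4434, estimated=2858, actual=2621),
# this yields roughly +5.3 percentage points.
```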

5. Output Location

  • Git-versioned files: Replace original (git tracks history)
  • Serena memories: Archive old as <name>_archived_<timestamp>, write new

Anti-Patterns

  • Distilling without estimation first
  • Removing content not permitted by level's "May Remove"
  • Not verifying actual vs target reduction
  • Skipping user confirmation