# EasyPlatform prompt-expand
[Skill Management] Expand caveman-compressed text back into fluent English, then apply AI attention anchoring (top/bottom summaries, inline READ summaries, progressive disclosure). Use when reconstructing compressed prompts/docs/skills into readable, well-structured form.
git clone https://github.com/duc01226/EasyPlatform
T=$(mktemp -d) && git clone --depth=1 https://github.com/duc01226/EasyPlatform "$T" && mkdir -p ~/.claude/skills && cp -r "$T/.claude/skills/prompt-expand" ~/.claude/skills/duc01226-easyplatform-prompt-expand && rm -rf "$T"
`.claude/skills/prompt-expand/SKILL.md`

<!-- SYNC:critical-thinking-mindset -->
[IMPORTANT] Use `TaskCreate` to break ALL work into small tasks BEFORE starting.
<!-- /SYNC:critical-thinking-mindset -->
<!-- SYNC:ai-mistake-prevention -->
Critical Thinking Mindset — Apply critical thinking and sequential thinking. Every claim needs traced proof; confidence must exceed 80% to act. Anti-hallucination: never present a guess as fact — cite sources for every claim, admit uncertainty freely, self-check output for errors, cross-reference independently, and stay skeptical of your own confidence — certainty without evidence is the root of all hallucination.
<!-- /SYNC:ai-mistake-prevention -->
AI Mistake Prevention — Failure modes to avoid on every task:
- Check downstream references before deleting. Deleting components causes documentation and code staleness cascades. Map all referencing files before removal.
- Verify AI-generated content against actual code. AI hallucinates APIs, class names, and method signatures. Always grep to confirm existence before documenting or referencing.
- Trace full dependency chain after edits. Changing a definition misses downstream variables and consumers derived from it. Always trace the full chain.
- Trace ALL code paths when verifying correctness. Confirming code exists is not confirming it executes. Always trace early exits, error branches, and conditional skips — not just happy path.
- When debugging, ask "whose responsibility?" before fixing. Trace whether bug is in caller (wrong data) or callee (wrong handling). Fix at responsible layer — never patch symptom site.
- Assume existing values are intentional — ask WHY before changing. Before changing any constant, limit, flag, or pattern: read comments, check git blame, examine surrounding code.
- Verify ALL affected outputs, not just the first. Changes touching multiple stacks require verifying EVERY output. One green check is not all green checks.
- Holistic-first debugging — resist nearest-attention trap. When investigating any failure, list EVERY precondition first (config, env vars, DB names, endpoints, DI registrations, data preconditions), then verify each against evidence before forming any code-layer hypothesis.
- Surgical changes — apply the diff test. Bug fix: every changed line must trace directly to the bug. Don't restyle or improve adjacent code. Enhancement task: implement improvements AND announce them explicitly.
- Surface ambiguity before coding — don't pick silently. If request has multiple interpretations, present each with effort estimate and ask. Never assume all-records, file-based, or more complex path.
## Quick Summary
Goal: Two-phase restoration of any caveman-compressed markdown file: (1) Language Expansion — reconstruct fluent, grammatically correct English from compressed text while preserving ALL semantic content; (2) Prompt Enhancement — apply AI attention anchoring so AI reads and follows all instructions.
Workflow:
- Read — Read the target file completely
- Expand — Apply language expansion pass (Phase 1)
- Enhance — Apply prompt enhancement transforms (Phase 2)
- Verify — No semantic loss, correct structure, rule density ≥ pre-expansion
Key Rules:
- Expand FIRST, enhance SECOND — expansion restores readability; enhancement then structures it for AI attention
- Preserve ALL facts, constraints, logical steps, numbers, and technical terms exactly — never invent or omit content
- Post-expansion rule density (MUST ATTENTION/NEVER/ALWAYS per 100 lines) must be ≥ pre-expansion
- Language expansion applies to prose only — never modify code blocks, YAML, structured tables, or SYNC tags
- Expand for clarity and flow, not verbosity — natural English, not padded English
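The rule-density rule above can be checked mechanically. A minimal sketch in shell; the `density` function name and the target file are illustrative, not part of the skill itself:

```shell
# Count anchor keywords (MUST ATTENTION / NEVER / ALWAYS) per 100 lines.
# Run before and after expansion; the "after" value must not be lower.
density() {
  local rules lines
  rules=$(grep -cE 'MUST ATTENTION|NEVER|ALWAYS' "$1")
  lines=$(wc -l < "$1")
  awk -v r="$rules" -v l="$lines" \
    'BEGIN { printf "%d rules / %d lines = %.1f per 100\n", r, l, 100 * r / l }'
}
```

Usage: `density SKILL.md` prints something like `12 rules / 340 lines = 3.5 per 100`.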
<!-- /SYNC:output-quality-principles -->Output Quality — Token efficiency without sacrificing quality.
- No inventories/counts — AI can `grep | wc -l`. Counts go stale instantly
- No directory trees — AI can `ls`/`glob`. Use 1-line path conventions
- No TOCs — AI reads linearly. TOC wastes tokens
- No examples that repeat what rules say — one example only if non-obvious
- Lead with answer, not reasoning. Skip filler words and preamble
- Sacrifice grammar for concision in reports
- Unresolved questions at end, if any
## Target File
Expand and enhance this file: <target>$ARGUMENTS</target>
If no file is specified, ask via `AskUserQuestion`. If raw caveman text is passed instead of a file path, apply language expansion directly and output the result.
## Phase 1: Language Expansion
Convert caveman-compressed text back into proper, fluent English. The source text uses very short sentences, no connectives, active voice, concrete language, and minimal articles.
### Expansion Rules
#### What to Add
| Category | Examples | Guidance |
|---|---|---|
| Articles | a, an, the | Add where natural; default to `the` for specific things, `a`/`an` for general |
| Connectives | because, therefore, however, additionally, which means, in order to | Use to show logical relationships between sentences |
| Auxiliary verbs | is, are, was, were, have, has, does | Restore where grammatically required |
| Prepositions | of, for, to, in, on, at | Add back when they clarify relationships |
| Pronouns | it, this, that, they | Restore when referencing a previously mentioned noun |
| Subordinate clauses | "which allows...", "so that...", "when..." | Use to merge short choppy sentences into natural flow |
#### What to Preserve Exactly
| Category | Why |
|---|---|
| All nouns and noun phrases | Core semantic units — never paraphrase |
| All main verbs | Actions must remain unchanged |
| All adjectives | Meaning-bearing; do not substitute synonyms |
| Numbers and quantifiers | Exact values matter — e.g. "at least 20", ">80%" |
| Uncertainty qualifiers | Intentional hedges — never drop or strengthen them |
| Negations | Critical for correctness — never drop or invert |
| Technical and domain terms | Never simplify or paraphrase domain language |
| `file:line` references and paths | Exact paths must be preserved verbatim |
| Names and titles | Proper nouns — unchanged |
| Time and frequency words | Preserve exactly |
### Connective Selection Guide
Use connectives that accurately reflect the logical relationship — do not add connectives arbitrarily.
| Relationship | Connectives to use |
|---|---|
| Cause → Effect | because, since, which causes, leading to, as a result |
| Contrast | however, but, although, despite, on the other hand |
| Addition | additionally, furthermore, also, in addition, and |
| Sequence | first, then, next, finally, after, before |
| Purpose | in order to, so that, to enable, to ensure |
| Condition | if, when, unless, provided that, given that |
| Clarification | specifically, that is, in other words, which means |
### Sentence Expansion Process
For each compressed sentence or bullet:
- Identify the core subject-verb-object — this is non-negotiable content
- Restore articles — add `the` for specific referents, `a`/`an` for general
- Restore auxiliary verbs — `was designed`, `is required`, `has been removed`
- Add connectives — merge related short sentences into one fluent sentence where natural
- Restore prepositions — add `of`, `for`, `to` where they clarify relationships
- Check length — target 10-25 words per sentence for readability
### Expansion Examples
| Compressed | Expanded | Notes |
|---|---|---|
| "System designed process data efficiently." | "The system was designed to process data efficiently." | Added: the, was, to |
| "Removes predictable grammar preserving unpredictable content." | "It removes predictable grammar while preserving the unpredictable content." | Added: It, while, the |
| "At least 20 people." | "There were at least 20 people." | Restored existential construction; kept quantifier exact |
| "Made from wood and metal." | "It is made from wood and metal." | Added: It, is; kept (relationship preposition) |
| "Method compressing LLM contexts." | "This is a semantic compression method for LLM contexts." | Fully restored noun phrase |
| "Confidence >80% act." | "When confidence exceeds 80%, proceed to act." | Restored conditional structure |
### Expansion Scope
Apply expansion to:
- Prose paragraphs and explanatory text
- Bullet point descriptions
- Rule statements (restore full imperative sentences)
- Section introductions and transitions
Do NOT modify:
- Code blocks (any language)
- YAML frontmatter
- Structured tables (expand cell values only if they are prose fragments)
- `<!-- SYNC -->` tags and their contents
- `file:line` references and paths
- Frontmatter fields
## Phase 2: Prompt Enhancement
Applies after expansion. Source: Anthropic prompt engineering guide, Stanford "lost-in-the-middle" research, 2025-2026 LLM context optimization studies.
<!-- SYNC:context-engineering-principles --><!-- /SYNC:context-engineering-principles --> <!-- SYNC:prompt-enhancement-transforms-base -->Context Engineering Principles — Research-backed principles for prompt quality. Source: Anthropic prompt engineering guide, Stanford "lost-in-the-middle" research, 2025-2026 LLM context optimization studies.
- Primacy-Recency Effect — LLM performance drops 15-47% for middle-context information (Stanford). AI attention peaks at first/last 10% of text. Action: Place the 3 most critical rules in both the first 5 lines AND the last 5 lines of every prompt. Queries at end improve quality by up to 30% (Anthropic).
- High-Signal Density — Anthropic: "Identify the smallest collection of high-signal tokens that maximize the probability of the desired outcome." Action: Every line should change AI behavior. If removing a line doesn't change output → cut it. Target ≥8 rules (MUST ATTENTION/NEVER/ALWAYS) per 100 lines.
- Context Rot — LLM performance degrades as context length grows — even when all content is relevant. Compression (5-20x) maintains or improves accuracy while saving 70-94% tokens. Action: Compress aggressively. Shorter, denser prompts outperform longer, diluted ones.
- Structured > Prose — Tables, bullets, XML/markdown parse faster than paragraphs. Constrained formats reduce error rates vs free-text. Action: Convert narrative to tables/bullets. Use markdown headers for semantic sections.
- RCCF Framework — Modern LLMs (2025+) already know how to reason. What they need: Role (personality), Context (grounding), Constraints (guardrails), Format (structure). Constraints and format matter more than verbose instructions.
- Checkbox Avoidance — `[ ]` syntax triggers mechanical compliance — AI ticks boxes without reasoning. Bullet rules force reading and evaluation. Action: Replace `- [ ] Check X` with `- MUST ATTENTION verify X`.
- Example Economy — 3-5 examples optimal for few-shot; diminishing returns after. Action: 1 best example per pattern. Use BAD→GOOD pairs (2-3 lines each) for anti-patterns.
- Deferred Tool Loading — Claude Code delays loading tool definitions when they exceed 10% of context window. Action: Keep injected docs well under 10% of context budget. Docs exceeding ~3,000 lines are too large for injection — split or compress.
- Rule Density Verification — Post-optimization rule count (MUST ATTENTION/NEVER/ALWAYS) must be ≥ pre-optimization count. Compression should preserve or increase density, never decrease it. Action: Count before and after every optimization pass.
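The verification in the last principle can be scripted. A rough sketch, assuming the pre- and post-optimization versions of the prompt are both available as files (the `check_density` name is hypothetical):

```shell
# Compare rule counts before and after an optimization pass;
# fail (non-zero exit) if the optimized file lost density anchors.
check_density() {
  local before after
  before=$(grep -cE 'MUST ATTENTION|NEVER|ALWAYS' "$1")
  after=$(grep -cE 'MUST ATTENTION|NEVER|ALWAYS' "$2")
  if [ "$after" -ge "$before" ]; then
    echo "OK: $before -> $after"
  else
    echo "FAIL: $before -> $after"
    return 1
  fi
}
```

The non-zero exit on failure lets the check gate a pipeline or pre-commit hook.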
<!-- /SYNC:prompt-enhancement-transforms-base --> <!-- SYNC:shared-protocol-duplication-policy -->Prompt Enhancement Transforms (Base) — Transforms 1-3 are identical across `prompt-expand`/`prompt-enhance`. Transform 4 is per-skill (conciseness pass for enhance; structural clarity pass for expand) and stays local to each skill.

### Transform 1: Inline Summaries for READ References
Problem: AI sees `MUST ATTENTION READ file.md` and skips it. Solution: Add a 2-3 line summary of key rules BEFORE the read instruction.

Before:

    MUST ATTENTION READ .claude/protocols/evidence.md

After:

    > **Evidence-Based Reasoning** — Speculation is FORBIDDEN. Every claim requires `file:line` proof.
    > Confidence: >95% recommend freely, 80-94% with caveats, <80% DO NOT recommend.

    MUST ATTENTION READ .claude/protocols/evidence.md for full details.

Scope rules:
- `.claude/` protocol files → always add an inline summary (stable, belongs to framework)
- `docs/project-reference/` files → NO inline summary (varies per project, auto-injected by hooks). Add: `(content auto-injected by hook — check for [Injected: ...] header before reading)`

### Transform 2: Top Summary Section
Required structure (first 20 lines after frontmatter):
    > **[IMPORTANT]** TaskCreate instruction...
    > **Protocol Name** — [inline summary]. MUST ATTENTION READ `path` for details.

    ## Quick Summary

    **Goal:** [One sentence — what this skill achieves]

    **Workflow:**
    1. **[Step]** — [description]

    **Key Rules:**
    - [Most critical constraint]

### Transform 3: Bottom Closing Reminders
Add at the very end of the file:
    ---

    ## Closing Reminders

    - **IMPORTANT MUST ATTENTION** [echo rule #1 from the top section]
    - **IMPORTANT MUST ATTENTION** [echo rule #2]
    - **IMPORTANT MUST ATTENTION** [echo rule #3]
    - **IMPORTANT MUST ATTENTION** add a final review task to verify work quality

Pick 3-5 rules AI most commonly violates. The bottom section re-anchors attention after the long middle.
<!-- /SYNC:shared-protocol-duplication-policy -->Shared Protocol Duplication Policy — Inline protocol content in skills (wrapped in `<!-- SYNC:tag -->`) is INTENTIONAL duplication. Do NOT extract, deduplicate, or replace with file references. AI compliance drops significantly when protocols are behind file-read indirection. To update: edit `.claude/skills/shared/sync-inline-versions.md` first, then grep `SYNC:protocol-name` and update all occurrences.
### Transform 4: Structural Clarity Pass
After expansion, apply structural improvements:
Convert to structured format:
- Prose paragraphs listing rules → bullet lists
- Enumerated conditions → decision tables
- Before/after examples → two-column tables
Keep as prose:
- Explanatory context (why a rule exists)
- Narrative descriptions of workflows
- Anti-pattern stories and rationale
## Process
### Step 1: Read and Analyze
- Read the target file completely
- Record: current line count, rule density (MUST ATTENTION/NEVER/ALWAYS count per 100 lines)
- Identify compressed prose regions — very short sentences, missing articles, no connectives
- List all READ references → classify as `.claude/` (needs inline summary) or `docs/` (skip)
- Note: missing Quick Summary, missing Closing Reminders, tables needing cell expansion
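The classification step above can be sketched with grep. `classify_reads` is a hypothetical helper; the path prefixes follow the scope rules described in this file:

```shell
# List READ references in a prompt file and classify them:
# .claude/ → needs an inline summary, docs/ → skip, anything else → flag.
classify_reads() {
  grep -oE 'READ [^ ]+' "$1" | while read -r _ path; do
    case "$path" in
      .claude/*) echo "SUMMARIZE $path" ;;
      docs/*)    echo "SKIP $path" ;;
      *)         echo "OTHER $path" ;;
    esac
  done
}
```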
### Step 2: Language Expansion Pass
- Work through each prose section sequentially
- For each compressed sentence or bullet: restore articles, auxiliary verbs, connectives, prepositions
- Merge short choppy sentences into natural flowing sentences where logical relationship is clear
- Skip code blocks, YAML, SYNC tags, and file paths entirely
- After each paragraph, verify that all original facts and constraints are still present
### Step 3: Create Inline Summaries
For each `.claude/` protocol reference:
- Read the referenced file
- Extract 2-3 key rules
- Write the blockquote inline summary
- Keep the MUST ATTENTION READ instruction on the next line
### Step 4: Add/Fix Top Section
- If Quick Summary is missing → create one from the file's content
- If present but weak → strengthen with Goal, Workflow, Key Rules structure
- Ensure protocol summaries appear before the Quick Summary block
### Step 5: Add/Fix Bottom Section
- If Closing Reminders are missing → add the standard section
- Choose rules that AI most commonly skips (evidence-based, task creation, pattern search)
- Remove old "IMPORTANT Task Planning Notes" sections if superseded by Closing Reminders
### Step 6: Verify
| Check | Pass Condition |
|---|---|
| No YAML corruption | Frontmatter intact and parseable |
| No semantic loss | All original facts, constraints, numbers, paths present |
| Rule density | Post-expansion ≥ pre-expansion (count MUST ATTENTION/NEVER/ALWAYS) |
| Fluency | No remaining 2-5 word telegraphic sentences in prose regions |
| Formatting | Blank lines between sections, headers correct |
| READ classification | `.claude/` → inline summary added, `docs/` → skipped |
| Code blocks untouched | No changes inside ``` fences |
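The fluency row can be approximated mechanically. A crude sketch that flags 2-5 word sentence fragments as expansion candidates; it does not skip code fences, so hits are candidates to review, not automatic failures:

```shell
# Split text on sentence-ending punctuation and print any fragment
# of 2-5 words as a possible leftover telegraphic sentence.
telegraphic() {
  tr '.!?' '\n' < "$1" | awk 'NF >= 2 && NF <= 5 { print "CHECK: " $0 }'
}
```

Usage: `telegraphic SKILL.md` lists fragments such as `CHECK: System designed process data`.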
## Closing Reminders
- IMPORTANT MUST ATTENTION apply language expansion FIRST before any structural enhancement — never skip Phase 1
- IMPORTANT MUST ATTENTION preserve ALL facts, numbers, technical terms, and `file:line` references exactly — never invent or paraphrase content
- IMPORTANT MUST ATTENTION never modify code blocks, YAML frontmatter, structured tables, or SYNC tags during expansion
- IMPORTANT MUST ATTENTION verify rule density post-expansion ≥ pre-expansion — expansion must not dilute signal below the original
- IMPORTANT MUST ATTENTION apply primacy-recency anchoring — 3 critical rules in first 5 AND last 5 lines of every enhanced file
- IMPORTANT MUST ATTENTION add inline summaries only for `.claude/` protocol files, never for `docs/` project-specific files
- IMPORTANT MUST ATTENTION cite `file:line` evidence for every claim (confidence >80% to act). NEVER speculate without proof.
- IMPORTANT MUST ATTENTION READ `CLAUDE.md` before starting
<!-- SYNC:critical-thinking-mindset:reminder -->
- MUST ATTENTION apply critical thinking — every claim needs traced proof, confidence >80% to act. Anti-hallucination: never present guess as fact.
<!-- /SYNC:critical-thinking-mindset:reminder -->
<!-- SYNC:ai-mistake-prevention:reminder -->
- MUST ATTENTION apply AI mistake prevention — holistic-first debugging, fix at responsible layer, surface ambiguity before coding, re-read files after compaction.
<!-- /SYNC:ai-mistake-prevention:reminder -->