Claude-skill-registry engineering-claude-context
Curates context, optimizes prompts with XML, and manages extended thinking for Anthropic Claude models. Use when building Claude-based agents, designing system prompts, or handling long-context tasks.
install
source · Clone the upstream repo
git clone https://github.com/majiayu000/claude-skill-registry
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/majiayu000/claude-skill-registry "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/data/engineering-claude-context" ~/.claude/skills/majiayu000-claude-skill-registry-engineering-claude-context && rm -rf "$T"
manifest:
skills/data/engineering-claude-context/SKILL.mdsource content
Claude Context Engineering
Instructions
Follow these strategies to maximize reliability and performance for Anthropic Claude models within limited attention budgets.
-
Curate High-Signal Context
- Treat context as a finite resource; every token dilutes attention.
- Prune low-value information (e.g., repetitive logs, unused tool outputs).
- Use "Just-in-Time" Retrieval: Provide lightweight identifiers (file lists, summaries) initially, allowing Claude to load full details only when needed.
-
Structure with XML Tags
- Use XML tags to strictly delineate prompt sections (Claude is optimized for this).
- Common tags:
,<instructions>
,<context>
,<tools>
,<examples>
.<formatting> - Nest tags for hierarchy (e.g.,
).<examples><example>...</example></examples> - Reference tags explicitly in instructions (e.g., "Analyze the data in
").<context>
-
Optimize for Long Horizons
- Compaction: Periodically summarize conversation history to free up context while preserving key state (decisions, bugs).
- Memory: Implement an external "notebook" (e.g.,
) where Claude reads/writes persistent state, freeing it from the context window.scratchpad.md - Sub-Agents: Delegate distinct, context-heavy sub-tasks to ephemeral sub-agents that return only distilled results.
-
Leverage Extended Thinking
- Use for: Complex STEM problems, constraint optimization, and multi-step strategic frameworks.
- Prompting: Use open-ended, high-level instructions ("Think thoroughly about X") rather than rigid step-by-step constraints for reasoning.
- Few-Shot: Include
blocks in your examples to demonstrate the reasoning process, not just the final output.<thinking> - Budget: Start small (1024 tokens) and scale up for complexity (up to 32k+ with batch processing).
-
Design Token-Efficient Tools
- Ensure tools return only necessary data (e.g.,
output vs. full file).grep - Keep tool definitions clear and distinct to avoid ambiguity.
- Ensure tools return only necessary data (e.g.,
Critical Rules
- No Brittle Logic: Avoid hardcoding complex if/else chains in prompts; use examples ("pictures") to guide behavior instead.
- Don't Prefill Thinking: Never prefill Claude's response when using Extended Thinking mode.
- Progressive Disclosure: Don't dump all data at once. Let Claude discover information layer by layer (hierarchy -> summary -> detail).
- Clear Separation: Always separate
fromSystem Instructions
to prevent prompt injection and confusion.User Data
Checklist
- Tags: Are distinct sections wrapped in XML tags?
- Efficiency: Is the initial context free of unnecessary bulk?
- Retrieval: Does Claude have tools to fetch details "just-in-time"?
- Memory: Is there a mechanism (compaction or external file) for long-term persistence?
- Examples: Do few-shot examples demonstrate the reasoning (if using thinking) or format required?
- Thinking: Is Extended Thinking enabled only for complex/planning tasks, not simple lookups?
Detailed Guidance
For deep dives on context anatomy, compaction strategies, and extended thinking patterns, see REFERENCE.md.