Claude-skill-registry context-engineering

Principles for designing context-efficient AI agents and tools. Use when designing LLM tools, agents, MCP servers, or multi-agent systems.

install
source · Clone the upstream repo
git clone https://github.com/majiayu000/claude-skill-registry
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/majiayu000/claude-skill-registry "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/data/context-engineering-byunk-minimal-claude-code" ~/.claude/skills/majiayu000-claude-skill-registry-context-engineering && rm -rf "$T"
manifest: skills/data/context-engineering-byunk-minimal-claude-code/SKILL.md

Context Engineering

Principles for maximizing LLM effectiveness by treating context as a finite resource.

Core Principle

Find the smallest possible set of high-signal tokens that maximize the likelihood of your desired outcome.

The Context Budget

LLMs have an "attention budget" that depletes with each token. Context rot causes recall accuracy to decrease as token count grows. Every design decision should optimize for signal density.

Quick Reference

| Challenge | Strategy | Reference |
| --- | --- | --- |
| Too many tools | Curate minimal viable set | Tool |
| Ambiguous tool selection | Self-contained, unambiguous tools | Tool |
| Context pollution over time | Compaction and summarization | Agent |
| Long-horizon tasks | External memory and note-taking | Agent |
| Exceeding single context limits | Sub-agent architectures | Multi-Agent |
| MCP server bloat | Token-efficient responses | MCP |
| Measuring effectiveness | End-state evaluation | Evaluation |

Single vs Multi-Agent

Multi-agent architectures add roughly 15× token overhead. Use a single agent unless:

| Factor | Single Agent | Multi-Agent |
| --- | --- | --- |
| Parallelization | Sequential steps | Independent subtasks |
| Context size | Fits in window | Exceeds single context |
| Tool complexity | Focused toolset | Many specialized tools |
| Dependencies | Steps depend on each other | Work can be isolated |

Default to a single agent. Add agents only when parallelization or context limits demand it.
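The factors in the table can be sketched as a small decision helper. This is an illustrative sketch, not part of the skill: the `TaskProfile` fields and the idea of comparing estimated tokens against the window are assumptions about how one might operationalize the checklist.

```python
from dataclasses import dataclass

@dataclass
class TaskProfile:
    """Rough characterization of a task (hypothetical fields)."""
    independent_subtasks: int  # subtasks that could run in parallel
    estimated_tokens: int      # expected context footprint of the work
    context_window: int        # model's usable context window

def choose_architecture(task: TaskProfile) -> str:
    """Default to a single agent; escalate only when the table's
    multi-agent conditions hold (context overflow or parallelism)."""
    if task.estimated_tokens > task.context_window:
        return "multi-agent"   # exceeds single context
    if task.independent_subtasks > 1:
        return "multi-agent"   # independent subtasks worth parallelizing
    return "single-agent"      # the default

print(choose_architecture(TaskProfile(1, 50_000, 200_000)))  # single-agent
```

In practice the inputs are estimates, but the ordering matters: the single-agent branch is the fall-through, matching "default to a single agent."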

Decision Checklists

Before Adding to Context

  • Is this the minimum information needed?
  • Can an agent discover this just-in-time instead?
  • Does this justify its token cost?

Tool Design

  • Can a human definitively say which tool to use?
  • Does each tool have a distinct, non-overlapping purpose?
  • Are responses token-efficient with high signal?
  • Do error messages guide toward solutions?
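One way to apply this checklist is in the tool definition itself. The sketch below is hypothetical (the tool name, schema layout, and toy index are illustrative, not from any real API): the description draws a hard boundary against a sibling tool, the error message points at a fix, and the response returns identifiers rather than file contents.

```python
# Hypothetical tool definition applying the checklist: distinct purpose,
# unambiguous description, token-efficient responses, guiding errors.
search_tool = {
    "name": "search_code",
    "description": (
        "Search file CONTENTS with a regex. Use a file-name matching tool "
        "instead when you only need to match file NAMES."  # non-overlapping
    ),
    "parameters": {
        "pattern": {"type": "string", "description": "Regex to match"},
        "max_results": {"type": "integer", "description": "Cap on matches"},
    },
}

# Toy backing index so the example runs without a real code search.
fake_index = {"TODO": ["src/main.py:12", "src/util.py:3"]}

def run_search(pattern: str, max_results: int = 20) -> dict:
    if not pattern:
        # The error guides toward a solution instead of just failing.
        return {"error": "pattern is empty; pass a regex such as 'def \\w+'"}
    hits = fake_index.get(pattern, [])
    # Token-efficient response: identifiers only, never full file contents.
    return {"matches": hits[:max_results], "total": len(hits)}

print(run_search("TODO", max_results=1))
# → {'matches': ['src/main.py:12'], 'total': 2}
```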

Agent Design

  • Does the system prompt strike the right altitude?
  • Are there mechanisms for compaction when context grows?
  • Is external memory used for long-horizon tracking?
  • Are canonical examples provided instead of exhaustive rules?

Multi-Agent

  • Is the task parallelizable enough to justify coordination overhead?
  • Do sub-agents return condensed summaries (not raw results)?
  • Is there clear separation of concerns between agents?

Key Techniques

Just-in-Time Retrieval

Keep lightweight identifiers (paths, queries, links). Load data dynamically at runtime rather than pre-loading everything upfront.
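A minimal sketch of the identifier-versus-payload split, using file paths as the lightweight identifiers (any reference — a query, a URL, a row id — works the same way):

```python
import pathlib
import tempfile

# Set up some files standing in for large external data.
workdir = pathlib.Path(tempfile.mkdtemp())
(workdir / "notes.txt").write_text("alpha " * 1000)  # large payload
(workdir / "config.txt").write_text("beta")

# In-context state: cheap identifiers, not the payloads themselves.
identifiers = sorted(p.name for p in workdir.iterdir())

def load(name: str) -> str:
    """Retrieve the data just-in-time, only when a step needs it."""
    return (workdir / name).read_text()

print(identifiers)              # ['config.txt', 'notes.txt']
print(load("config.txt"))       # beta
```

The agent's context holds two short names; the 6,000-character payload is only paid for if a later step actually calls `load`.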

Progressive Disclosure

Let agents discover context through exploration. File sizes suggest complexity; naming hints at purpose. Each interaction yields context for the next decision.
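As a sketch, an agent can cheaply surface the size and naming signals before reading anything (the file names and the `survey` helper are illustrative assumptions):

```python
import pathlib
import tempfile

root = pathlib.Path(tempfile.mkdtemp())
(root / "main.py").write_text("x = 1\n" * 500)        # big file: likely complex
(root / "constants.py").write_text("PI = 3.14159\n")  # small, self-describing name

def survey(directory: pathlib.Path) -> list[tuple[str, int]]:
    """First pass: names and sizes only. Each result informs the next
    decision about what (if anything) is worth actually reading."""
    return sorted((p.name, p.stat().st_size) for p in directory.iterdir())

for name, size in survey(root):
    print(f"{name}: {size} bytes")
```

The survey costs a handful of tokens; reading `main.py` wholesale would cost thousands. Exploration earns the right to spend.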

Compaction

Summarize conversations nearing limits. Preserve architectural decisions and critical details; discard redundant tool outputs and verbose messages.
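The mechanism can be sketched as a budget check over the message list. Assumptions here: the budget is character-based for simplicity, the last two turns are kept verbatim, and `summarize` is a placeholder for a real model call that preserves decisions and drops verbose tool output.

```python
def compact(messages: list[dict], budget: int, summarize) -> list[dict]:
    """Summarize older messages once a (rough, character-based) budget
    is exceeded, keeping the most recent turns verbatim."""
    total = sum(len(m["content"]) for m in messages)
    if total <= budget:
        return messages                       # nowhere near the limit
    old, recent = messages[:-2], messages[-2:]  # keep last 2 turns intact
    summary = {"role": "system", "content": summarize(old)}
    return [summary] + recent

history = [
    {"role": "user", "content": "x" * 500},
    {"role": "assistant", "content": "y" * 500},
    {"role": "user", "content": "final question"},
]
# Trivial stand-in summarizer; a real one is an LLM prompt.
compacted = compact(history, budget=600,
                    summarize=lambda ms: f"[summary of {len(ms)} messages]")
print(len(compacted), compacted[0]["content"])
```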

Structured Note-Taking

Persist notes to external memory (to-do lists, NOTES.md). Pull back into context when needed. Tracks progress without exhausting working context.
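A minimal sketch of the pattern: append progress to a file outside the context window, and read it back only when needed (the `record`/`recall` names and the NOTES.md location are illustrative).

```python
import pathlib
import tempfile

notes_path = pathlib.Path(tempfile.mkdtemp()) / "NOTES.md"

def record(note: str) -> None:
    """Append progress to external memory instead of holding it in context."""
    with notes_path.open("a") as f:
        f.write(f"- {note}\n")

def recall() -> str:
    """Pull the notes back into context only when they are needed."""
    return notes_path.read_text() if notes_path.exists() else ""

record("migrated auth module")
record("TODO: update tests")
print(recall())
```

Between `record` calls the notes cost zero context; `recall` makes the trade explicit, re-paying for them only at the moment they matter.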

Sub-Agent Distribution

Delegate focused tasks to specialized agents with clean context windows. Each sub-agent explores extensively but returns only condensed summaries (1000-2000 tokens).
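The contract can be sketched with a stand-in sub-agent: it "explores" far more than it reports, and the orchestrator only ever sees the condensed summary (the function and task names are hypothetical; a real sub-agent is a separate model call with its own context).

```python
def explore_subtask(task: str) -> str:
    """Stand-in for a sub-agent with its own clean context window.
    It may gather extensive raw findings internally..."""
    raw_findings = [f"{task}: detail {i}" for i in range(200)]
    # ...but returns only a condensed summary, never the raw results.
    return f"{task}: {len(raw_findings)} findings; first = {raw_findings[0]}"

subtasks = ["audit logging", "scan dependencies"]
summaries = [explore_subtask(t) for t in subtasks]
for s in summaries:
    print(s)
```

The 200-item `raw_findings` list never crosses the boundary; each summary stays well under the 1000-2000 token ceiling the section prescribes.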

The Golden Rule

Do the simplest thing that works. Start minimal, add complexity only based on observed failure modes.

References

  • Tool - Building self-contained, token-efficient tools
  • Agent - Single agent context management
  • Multi-Agent - Coordinating multiple agents
  • MCP - Model Context Protocol best practices
  • Evaluation - Measuring context engineering effectiveness

Examples

Complete examples from Claude Code:

Tool Descriptions

  • Bash - Boundaries, when NOT to use, good/bad examples
  • Edit - Prerequisites, error guidance, concise design
  • Grep - Exclusivity, parameter examples, output modes

Agent Prompts

  • Explore - Role definition, constraints, strengths
  • Plan - Process steps, output format, boundaries
  • Summarization - Compaction structure, what to preserve