# context-mode

Clone the repo:

```bash
git clone https://github.com/mksglu/context-mode
```

Or copy just the skill into `~/.claude/skills`:

```bash
T=$(mktemp -d) && git clone --depth=1 https://github.com/mksglu/context-mode "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/context-mode" ~/.claude/skills/mksglu-claude-context-mode-context-mode && rm -rf "$T"
```

`skills/context-mode/SKILL.md`

## Context Mode: Default for All Large Output
## MANDATORY RULE

<context_mode_logic>
  <mandatory_rule>
    Default to context-mode for ALL commands.
    Only use Bash for guaranteed-small-output operations.
  </mandatory_rule>
</context_mode_logic>
Bash whitelist (safe to run directly):

- File mutations: `mkdir`, `mv`, `cp`, `rm`, `touch`, `chmod`
- Git writes: `git add`, `git commit`, `git push`, `git checkout`, `git branch`, `git merge`
- Navigation: `cd`, `pwd`, `which`
- Process control: `kill`, `pkill`
- Package management: `npm install`, `npm publish`, `pip install`
- Simple output: `echo`, `printf`
Everything else → `ctx_execute` or `ctx_execute_file`: any command that reads, queries, fetches, lists, logs, tests, builds, diffs, inspects, or calls an external service. This includes ALL CLIs (gh, aws, kubectl, docker, terraform, wrangler, fly, heroku, gcloud, etc.) — there are thousands and we cannot list them all.
When uncertain, use context-mode. Every KB of unnecessary context reduces the quality and speed of the entire session.
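The routing rule above can be sketched as a simple prefix check. This is a hypothetical helper to illustrate the logic, not part of the skill itself:

```python
# Hypothetical sketch of the whitelist routing rule; the skill states
# this in prose, so the helper name and structure are illustrative only.
WHITELIST_PREFIXES = (
    "mkdir", "mv", "cp", "rm", "touch", "chmod",            # file mutations
    "git add", "git commit", "git push", "git checkout",    # git writes
    "git branch", "git merge",
    "cd", "pwd", "which",                                   # navigation
    "kill", "pkill",                                        # process control
    "npm install", "npm publish", "pip install",            # package management
    "echo", "printf",                                       # simple output
)

def route(command: str) -> str:
    """Return 'bash' only for guaranteed-small-output commands."""
    cmd = command.strip()
    if any(cmd == p or cmd.startswith(p + " ") for p in WHITELIST_PREFIXES):
        return "bash"
    return "ctx_execute"  # default: everything else goes through context-mode

print(route("git commit -m 'fix'"))  # bash
print(route("gh pr list"))           # ctx_execute
```

Note the default branch: anything not explicitly whitelisted routes to context-mode, matching the "when uncertain" rule.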
## Decision Tree

```
About to run a command / read a file / call an API?
│
├── Command is on the Bash whitelist (file mutations, git writes, navigation, echo)?
│   └── Use Bash
│
├── Output MIGHT be large or you're UNSURE?
│   └── Use context-mode ctx_execute or ctx_execute_file
│
├── Fetching web documentation or HTML page?
│   └── Use ctx_fetch_and_index → ctx_search
│
├── Using Playwright (navigate, snapshot, console, network)?
│   └── ALWAYS use filename parameter to save to file, then:
│       browser_snapshot(filename) → ctx_index(path) or ctx_execute_file(path)
│       browser_console_messages(filename) → ctx_execute_file(path)
│       browser_network_requests(filename) → ctx_execute_file(path)
│       ⚠ browser_navigate returns a snapshot automatically — ignore it,
│         use browser_snapshot(filename) for any inspection.
│       ⚠ Playwright MCP uses a SINGLE browser instance — NOT parallel-safe.
│         For parallel browser ops, use agent-browser via execute instead.
│
├── Using agent-browser (parallel-safe browser automation)?
│   └── Run via execute (shell) — each call gets its own subprocess:
│       execute("agent-browser open example.com && agent-browser snapshot -i -c")
│       ✓ Supports sessions for isolated browser instances
│       ✓ Safe for parallel subagent execution
│       ✓ Lightweight accessibility tree with ref-based interaction
│
├── Processing output from another MCP tool (Context7, GitHub API, etc.)?
│   ├── Output already in context from a previous tool call?
│   │   └── Use it directly. Do NOT re-index with ctx_index(content: ...).
│   ├── Need to search the output multiple times?
│   │   └── Save to file via ctx_execute, then ctx_index(path) → ctx_search
│   └── One-shot extraction?
│       └── Save to file via ctx_execute, then ctx_execute_file(path)
│
└── Reading a file to analyze/summarize (not edit)?
    └── Use ctx_execute_file (file loads into FILE_CONTENT, not context)
```
## When to Use Each Tool

| Situation | Tool | Example |
|---|---|---|
| Hit an API endpoint | `ctx_execute` | `fetch('http://localhost:3000/api/orders')` |
| Run CLI that returns data | `ctx_execute` | `gh`, `aws`, `wrangler` |
| Run tests | `ctx_execute` | `npm test`, `pytest`, `go test` |
| Git operations | `ctx_execute` | `git log`, `git diff` |
| Docker/K8s inspection | `ctx_execute` | `docker ps`, `kubectl get pods` |
| Read a log file | `ctx_execute_file` | Parse access.log, error.log, build output |
| Read a data file | `ctx_execute_file` | Analyze CSV, JSON, YAML, XML |
| Read source code to analyze | `ctx_execute_file` | Count functions, find patterns, extract metrics |
| Fetch web docs | `ctx_fetch_and_index` | Index React/Next.js/Zod docs, then search |
| Playwright snapshot | `browser_snapshot(filename)` → `ctx_index(path)` → `ctx_search` | Save to file, index server-side, query |
| Playwright snapshot (one-shot) | `browser_snapshot(filename)` → `ctx_execute_file(path)` | Save to file, extract in sandbox |
| Playwright console/network | `browser_console_messages(filename)` → `ctx_execute_file(path)` | Save to file, analyze in sandbox |
| MCP output (already in context) | Use directly | Don't re-index — it's already loaded |
| MCP output (need multi-query) | `ctx_execute` to save → `ctx_index(path)` → `ctx_search` | Save to file first, index server-side |
| Wipe indexed KB content | `ctx_purge(confirm: true)` | Permanently deletes all indexed content |
## Automatic Triggers
Use context-mode for ANY of these, without being asked:
- API debugging: "hit this endpoint", "call the API", "check the response", "find the bug in the response"
- Log analysis: "check the logs", "what errors", "read access.log", "debug the 500s"
- Test runs: "run the tests", "check if tests pass", "test suite output"
- Git history: "show recent commits", "git log", "what changed", "diff between branches"
- Data inspection: "look at the CSV", "parse the JSON", "analyze the config"
- Infrastructure: "list containers", "check pods", "S3 buckets", "show running services"
- Dependency audit: "check dependencies", "outdated packages", "security audit"
- Build output: "build the project", "check for warnings", "compile errors"
- Code metrics: "count lines", "find TODOs", "function count", "analyze codebase"
- Web docs lookup: "look up the docs", "check the API reference", "find examples"
## Language Selection

| Situation | Language | Why |
|---|---|---|
| HTTP/API calls, JSON | JavaScript | Native fetch, JSON.parse, async/await |
| Data analysis, CSV, stats | Python | csv, statistics, collections, re |
| Shell commands with pipes | Bash | grep, awk, jq, native tools |
| File pattern matching | Bash | find, wc, sort, uniq |
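As a concrete instance of the Python row above, here is a minimal sketch of the kind of analysis script you would pass to `ctx_execute`. The CSV sample is made up for illustration:

```python
# Analyze a (made-up) CSV with stdlib only, then print just the
# findings, as recommended for data analysis in Python.
import csv
import io
import statistics

raw = """order_id,quantity,price
1,2,9.99
2,-1,4.50
3,5,12.00
"""
rows = list(csv.DictReader(io.StringIO(raw)))
qtys = [int(r["quantity"]) for r in rows]
bad = [r["order_id"] for r in rows if int(r["quantity"]) < 0]

print(f"{len(rows)} rows, median qty {statistics.median(qtys)}")
print(f"Negative quantities: {', '.join(bad) or 'none'}")
```

Only the two summary lines reach stdout, which is all that enters context.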
## Search Query Strategy

- BM25 uses OR semantics — results matching more terms rank higher automatically
- Use 2-4 specific technical terms per query
- Always use the `source` parameter when multiple docs are indexed, to avoid cross-source contamination
- Partial match works: `source: "Node"` matches "Node.js v22 CHANGELOG"
- Always use the `queries` array — batch ALL search questions in ONE call: `ctx_search(queries: ["transform pipe", "refine superRefine", "coerce codec"], source: "Zod")`
- NEVER make multiple separate `ctx_search()` calls — put all queries in one array
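A toy model of the OR-semantics point above: documents matching more query terms rank higher. The real engine is BM25 (which also weights term frequency and rarity, omitted here):

```python
# Simplified term-count ranking to illustrate OR semantics; actual
# ranking is BM25 inside the index, not this naive count.
docs = {
    "a": "transform pipe schema",
    "b": "refine superRefine validation",
    "c": "transform pipe refine coerce",
}
query = ["transform", "pipe", "refine"]

scores = {k: sum(term in text.split() for term in query) for k, text in docs.items()}
ranked = sorted(scores, key=scores.get, reverse=True)
print(ranked)  # doc "c" first: it matches all three terms
```

This is why 2-4 specific terms per query works well: each extra matched term lifts the relevant chunk above partial matches.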
## External Documentation

- Always use `ctx_fetch_and_index` for external docs — NEVER `cat` or `ctx_execute` with local paths for packages you don't own
- For GitHub-hosted projects, use the raw URL: https://raw.githubusercontent.com/org/repo/main/CHANGELOG.md
- After indexing, use the `source` parameter in search to scope results to that specific document
## Critical Rules

- Always console.log/print your findings. stdout is all that enters context. No output = wasted call.
- Write analysis code, not just data dumps. Don't `console.log(JSON.stringify(data))` — analyze first, print findings.
- Be specific in output. Print bug details with IDs, line numbers, exact values — not just counts.
- For files you need to EDIT: use the normal Read tool. context-mode is for analysis, not editing.
- For Bash whitelist commands only: use Bash for file mutations, git writes, navigation, process control, package install, and echo. Everything else goes through context-mode.
- Never use `ctx_index(content: large_data)`. Use `ctx_index(path: ...)` to read files server-side. The `content` parameter sends data through context as a tool parameter — use it only for small inline text.
- Always use the `filename` parameter on Playwright tools (`browser_snapshot`, `browser_console_messages`, `browser_network_requests`). Without it, the full output enters context.
- Don't re-index data already in context. If an MCP tool returned data in a previous response, it's already loaded — use it directly or save to file first.
## Sandboxed Data Workflow

<sandboxed_data_workflow>
  <critical_rule>
    When using tools that support saving to a file: ALWAYS use the 'filename' parameter.
    NEVER return large raw datasets directly to context.
  </critical_rule>
  <workflow>
    LargeDataTool(filename: "path") → mcp__context-mode__ctx_index(path: "path") → ctx_search()
  </workflow>
</sandboxed_data_workflow>
This is the universal pattern for context preservation regardless of the source tool (Playwright, GitHub API, AWS CLI, etc.).
## Examples

### Debug an API endpoint

```javascript
const resp = await fetch('http://localhost:3000/api/orders');
const { orders } = await resp.json();

const bugs = [];
const negQty = orders.filter(o => o.quantity < 0);
if (negQty.length) bugs.push(`Negative qty: ${negQty.map(o => o.id).join(', ')}`);
const nullFields = orders.filter(o => !o.product || !o.customer);
if (nullFields.length) bugs.push(`Null fields: ${nullFields.map(o => o.id).join(', ')}`);

console.log(`${orders.length} orders, ${bugs.length} bugs found:`);
bugs.forEach(b => console.log(`- ${b}`));
```

### Analyze test output

```bash
npm test 2>&1
echo "EXIT=$?"
```

### Check GitHub PRs

```bash
gh pr list --json number,title,state,reviewDecision --jq '.[] | "\(.number) [\(.state)] \(.title) — \(.reviewDecision // "no review")"'
```

### Read and analyze a large file

```python
# FILE_CONTENT is pre-loaded by ctx_execute_file
import json
data = json.loads(FILE_CONTENT)
print(f"Records: {len(data)}")
# ... analyze and print findings
```
## Browser & Playwright Integration
When a task involves Playwright snapshots, screenshots, or page inspection, ALWAYS route through file → sandbox.
### Playwright

`browser_snapshot` returns 10K–135K tokens of accessibility tree data. Calling it without `filename` dumps all of that into context. Passing the output to `ctx_index(content: ...)` sends it into context a SECOND time as a parameter. Both are wrong.
The key insight: `browser_snapshot` has a `filename` parameter that saves to file instead of returning to context. `ctx_index` has a `path` parameter that reads files server-side. `ctx_execute_file` processes files in a sandbox. None of these touch context.
### Workflow A: Snapshot → File → Index → Search (multiple queries)

1. `browser_snapshot(filename: "/tmp/playwright-snapshot.md")` → saves to file, returns ~50B confirmation (NOT 135K tokens)
2. `ctx_index(path: "/tmp/playwright-snapshot.md", source: "Playwright snapshot")` → reads file SERVER-SIDE, indexes into FTS5, returns ~80B confirmation
3. `ctx_search(queries: ["login form email password"], source: "Playwright")` → returns only matching chunks (~300B)
Total context: ~430B instead of 270K tokens. Real 99% savings.
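The ~430B figure is just the sum of the three per-step confirmations quoted above; a quick check of the arithmetic:

```python
# Sum the per-step context costs quoted in Workflow A above.
step_bytes = {
    "snapshot confirmation": 50,   # step 1
    "index confirmation": 80,      # step 2
    "search results": 300,         # step 3
}
total = sum(step_bytes.values())
print(f"Workflow A total context cost: ~{total}B")  # ~430B
```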
### Workflow B: Snapshot → File → Execute File (one-shot extraction)

1. `browser_snapshot(filename: "/tmp/playwright-snapshot.md")` → saves to file, returns ~50B confirmation
2. `ctx_execute_file(path: "/tmp/playwright-snapshot.md", language: "javascript", code: ...)` → processes in sandbox, returns ~200B summary

The `code` argument, where `FILE_CONTENT` holds the snapshot text:

```javascript
const links = [...FILE_CONTENT.matchAll(/- link "([^"]+)"/g)].map(m => m[1]);
const buttons = [...FILE_CONTENT.matchAll(/- button "([^"]+)"/g)].map(m => m[1]);
const inputs = [...FILE_CONTENT.matchAll(/- textbox|- checkbox|- radio/g)];
console.log('Links:', links.length, '| Buttons:', buttons.length, '| Inputs:', inputs.length);
console.log('Navigation:', links.slice(0, 10).join(', '));
```
Total context: ~250B instead of 135K tokens.
### Workflow C: Console & Network (save to file if large)

- `browser_console_messages(level: "error", filename: "/tmp/console.md")` → `ctx_execute_file(path: "/tmp/console.md", ...)` or `ctx_index(path: "/tmp/console.md", ...)`
- `browser_network_requests(includeStatic: false, filename: "/tmp/network.md")` → `ctx_execute_file(path: "/tmp/network.md", ...)` or `ctx_index(path: "/tmp/network.md", ...)`
### CRITICAL: Why `filename` + `path` is mandatory

| Approach | Context cost | Correct? |
|---|---|---|
| `browser_snapshot()` → raw into context | 135K tokens | NO |
| `browser_snapshot()` → `ctx_index(content: ...)` | 270K tokens (doubled!) | NO |
| `browser_snapshot(filename)` → `ctx_index(path)` → `ctx_search` | ~430B | YES |
| `browser_snapshot(filename)` → `ctx_execute_file(path)` | ~250B | YES |
### Key Rule

ALWAYS use the `filename` parameter when calling `browser_snapshot`, `browser_console_messages`, or `browser_network_requests`. Then process via `ctx_index(path: ...)` or `ctx_execute_file(path: ...)` — never `ctx_index(content: ...)`.

Data flow: Playwright → file → server-side read → context. Never: Playwright → context → ctx_index(content) → context again.
## Subagent Usage
Subagents automatically receive context-mode tool routing via a PreToolUse hook. You do NOT need to manually add tool names to subagent prompts — the hook injects them. Just write natural task descriptions.
## Anti-Patterns

- Using `curl http://api/endpoint` via Bash → 50KB floods context. Use `ctx_execute` with fetch instead.
- Using `cat large-file.json` via Bash → entire file in context. Use `ctx_execute_file` instead.
- Using `gh pr list` via Bash → raw JSON in context. Use `ctx_execute` with a `--jq` filter instead.
- Piping Bash output through `| head -20` → you lose the rest. Use `ctx_execute` to analyze ALL data and print a summary.
- Running `npm test` via Bash → full test output in context. Use `ctx_execute` to capture and summarize.
- Calling `browser_snapshot()` WITHOUT the `filename` parameter → 135K tokens flood context. Always use `browser_snapshot(filename: "/tmp/snap.md")`.
- Calling `browser_console_messages()` or `browser_network_requests()` WITHOUT `filename` → entire output floods context. Always use the `filename` parameter.
- Passing ANY large data to `ctx_index(content: ...)` → data enters context as a parameter. Always use `ctx_index(path: ...)` to read server-side. The `content` parameter should only be used for small inline text you're composing yourself.
- Calling an MCP tool (Context7 `query-docs`, GitHub API, etc.) then passing the response to `ctx_index(content: response)` → doubles context usage. The response is already in context — use it directly or save to file first.
- Ignoring `browser_navigate`'s auto-snapshot → the navigation response includes a full page snapshot. Don't rely on it for inspection — call `browser_snapshot(filename)` separately.
- Expecting `ctx_stats` to reset or wipe anything → `ctx_stats` is read-only (shows stats only). Use `ctx_purge(confirm: true)` to permanently delete all indexed content.