Hal-9000 best-practices
Use when setting up, configuring, choosing, refining, or asking about best practices for tools, libraries, config formats, API patterns, or project setup, where outdated guidance causes debugging pain. Also use when the user says "search online", "how should I", or "what's the best way to".
git clone https://github.com/vinta/hal-9000
T=$(mktemp -d) && git clone --depth=1 https://github.com/vinta/hal-9000 "$T" && mkdir -p ~/.claude/skills && cp -r "$T/plugins/hal-skills/skills/best-practices" ~/.claude/skills/vinta-hal-9000-best-practices && rm -rf "$T"
plugins/hal-skills/skills/best-practices/SKILL.md
Best Practices
Two-Phase Rule
- Phase 1: Research. Dispatch find-docs and/or WebSearch queries.
- Phase 2: Synthesize and act. Only after Phase 1 results arrive.
The user's argument may be a question or an imperative. Imperatives ("refine X", "set up Y") determine what Phase 2 does, not whether Phase 1 happens. Phase 1 always runs.
Red flags indicating you are about to skip research:
| Thought | Reality |
|---|---|
| "I already know this" | Training data goes stale. Config keys get renamed, APIs get deprecated. |
| "The user said to act" | The imperative scopes Phase 2, it does not eliminate Phase 1. |
| "This is a simple lookup" | A 30-second search costs nothing. A wrong recommendation costs a debugging round-trip. |
Workflow
1. Identify Research Targets
Break the topic into 2-4 specific queries targeting distinct aspects (libraries, patterns, configuration, pitfalls). For single-library lookups, call find-docs or WebSearch directly without subagents.
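The decomposition in step 1 can be sketched as plain data. The topic and query strings below are hypothetical examples for illustration, not part of the skill itself:

```python
# Illustrative sketch: one research topic broken into 2-4 focused queries,
# each targeting a distinct aspect (configuration, libraries, pitfalls).
# Topic and queries are hypothetical examples.
topic = "set up pytest in a monorepo"

queries = [
    "pytest configuration for monorepos (pyproject.toml vs pytest.ini)",
    "recommended pytest plugins for shared fixtures across packages",
    "common pytest monorepo pitfalls (import modes, rootdir detection)",
]

# Stay within the 2-4 query budget the workflow prescribes.
assert 2 <= len(queries) <= 4
```

Each query then becomes the basis of one subagent prompt in step 2.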
2. Parallel Research
Dispatch one subagent per query in a single message so they run in parallel. Each uses find-docs (Context7) and WebSearch. Be concrete in each subagent prompt: pass library names, version constraints, and the user's specific context. Vague prompts produce vague results.
<subagent_prompt_template>
<context>
The user wants to [user's task]. We need the latest, authoritative guidance on [specific aspect].
</context>
<task>
Research best practices for: [specific query]

Use the find-docs skill to look up [library/tool] documentation, then use WebSearch to find recent guides and recommendations for "[specific search query]".
</task>
<output_format> Report in under 300 words. Include:
- Recommended approach with rationale
- Concrete code/config examples
- Pitfalls to avoid
- Sources consulted (with publication dates)
If you cannot find authoritative guidance on a point, say so explicitly rather than guessing.
</output_format>
</subagent_prompt_template>
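A filled-in instance of the template can be built with simple string formatting. The library, task, and query below are hypothetical examples, not values the skill prescribes:

```python
# Hypothetical filled-in instance of the subagent prompt template above.
# All concrete values here (Ruff, the task, the query) are assumed examples.
user_task = "configure Ruff as the project linter"
aspect = "Ruff configuration best practices"
query = "Ruff pyproject.toml recommended configuration"

prompt = f"""<context>
The user wants to {user_task}. We need the latest, authoritative guidance on {aspect}.
</context>
<task>
Research best practices for: {aspect}

Use the find-docs skill to look up Ruff documentation, then use WebSearch to
find recent guides and recommendations for "{query}".
</task>"""

assert "<context>" in prompt and "</task>" in prompt
```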
3. Synthesize
Phase check: If no research results have arrived yet, STOP. You are still in Phase 1. Go back to step 2.
After all subagents return, merge using these criteria:
- Deduplicate overlapping recommendations
- Rank by authority: official docs > well-known guides > blog posts > training data
- Flag conflicts with attribution (which source said what)
- Discard stale results: a 2022 guide for a fast-moving framework is noise
If a subagent failed or returned empty, note the gap and proceed with the results you have. Do not block synthesis waiting for a straggler.
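The merge criteria above (deduplicate, rank by authority, discard stale results) can be sketched in Python. The `Result` shape, the authority tiers, and the staleness cutoff are assumptions for illustration; the real skill runs inside the agent, not as a script:

```python
from dataclasses import dataclass

# Authority ranking from the synthesis criteria: official docs outrank
# well-known guides, which outrank blog posts, which outrank training data.
AUTHORITY = {"official_docs": 0, "guide": 1, "blog": 2, "training_data": 3}

@dataclass
class Result:
    recommendation: str
    source_kind: str   # one of the AUTHORITY keys
    year: int          # publication year, used for staleness filtering

def synthesize(results, stale_before=2023):
    """Drop stale results, then deduplicate, keeping the most
    authoritative copy of each recommendation."""
    fresh = [r for r in results if r.year >= stale_before]
    seen, merged = set(), []
    # Sorting by authority first means the dedupe pass keeps the best source.
    for r in sorted(fresh, key=lambda r: AUTHORITY[r.source_kind]):
        if r.recommendation not in seen:
            seen.add(r.recommendation)
            merged.append(r)
    return merged
```

For example, given the same recommendation from both a blog and the official docs plus a stale 2022 guide, `synthesize` returns a single result attributed to the official docs.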
4. Present Findings
Deliver to the user in this structure:
- Recommended Approach: the primary recommendation with rationale
- Key Patterns: concrete code/config examples the user can apply immediately
- Pitfalls to Avoid: common mistakes with explanations
- Sources: what was consulted, so the user can dig deeper
Gotchas
- 2-4 focused subagents, not more. Each carries ~20K tokens of startup overhead. Fewer focused queries beat many shallow ones.
- User-provided URLs are additive. If the user provided specific URLs, fetch those too, but they supplement research, not replace it.
- Context7 quota limits exist. If find-docs fails with quota errors, fall back to WebSearch only and note the limitation.
- If both find-docs and WebSearch fail, say so explicitly rather than falling back to training data.
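The fallback policy in the last two gotchas can be sketched as follows. `find_docs` and `web_search` are hypothetical stand-ins for the real tool calls, passed in as callables:

```python
# Sketch of the fallback order: find-docs first, WebSearch on quota errors,
# and an explicit failure message if both are unavailable.
# find_docs and web_search are hypothetical stand-ins for the real tools.
class QuotaError(Exception):
    pass

def research(query, find_docs, web_search):
    try:
        return find_docs(query)
    except QuotaError:
        pass  # Context7 quota exhausted: note the limitation, try WebSearch
    try:
        return web_search(query)
    except Exception:
        # Both sources failed: say so rather than answering from training data.
        return "No authoritative source reachable; cannot verify an answer."
```

The key design point is that training data never appears as a fallback branch: when both tools fail, the function reports the gap instead of guessing.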