Lucid-toolkit resolve-ambiguity
Systematic ambiguity resolution through tiered information gathering. Use when facing unclear requirements, unknown context, uncertain implementation choices, or any situation where guessing would be risky.
Repository: https://github.com/rayk/lucid-toolkit

Install:

```
T=$(mktemp -d) && git clone --depth=1 https://github.com/rayk/lucid-toolkit "$T" && mkdir -p ~/.claude/skills && cp -r "$T/plugins/luc/skills/resolve-ambiguity" ~/.claude/skills/rayk-lucid-toolkit-resolve-ambiguity && rm -rf "$T"
```

Skill source: plugins/luc/skills/resolve-ambiguity/SKILL.md

<objective>
Core principle: Ask rather than guess. Wrong assumptions waste more time than clarifying questions do.
</objective>
<quick_start> <decision_tree> When you encounter ambiguity, classify it:

1. Technical/Factual - "How does X work?" "What is the correct syntax?" → Likely found in project or online sources → Follow the tiered lookup process

2. Intent/Choice - "Which approach should I use?" "What does the user want?" → Requires user input → Use AskUserQuestion immediately </decision_tree>
<immediate_action>
If ambiguity is about user intent or preference → skip lookup, go directly to the <user_clarification> section.
If ambiguity is technical/factual → follow the <tiered_lookup> process.
</immediate_action>
</quick_start>
<tiered_lookup> <description> For technical/factual ambiguity, check sources in this order. Stop as soon as you find authoritative information. </description>
<tier_1 name="Project Context Files"> Check first - fastest and most relevant

1. Read .claude/workspace-info.toon
   - Contains workspace structure, projects, capabilities, outcomes
   - Shows current focus and IDE configuration

2. Read .claude/project-info.toon
   - Contains project technology, dependencies, entry points
   - Shows repository and IDE details

```
Task(subagent_type="Explore", model="haiku", prompt="""
Check for context files:
- .claude/workspace-info.toon
- .claude/project-info.toon
If found, extract relevant information about: {specific question}
Return only the relevant fields, not the entire file.
""")
```
</tier_1>
<tier_2 name="Architectural Files"> Check second - project-specific patterns
Look within current scope for:

- CLAUDE.md - Project instructions and conventions
- README.md or ARCHITECTURE.md - Project overview and setup
- docs/architecture.md - Design decisions
- Configuration files relevant to the question:
  - package.json, tsconfig.json (JavaScript/TypeScript)
  - pyproject.toml, setup.py (Python)
  - Cargo.toml (Rust)
  - .env.example (environment variables)

Glob for architectural files:
- **/CLAUDE.md
- **/README.md
- **/ARCHITECTURE.md
- **/docs/*.md
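A minimal sketch of that lookup in the same tool-call style as the Tier 1 example (the patterns are the ones listed above):

```
# Locate architectural files anywhere in the current scope.
Glob(pattern="**/CLAUDE.md")
Glob(pattern="**/README.md")
Glob(pattern="**/ARCHITECTURE.md")
Glob(pattern="**/docs/*.md")
# Read the most relevant hit and extract only what answers the question.
```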
</tier_2>
<tier_3 name="Documentation via MCP"> Check third - if MCP documentation tools are available
If MCP tools are available for documentation lookup:
- Use mcp__context7__* for library documentation
- Use mcp__firecrawl__* for web documentation
- Use other documentation-specific MCP tools
These provide structured access to official documentation.
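A hedged sketch, assuming a context7-style server is configured (the tool and parameter names below are typical of that server but vary by MCP setup - verify against your own tool list):

```
# Hypothetical context7-style lookup; library and topic are illustrative.
mcp__context7__resolve-library-id(libraryName="fastapi")
mcp__context7__get-library-docs(
    context7CompatibleLibraryID="/tiangolo/fastapi",
    topic="dependency injection",
)
```

</tier_3>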
<tier_4 name="Web Search Official Sources"> Check fourth - for external APIs, libraries, standards
Use WebSearch and WebFetch for:
- Official documentation sites
- GitHub repositories of libraries
- API reference documentation
- RFC or specification documents
<search_strategy>
- WebSearch with specific query: "{library/API name} documentation {specific topic} 2024 2025"
- WebFetch the most authoritative result (official docs preferred)
- Extract only the relevant information
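A minimal sketch of that sequence (the library, topic, and URL are illustrative, not real lookup results):

```
WebSearch(query="fastapi documentation dependency injection 2025")
# Fetch the most authoritative hit - official docs preferred over blogs.
WebFetch(
    url="https://fastapi.tiangolo.com/tutorial/dependencies/",
    prompt="Extract only how dependencies are declared and overridden",
)
```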
Prefer sources in this order:
- Official documentation (*.dev, *.io, readthedocs)
- GitHub repository README/docs
- Stack Overflow with high votes (for edge cases) </search_strategy> </tier_4>
<when_to_stop> Stop the tiered lookup when:
- You find authoritative information that resolves the ambiguity
- You've checked all relevant tiers without finding information
- The information found indicates this is actually a choice/intent question
If all tiers are exhausted without an answer → proceed to <user_clarification>.
</when_to_stop>
</tiered_lookup>
<user_clarification> <description> For intent/choice questions, or when tiered lookup fails, ask the user directly. Never guess when user input is available. </description>
<principles>
1. **Explain what you need** - Tell the user what information is missing
2. **Explain why you need it** - Describe how the answer affects the outcome
3. **Offer smart choices** - If you can infer likely options, present them
4. **Best practice first** - Order choices with the recommended approach at top
5. **Bad ideas last** - If including risky options, put them at the bottom
6. **Always allow custom input** - The user can always provide their own answer
7. **When uncertain, don't guess choices** - Better to ask open-ended than offer wrong options
</principles>

<with_known_choices> When you can confidently identify the options:
<example> Question: "Which authentication approach should I implement?"

AskUserQuestion with structure:
- Question: Clear, specific question explaining context
- Options ordered by preference:
  1. Best practice / Most common / Recommended
  2. Good alternative
  3. Another valid option
  4. Least recommended / Has drawbacks
- Each option includes a description explaining implications
- User can always select "Other" for custom input
Options (ordered best → least recommended):
- JWT with refresh tokens - Industry standard, stateless, works well with APIs
- Session-based auth - Simple, works well for server-rendered apps
- OAuth2 only - Good for social login, but adds complexity
- Basic auth - Simple but less secure, only for internal tools
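A hedged sketch of the call itself, in the same pseudo-call style as the Tier 1 example (the exact AskUserQuestion schema may differ by version; descriptions are abbreviated from the list above):

```
AskUserQuestion(questions=[{
    "question": "Which authentication approach should I implement?",
    "header": "Auth",
    "multiSelect": False,
    "options": [  # ordered best -> least recommended
        {"label": "JWT + refresh tokens", "description": "Industry standard, stateless, works well with APIs"},
        {"label": "Session-based auth", "description": "Simple, works well for server-rendered apps"},
        {"label": "OAuth2 only", "description": "Good for social login, but adds complexity"},
        {"label": "Basic auth", "description": "Less secure; only for internal tools"},
    ],
}])
```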
Each option explains its trade-offs so the user can make an informed choice. </example> </with_known_choices>
<without_known_choices> When you cannot confidently identify the options:
DO NOT GUESS. Ask an open-ended question instead.
<example> Instead of guessing configuration options:

AskUserQuestion:
- Question: Explain what you need to know and why
- Options:
  1. A general direction if you have any hint
  2. (Keep options minimal or omit entirely)
- Allow free-form input as the primary response method
Question: "I need to configure the database connection. What database are you using and what are the connection details?"
Options:
- I'll provide the details - Let me type the configuration
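The same style, kept deliberately open-ended - one minimal option, with the user's free-form "Other" answer as the expected path (schema hedged as above):

```
AskUserQuestion(questions=[{
    "question": "I need to configure the database connection. What database are you using and what are the connection details?",
    "header": "Database",
    "multiSelect": False,
    "options": [
        {"label": "I'll provide the details", "description": "Let me type the configuration"},
    ],
}])
```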
This is better than guessing "PostgreSQL" or "MySQL" when you don't know. </example> </without_known_choices>
<formatting_rules>
- Single question at a time - Don't overwhelm with multiple questions
- 2-4 options maximum - More becomes confusing
- Descriptions are required - Every option needs context
- No yes/no when options exist - Offer the actual choices instead
- Acknowledge uncertainty - "I'm not sure which applies, so..." </formatting_rules> </user_clarification>
<ambiguity_categories> <category name="Technical Implementation"> Examples: "How do I call this API?" "What's the correct syntax?"
Resolution path:
- Check project context files
- Check architectural docs
- WebSearch official documentation
- If still unclear → ask user for clarification </category>
<category name="Project Conventions"> Examples: "What naming convention does this codebase use?" "How are similar features implemented here?"
Resolution path:
- Check CLAUDE.md for explicit conventions
- Check existing code for patterns (Glob + Read)
- Check README/contributing guide
- If no clear pattern → ask user preference </category>
<category name="User Intent"> Examples: "What does the user actually want?" "Which of several valid behaviors is expected?"
Resolution path:
- Skip lookup - this requires user input
- AskUserQuestion immediately
- Present inferred options if confident
- Allow open-ended response if uncertain </category>
<category name="Dependencies and Versions"> Examples: "Which version of this library is in use?" "Does this API exist in our version?"
Resolution path:
- Check package.json/pyproject.toml for versions
- WebSearch for current documentation
- WebFetch official docs
- If version-specific behavior → confirm with user
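For instance, a short sketch of this path (the library and topic are illustrative):

```
Read(file_path="package.json")  # confirm the declared version range first
WebSearch(query="react 18 useEffect cleanup documentation 2025")
# If the behavior is version-specific -> AskUserQuestion to confirm the target version.
```

</category>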
<category name="Configuration and Environment"> Examples: "What environment variables are required?" "How is this service configured?"
Resolution path:
- Check .env.example or config files
- Check project-info.toon
- Check README for setup instructions
- If sensitive values → ask user (never guess credentials) </category>
<category name="Architecture and Design"> Examples: "Where should this module live?" "How should these components interact?"
Resolution path:
- Check ARCHITECTURE.md or design docs
- Check workspace-info.toon for project structure
- This is usually a choice → AskUserQuestion
- Present trade-offs clearly in options </category>
</ambiguity_categories>
<process> <step_1> **Detect ambiguity**: Identify what information is missing and classify it:
- Technical/Factual → Tiered lookup
- Intent/Choice → User clarification </step_1>

<step_2> For technical ambiguity, execute the tiered lookup in order:
- Project context files (.claude/*.toon)
- Architectural files (CLAUDE.md, README, docs)
- MCP documentation tools (if available)
- WebSearch/WebFetch official sources </step_2>
<step_3> For intent ambiguity or lookup failure: Use AskUserQuestion:
- Explain what's needed and why
- Offer choices ordered by recommendation (if known)
- Don't guess choices if uncertain
- Always allow custom input </step_3>
<step_4> Apply the answer: Use the information to proceed with the task. Document any decisions made for future reference. </step_4> </process>
<success_criteria> Ambiguity is resolved when:
- Information found: Authoritative source confirms the answer
- User clarified: User provided explicit direction
- Documented: Decision is captured for future reference
Signs of good resolution:
- No guessing occurred
- User wasn't asked unnecessary questions
- The answer came from the most appropriate source
- Forward progress is now possible </success_criteria>
<anti_patterns> <pattern name="Guessing and Hoping"> **Wrong**: Making assumptions and proceeding without verification **Instead**: Take 30 seconds to check or ask </pattern>

<pattern name="Too Many Questions"> **Wrong**: Asking 5 questions before doing anything **Instead**: Ask only what's blocking immediate progress </pattern>

<pattern name="Vague Questions"> **Wrong**: "How should I proceed?" **Instead**: "Should I use approach A (benefit) or B (benefit)?" </pattern>

<pattern name="Assuming User Expertise"> **Wrong**: "Should I use the factory pattern or strategy pattern?" **Instead**: Explain the options in plain terms with trade-offs </pattern>

<pattern name="Hiding Behind Defaults"> **Wrong**: Silently using a default without mentioning alternatives **Instead**: "I'll use X (the standard approach). Let me know if you'd prefer Y." </pattern> </anti_patterns>