# claude-skill-registry: inquire
Request clarification when input is ambiguous. Use when user request has missing parameters, conflicting interpretations, or insufficient constraints for reliable execution.
Clone the full registry:

```sh
git clone https://github.com/majiayu000/claude-skill-registry
```

Or install just this skill:

```sh
T=$(mktemp -d) && git clone --depth=1 https://github.com/majiayu000/claude-skill-registry "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/data/inquire" ~/.claude/skills/majiayu000-claude-skill-registry-inquire && rm -rf "$T"
```
`skills/data/inquire/SKILL.md`

## Intent
Generate targeted clarifying questions when a user request is ambiguous or underspecified, enabling the agent to gather missing information before committing to an action.
Success criteria:
- Questions target specific missing parameters or ambiguous interpretations
- Each question provides bounded answer options when applicable
- Confidence score reflects actual ambiguity level
- Evidence anchors reference the specific ambiguous elements
Compatible schemas:

- `schemas/output_schema.yaml`
- `reference/capability_ontology.yaml#/inquire`

## Inputs

| Parameter | Required | Type | Description |
|---|---|---|---|
| `ambiguous_input` | Yes | string \| object | The underspecified request or goal to clarify |
| `context` | No | object | Previous conversation or domain context for better question generation |
| `max_questions` | No | integer | Maximum clarifying questions to generate (default: 3) |
## Procedure

1. **Analyze the input**: Examine the `ambiguous_input` for completeness
   - Identify required parameters for the likely intended action
   - Check for conflicting interpretations
   - Note any implicit assumptions that need validation
2. **Categorize ambiguity types**: Classify what's unclear
   - Missing parameters: Required information not provided
   - Conflicting interpretations: Multiple valid ways to interpret the request
   - Constraint gaps: Boundaries or limits not specified
   - Domain uncertainty: Unclear which domain or scope applies
3. **Generate clarifying questions**: For each ambiguity, formulate a question
   - Target specific missing information
   - Provide bounded options when possible (2-4 choices)
   - Frame questions to elicit actionable answers
   - Prioritize questions by impact on execution
4. **Ground claims**: Attach evidence anchors to the ambiguity analysis
   - Reference specific phrases or words that are ambiguous
   - Link to context that informed the interpretation
   - Format: `input:phrase`, `context:field`, or `inference:reason`
5. **Format output**: Structure results according to the output contract
6. **Assess confidence**: Rate how confident you are that clarification is needed
   - High confidence (0.8-1.0): Clear missing required parameters
   - Medium confidence (0.5-0.8): Ambiguous, but could proceed with assumptions
   - Low confidence (0.0-0.5): Probably clear enough; clarification optional
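The six steps above can be sketched as a minimal Python routine. Everything here is illustrative: `required_params` is a hypothetical input (the real skill infers requirements from the likely intended action), and the confidence heuristic is an assumption, not part of this spec.

```python
# Illustrative sketch of the inquire procedure; field names follow the
# Output Contract below, but the detection heuristics are invented.
def inquire(ambiguous_input, required_params, context=None, max_questions=3):
    context = context or {}
    # Steps 1-2: find required parameters missing from the input
    provided = set(ambiguous_input) if isinstance(ambiguous_input, dict) else set()
    missing = [p for p in required_params if p not in provided]
    # Step 3: one open-ended question per missing parameter, capped
    questions = [
        {"question": f"What value should '{p}' have?", "parameter": p, "options": []}
        for p in missing[:max_questions]
    ]
    # Step 6: confidence that clarification is needed grows with missing params
    confidence = min(1.0, 0.5 + 0.2 * len(missing)) if missing else 0.2
    # Steps 4-5: anchor evidence to the elements that drove the analysis
    return {
        "questions": questions,
        "ambiguity_analysis": {
            "missing_parameters": missing,
            "conflicting_interpretations": [],
            "confidence_without_clarification": 1.0 - confidence,
        },
        "confidence": confidence,
        "evidence_anchors": [f"input:{p}" for p in missing],
        "assumptions": [],
    }
```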
## Output Contract

Return a structured object:

```yaml
questions:
  - question: "Which database should this operation target?"
    parameter: "target_database"
    options: ["production", "staging", "development"]
  - question: "Should this include archived records?"
    parameter: "include_archived"
    options: ["yes", "no"]
ambiguity_analysis:
  missing_parameters: ["target_database", "date_range"]
  conflicting_interpretations:
    - interpretation: "Query all users"
      confidence: 0.6
    - interpretation: "Query only active users"
      confidence: 0.4
  confidence_without_clarification: 0.3
confidence: 0.85  # How confident we are that clarification is needed
evidence_anchors: ["input:users", "context:previous_query_was_active_only"]
assumptions: []  # No assumptions made - we're asking for clarification
```
## Field Definitions

| Field | Type | Description |
|---|---|---|
| `questions` | array | Clarifying questions with optional bounded answers |
| `questions[].question` | string | The clarifying question text |
| `questions[].parameter` | string | What parameter this question resolves |
| `questions[].options` | array | Suggested answers if the question has bounded options |
| `ambiguity_analysis` | object | Analysis of what makes the input ambiguous |
| `ambiguity_analysis.missing_parameters` | array | Parameters required but not provided |
| `ambiguity_analysis.conflicting_interpretations` | array | Different valid interpretations |
| `ambiguity_analysis.confidence_without_clarification` | number | How confident we could be if we just guessed |
| `confidence` | number | 0.0-1.0 confidence that clarification is needed |
| `evidence_anchors` | array | References to ambiguous elements |
| `assumptions` | array | Assumptions made (typically empty for inquire) |
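For readers who want the contract as types, here is a hypothetical `TypedDict` mirror of the table above. The names come from the Output Contract example, not from the official `schemas/output_schema.yaml`, so treat this as a sketch rather than the canonical schema.

```python
from typing import TypedDict

class Question(TypedDict):
    question: str        # the clarifying question text
    parameter: str       # what parameter this question resolves
    options: list[str]   # suggested answers if the question is bounded

class AmbiguityAnalysis(TypedDict):
    missing_parameters: list[str]
    conflicting_interpretations: list[dict]
    confidence_without_clarification: float  # how confident a pure guess would be

class InquireOutput(TypedDict):
    questions: list[Question]
    ambiguity_analysis: AmbiguityAnalysis
    confidence: float            # 0.0-1.0, confidence that clarification is needed
    evidence_anchors: list[str]  # e.g. "input:phrase", "context:field"
    assumptions: list[str]       # typically empty for inquire
```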
## Examples

### Example 1: Ambiguous API Request

Input:

```yaml
ambiguous_input: "Update the user's email"
context:
  recent_topic: "user management"
  available_users: 150
max_questions: 3
```

Output:

```yaml
questions:
  - question: "Which user's email should be updated?"
    parameter: "user_id"
    options: []  # Open-ended - too many users to list
  - question: "What should the new email address be?"
    parameter: "new_email"
    options: []
  - question: "Should this send a verification email to the new address?"
    parameter: "send_verification"
    options: ["yes", "no", "only if production"]
ambiguity_analysis:
  missing_parameters: ["user_id", "new_email"]
  conflicting_interpretations: []
  confidence_without_clarification: 0.1
confidence: 0.95
evidence_anchors: ["input:the user's", "input:email"]
assumptions: []
```
Evidence pattern: The phrase "the user's" indicates a specific user is intended but not identified. "email" is ambiguous - could mean view, update, or verify.
### Example 2: Multiple Valid Interpretations

Input:

```yaml
ambiguous_input: "Show me the report"
context:
  available_reports: ["sales", "inventory", "user_activity"]
  last_viewed_report: "sales"
max_questions: 2
```

Output:

```yaml
questions:
  - question: "Which report would you like to see?"
    parameter: "report_type"
    options: ["sales", "inventory", "user_activity"]
  - question: "For what time period?"
    parameter: "date_range"
    options: ["today", "this week", "this month", "custom range"]
ambiguity_analysis:
  missing_parameters: ["report_type", "date_range"]
  conflicting_interpretations:
    - interpretation: "Show sales report (last viewed)"
      confidence: 0.5
    - interpretation: "Show most recently generated report"
      confidence: 0.3
    - interpretation: "Show all available reports"
      confidence: 0.2
  confidence_without_clarification: 0.5
confidence: 0.7
evidence_anchors: ["input:the report", "context:available_reports:3", "context:last_viewed:sales"]
assumptions: []
```
Evidence pattern: "the report" implies a specific report but context shows 3 options. Recent history (sales) provides a weak signal.
## Verification

- Output contains at least one question when confidence > 0.5
- Each question has a non-empty `parameter` field
- Options array is provided for bounded-choice questions
- Evidence anchors reference specific input elements
- Confidence is justified by the ambiguity analysis
Verification tools: None beyond allowed tools
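The checklist above could be mechanized roughly like this. This is a sketch: the function name and error strings are invented, and `output` is assumed to be a dict following the Output Contract.

```python
# Run the verification checklist against an inquire output dict;
# returns a list of human-readable violations (empty means it passed).
def verify_inquire_output(output):
    errors = []
    # Output contains at least one question when confidence > 0.5
    if output["confidence"] > 0.5 and not output["questions"]:
        errors.append("confidence > 0.5 but no questions generated")
    # Each question has a non-empty parameter field
    for q in output["questions"]:
        if not q.get("parameter"):
            errors.append(f"question lacks a parameter: {q.get('question')!r}")
    # Evidence anchors reference specific input elements
    if not output["evidence_anchors"]:
        errors.append("no evidence anchors referencing input elements")
    return errors
```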
## Safety Constraints

```yaml
mutation: false
requires_checkpoint: false
requires_approval: false
risk: low
```
Capability-specific rules:
- Do not make assumptions - the purpose is to ask, not to guess
- Limit questions to max_questions to avoid overwhelming users
- If confidence < 0.3, consider returning empty questions (input may be clear enough)
- Never include sensitive data in question options
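Two of the rules above (the 0.3 confidence threshold and the `max_questions` cap) lend themselves to a direct guard. This sketch assumes a list-of-dicts question shape; it is not part of the registry.

```python
# Apply the capability-specific rules to a generated question list.
def apply_safety_rules(questions, confidence, max_questions=3):
    # Rule: if confidence < 0.3, the input may be clear enough - ask nothing
    if confidence < 0.3:
        return []
    # Rule: limit questions to max_questions to avoid overwhelming users
    return questions[:max_questions]
```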
## Composition Patterns

Commonly follows:

- `critique`: after identifying issues with a request, inquire resolves them
- `receive`: after receiving a new request that needs clarification

Commonly precedes:

- `receive`: after inquiring, wait for the user's clarification response
- `integrate`: merge the clarification into the original request
- `plan`: once clarified, proceed to planning
Anti-patterns:

- Never use `inquire` after `execute` - clarify BEFORE acting
- Avoid chaining multiple `inquire` calls - combine questions into one request
- Don't use with `mutate` in the same step - inquire is read-only
Workflow references:

- See `reference/workflow_catalog.yaml#clarify_intent` for the complete clarification workflow
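As a rough illustration of the clarify_intent flow these patterns describe, the inquire -> receive -> integrate -> plan chain might be orchestrated like this. All step functions are hypothetical stand-ins supplied by the caller, not real registry APIs.

```python
# Orchestrate one clarification round: ask, wait for answers, merge, plan.
def clarify_intent(request, inquire, receive, integrate, plan):
    result = inquire(request)
    if result["questions"]:                    # clarification needed
        answers = receive(result["questions"])   # wait for the user's reply
        request = integrate(request, answers)    # merge into the original request
    return plan(request)                       # proceed once clarified
```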