Autosearch — autosearch:delegate-subtask
Define the execution contract for isolating a research sub-task — input schema, budget, return summary, evidence list, failure status. Complements decompose-task (which only splits the problem) by giving each split a bounded, auditable execution unit the runtime AI can farm out to a sub-agent or parallel session.
install
source · Clone the upstream repo
git clone https://github.com/0xmariowu/Autosearch
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/0xmariowu/Autosearch "$T" && mkdir -p ~/.claude/skills && cp -r "$T/autosearch/skills/meta/delegate-subtask" ~/.claude/skills/0xmariowu-autosearch-autosearch-delegate-subtask && rm -rf "$T"
manifest:
autosearch/skills/meta/delegate-subtask/SKILL.md
Delegate Subtask — Execution Contract
decompose-task splits a problem into sub-questions. This skill specifies how to execute each sub-question under a stable, auditable contract: inputs, budget, outputs, failure modes. The pattern is borrowed from the MiroThinker, DeepAgents, deer-flow, and DeepResearchAgent subagent designs.
Contract
```yaml
input:
  id: str                     # stable subtask id, e.g. "sub_1" / "sub_1a"
  parent_id: str | null       # links back to the decompose-task output
  question: str               # one specific sub-question
  rationale: str              # why this subtask matters for the parent goal
  scope: list[str]            # channels / tools the subtask may touch
  budget:
    latency_seconds: int
    cost_usd: float
    tool_calls: int           # max total tool invocations
  context_seed: list[dict]    # evidence already gathered that the subtask should start with
  stop_conditions: list[str]  # e.g. "answer rubrics satisfied" / "budget exhausted"

output:
  id: str                     # echoes input.id
  status: "success" | "partial" | "failure"
  summary: str                # 3-6 sentences; what was found
  evidence: list[dict]        # slim-dict Evidence items the subtask produced
  citations: list[str]        # URL list, matched to evidence
  follow_ups: list[str]       # open questions, if partial
  metrics:
    latency_ms: int
    cost_usd: float
    tool_calls: int
    channels_hit: list[str]
  failure_reason: str | null
```
Invocation Policy
- One subtask per thread/session — isolation matters. Do not merge two subtasks' tool calls into one session.
- Budget is the governor. The subtask must halt when ANY budget axis is exhausted and report `status: "partial"`.
- Read `context_seed`, don't re-search it. The seed is evidence the parent already has; the subtask should build on it, not duplicate it.
- Return slim evidence — use autosearch's `Evidence.to_slim_dict()` shape so the parent can dedupe/merge.
- Follow-ups are first-class. If a subtask runs out of budget but finds a promising lead, emit it in `follow_ups` for the parent planner to decide.
Concurrency Guard
Delegate subtasks are cheap in parallel but expensive in total cost. Runtime AI should apply:
- `max_parallel_subtasks: 4` — hard cap, inspired by deer-flow's `subagent_limit_middleware`.
- `max_subtasks_per_session: 12` — escalate to the user if the plan generates more.
- `subtask_timeout_headroom: 1.2x` — if any axis exceeds 1.2× its budget, kill immediately.
When This Skill Is Used
- Runtime AI decomposed a complex research question into 3+ sub-questions and wants to execute them in parallel sessions.
- A single sub-question is so expensive that the parent planner wants it quarantined (cost / time).
- The parent planner wants per-subtask accountability (which sub-questions succeeded; which were over-budget; where to follow up).
When NOT Used
- Trivial single-query research — overkill; just call a channel directly.
- Cross-cutting reflection / synthesis — that's `synthesize-knowledge`, not a subtask boundary.
Related Skills
- Takes its input from → `decompose-task`.
- Feeds output to → `assemble-context` / `synthesize-knowledge` / `citation-index`.
- Cost controlled by → `autosearch:model-routing` (Standard tier default for the subtask body; Best for the final consolidation).
MCP Tool Usage
Use the `delegate_subtask` MCP tool to run a query across multiple channels in parallel:

```python
delegate_subtask(
    task_description="Find Chinese UGC discussions about Cursor AI editor",
    channels=["xiaohongshu", "zhihu", "bilibili"],
    query="Cursor AI 编程助手 用户体验",  # "Cursor AI coding assistant user experience"
    max_per_channel=5,
)
```

Returns `{evidence_by_channel: {"xiaohongshu": [...], "zhihu": [...]}, summary: "15 results from 3 channels", failed_channels: [], budget_used: {...}}`.

Feed the `evidence_by_channel` values directly into `citation_add` or your synthesis.
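Before feeding evidence onward, the per-channel lists usually need flattening into one pool. A minimal sketch, assuming the return shape shown above and deduplication by URL (the dedupe rule and `merge_evidence` name are assumptions, not part of the tool):

```python
def merge_evidence(result: dict) -> list[dict]:
    """Flatten evidence_by_channel into one list, deduping by URL."""
    seen: set[str] = set()
    merged: list[dict] = []
    for channel, items in result.get("evidence_by_channel", {}).items():
        for item in items:
            url = item.get("url", "")
            if url and url not in seen:
                seen.add(url)
                # Tag provenance if the item doesn't carry it already.
                item.setdefault("source_channel", channel)
                merged.append(item)
    return merged
```

Deduping by URL keeps the first channel's copy when the same page surfaces on multiple channels, which also preserves the channel ordering chosen by the caller.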
Failure Modes
- Budget exhausted before any evidence collected → `status: failure`, `failure_reason: "budget_exhausted_before_first_result"`.
- Subagent crashed mid-execution → `status: failure`, `failure_reason: "subagent_crash: <exception>"`. Parent planner decides retry vs. give up.
- Partial success → `status: partial`, `follow_ups` populated. Parent planner decides whether to escalate budget or accept the partial result.
Quality Bar
- Evidence items have non-empty `title` and `url`.
- No crash on empty or malformed API response.
- Source channel field matches the channel name.
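A hedged sketch of a runtime check for this quality bar (the field names `title`, `url`, and `source_channel` are inferred from the examples above, not a confirmed schema):

```python
def meets_quality_bar(evidence: list[dict], expected_channel: str) -> bool:
    """Check the quality-bar rules for a batch of evidence items."""
    for item in evidence:
        if not item.get("title") or not item.get("url"):
            return False  # rule 1: non-empty title and url
        if item.get("source_channel") != expected_channel:
            return False  # rule 3: channel field matches the channel name
    # Rule 2 (no crash on malformed responses) is served by using .get(),
    # which tolerates missing keys instead of raising KeyError.
    return True
```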