EasyPlatform web-research
[Research] Broad web search on a topic. Collect sources, validate credibility, build source map. Use when starting any research task.
```shell
# Clone the full repo
git clone https://github.com/duc01226/EasyPlatform

# Or copy only this skill into ~/.claude/skills
T=$(mktemp -d) && git clone --depth=1 https://github.com/duc01226/EasyPlatform "$T" \
  && mkdir -p ~/.claude/skills \
  && cp -r "$T/.claude/skills/web-research" ~/.claude/skills/duc01226-easyplatform-web-research \
  && rm -rf "$T"
```
.claude/skills/web-research/SKILL.md

[IMPORTANT] Use `TaskCreate` to break ALL work into small tasks BEFORE starting.
External Memory: For complex or lengthy work (research, analysis, scan, review), write intermediate findings and final results to a report file in `plans/reports/` — prevents context loss and serves as the deliverable.
<!-- SYNC:critical-thinking-mindset -->Evidence Gate: MANDATORY — every claim, finding, and recommendation requires `file:line` proof or traced evidence with a confidence percentage (>80% to act, <80% must verify first).
<!-- /SYNC:critical-thinking-mindset --> <!-- SYNC:ai-mistake-prevention -->Critical Thinking Mindset — Apply critical and sequential thinking. Every claim needs traced proof and confidence >80% to act. Anti-hallucination: never present a guess as fact — cite sources for every claim, admit uncertainty freely, self-check output for errors, cross-reference independently, and stay skeptical of your own confidence; certainty without evidence is the root of all hallucination.
<!-- /SYNC:ai-mistake-prevention -->AI Mistake Prevention — Failure modes to avoid on every task:
- Check downstream references before deleting. Deleting components causes documentation and code staleness cascades. Map all referencing files before removal.
- Verify AI-generated content against actual code. AI hallucinates APIs, class names, and method signatures. Always grep to confirm existence before documenting or referencing.
- Trace full dependency chain after edits. Changing a definition misses downstream variables and consumers derived from it. Always trace the full chain.
- Trace ALL code paths when verifying correctness. Confirming code exists is not confirming it executes. Always trace early exits, error branches, and conditional skips — not just happy path.
- When debugging, ask "whose responsibility?" before fixing. Trace whether bug is in caller (wrong data) or callee (wrong handling). Fix at responsible layer — never patch symptom site.
- Assume existing values are intentional — ask WHY before changing. Before changing any constant, limit, flag, or pattern: read comments, check git blame, examine surrounding code.
- Verify ALL affected outputs, not just the first. Changes touching multiple stacks require verifying EVERY output. One green check is not all green checks.
- Holistic-first debugging — resist nearest-attention trap. When investigating any failure, list EVERY precondition first (config, env vars, DB names, endpoints, DI registrations, data preconditions), then verify each against evidence before forming any code-layer hypothesis.
- Surgical changes — apply the diff test. Bug fix: every changed line must trace directly to the bug. Don't restyle or improve adjacent code. Enhancement task: implement improvements AND announce them explicitly.
- Surface ambiguity before coding — don't pick silently. If request has multiple interpretations, present each with effort estimate and ask. Never assume all-records, file-based, or more complex path.
Quick Summary
Goal: Execute broad web search on a topic, collect and classify sources, build a structured source map.
Workflow:
- Define scope — Parse topic, generate 5-10 search queries from varied angles
- Execute searches — Run WebSearch for each query, collect results
- Source triage — Classify each source by Tier (1-4), filter duplicates
- Build source map — Write structured source list to working file
- Identify gaps — Note underexplored angles for deep-research
Key Rules:
- Maximum 10 WebSearch calls per invocation
- Follow source hierarchy: Official docs (Tier 1) > Peer-reviewed (Tier 2) > Industry blogs (Tier 3) > Forums (Tier 4)
- Output intermediate source map, not final report
Be skeptical. Apply critical and sequential thinking. Every claim needs traced proof and a confidence percentage above 80% before acting.
Web Research
Step 1: Define Search Scope
Parse the user's topic and generate 5-10 search queries that cover:
- Definition/overview — "what is {topic}"
- Current state — "{topic} 2026" or "{topic} latest"
- Comparison — "{topic} vs alternatives"
- Data/statistics — "{topic} market size" or "{topic} statistics"
- Expert opinion — "{topic} expert analysis" or "{topic} review"
- Criticism/risks — "{topic} challenges" or "{topic} risks"
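The six query angles above can be sketched as a small generator. This is an illustrative snippet, not part of the skill: the topic string is a placeholder, and the templates simply mirror the list.

```shell
# Sketch: expand one topic into the six query angles listed above.
# "vector databases" is an example placeholder topic.
topic="vector databases"
year=$(date +%Y)   # keeps the "current state" query from going stale
queries=(
  "what is $topic"
  "$topic $year"
  "$topic vs alternatives"
  "$topic market size"
  "$topic expert analysis"
  "$topic challenges"
)
printf '%s\n' "${queries[@]}"
```

Each emitted line would then become one WebSearch call in Step 2.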
Step 2: Execute Searches
For each query:
- Run `WebSearch` with the query
- Record: title, URL, snippet, apparent source type
- Stop at 10 WebSearch calls maximum
Step 3: Source Triage
For each result, classify by Tier:
- Tier 1: .gov, .edu, official docs, peer-reviewed
- Tier 2: Industry reports, major publications
- Tier 3: Established blogs, verified experts, Wikipedia
- Tier 4: Forums, personal blogs, social media
Filter out duplicates (same URL or same content from syndication).
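The tier rules and the duplicate filter can be sketched as a small shell heuristic. The domain patterns are illustrative examples of each tier, not an official allowlist:

```shell
# Sketch: map a URL to a tier using the hierarchy above.
classify_tier() {
  case "$1" in
    *.gov/*|*.edu/*)                 echo 1 ;;  # official / academic sources
    *mckinsey.com*|*ft.com*)         echo 2 ;;  # example industry reports, major publications
    *wikipedia.org*|*.substack.com*) echo 3 ;;  # Wikipedia, established blogs
    *)                               echo 4 ;;  # forums, personal blogs, unknown
  esac
}

# Deduplicate by exact URL, then classify each survivor.
printf '%s\n' \
  "https://example.gov/report" \
  "https://en.wikipedia.org/wiki/Example" \
  "https://example.gov/report" |
  sort -u |
  while read -r url; do
    echo "Tier $(classify_tier "$url"): $url"
  done
```

Note that `sort -u` only removes exact duplicates; syndicated copies of the same content under different URLs still need manual filtering.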
Step 4: Build Source Map
Write to `.claude/tmp/_sources-{slug}.md`:

```markdown
# Source Map: {Topic}

**Date:** {date}
**Queries executed:** {count}
**Sources found:** {count} (Tier 1: N, Tier 2: N, Tier 3: N, Tier 4: N)

## Sources

| # | Title | URL | Tier | Relevance | Notes         |
| - | ----- | --- | ---- | --------- | ------------- |
| 1 | ...   | ... | 1    | High      | Official docs |

## Gaps Identified

- {angle not covered}
- {topic needing deeper research}
```
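Creating the working file can be sketched with a heredoc; the slug, date, and counts here are placeholders:

```shell
# Sketch: initialize the source-map working file for a placeholder topic.
slug="example-topic"
mkdir -p .claude/tmp
cat > ".claude/tmp/_sources-${slug}.md" <<'EOF'
# Source Map: Example Topic

**Date:** 2025-01-01
**Queries executed:** 6
EOF
```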
Step 5: Identify Gaps
Review source map for:
- Missing perspectives (only positive sources? need criticism)
- Missing data types (no quantitative data? need statistics)
- Recency issues (all sources old? need current data)
Note gaps for the `deep-research` step.
Workflow Recommendation
MANDATORY — NO EXCEPTIONS: If you are NOT already in a workflow, you MUST use `AskUserQuestion` to ask the user. Do NOT judge task complexity or decide this is "simple enough to skip" — the user decides whether to use a workflow, not you:
- Activate the `research` workflow (Recommended) — web-research → deep-research → synthesis → review
- Execute `/web-research` directly — run this skill standalone
Next Steps
MANDATORY — NO EXCEPTIONS: after completing this skill, you MUST use `AskUserQuestion` to present these options. Do NOT skip this because the task seems "simple" or "obvious" — the user decides:
- "/deep-research (Recommended)" — Deep-dive into top sources
- "/business-evaluation" — If evaluating business viability
- "Skip, continue manually" — user decides
Closing Reminders
MANDATORY: break work into small todo tasks using `TaskCreate` BEFORE starting.
MANDATORY: validate decisions with the user via `AskUserQuestion` — never auto-decide.
MANDATORY: add a final review todo task to verify work quality.
<!-- SYNC:critical-thinking-mindset:reminder -->
- MUST apply critical thinking — every claim needs traced proof, confidence >80% to act. Anti-hallucination: never present a guess as fact. <!-- /SYNC:critical-thinking-mindset:reminder --> <!-- SYNC:ai-mistake-prevention:reminder -->
- MUST apply AI mistake prevention — holistic-first debugging, fix at the responsible layer, surface ambiguity before coding, re-read files after compaction. <!-- /SYNC:ai-mistake-prevention:reminder -->