Everything-claude-code research-ops
Evidence-first current-state research workflow for ECC. Use when the user wants fresh facts, comparisons, enrichment, or a recommendation built from current public evidence and any supplied local context.
git clone https://github.com/affaan-m/everything-claude-code
T=$(mktemp -d) && git clone --depth=1 https://github.com/affaan-m/everything-claude-code "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/research-ops" ~/.claude/skills/affaan-m-everything-claude-code-research-ops && rm -rf "$T"
skills/research-ops/SKILL.md
Research Ops
Use this when the user asks to research something current, compare options, enrich people or companies, or turn repeated lookups into a monitored workflow.
This is the operator wrapper around the repo's research stack. It is not a replacement for
deep-research, exa-search, or market-research; it tells you when and how to use them together.
Skill Stack
Pull these ECC-native skills into the workflow when relevant:
- exa-search for fast current-web discovery
- deep-research for multi-source synthesis with citations
- market-research when the end result should be a recommendation or ranked decision
- lead-intelligence when the task is people/company targeting instead of generic research
- knowledge-ops when the result should be stored in durable context afterward
When to Use
- user says "research", "look up", "compare", "who should I talk to", or "what's the latest"
- the answer depends on current public information
- the user already supplied evidence and wants it factored into a fresh recommendation
- the task may be recurring enough that it should become a monitor instead of a one-off lookup
Guardrails
- do not answer current questions from stale memory when fresh search is cheap
- separate:
- sourced fact
- user-provided evidence
- inference
- recommendation
- do not spin up a heavyweight research pass if the answer is already in local code or docs
Workflow
1. Start from what the user already gave you
Normalize any supplied material into:
- already-evidenced facts
- needs verification
- open questions
Do not restart the analysis from zero if the user already built part of the model.
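The triage in step 1 can be sketched as a small data structure. This is an illustrative sketch, not part of ECC; the class and method names are invented here, but the three buckets mirror the list above:

```python
from dataclasses import dataclass, field

# Triage buckets for user-supplied material, mirroring the workflow's
# three categories. Names are illustrative, not an ECC API.
@dataclass
class EvidenceTriage:
    evidenced: list[str] = field(default_factory=list)       # already-evidenced facts
    unverified: list[str] = field(default_factory=list)      # needs verification
    open_questions: list[str] = field(default_factory=list)  # open questions

    def add(self, item: str, has_source: bool, is_question: bool = False) -> None:
        if is_question:
            self.open_questions.append(item)
        elif has_source:
            self.evidenced.append(item)
        else:
            self.unverified.append(item)

triage = EvidenceTriage()
triage.add("Vendor A raised a Series B in 2024 (per supplied deck)", has_source=True)
triage.add("Vendor B is cheaper at scale", has_source=False)
triage.add("Does Vendor B offer SSO?", has_source=False, is_question=True)
```

Anything landing in `unverified` or `open_questions` is what the later search steps actually need to resolve; the `evidenced` bucket is carried forward untouched.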
2. Classify the ask
Choose the right lane before searching:
- quick factual answer
- comparison or decision memo
- lead/enrichment pass
- recurring monitoring candidate
3. Take the lightest useful evidence path first
- use exa-search for fast discovery
- escalate to deep-research when synthesis or multiple sources matter
- use market-research when the outcome should end in a recommendation
- hand off to lead-intelligence when the real ask is target ranking or warm-path discovery
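The lane choice in steps 2-3 can be sketched as a simple router. The returned names are the repo's skill names from the Skill Stack above, but the classification heuristics here are hedged assumptions, not ECC's actual dispatch logic:

```python
# Map a classified ask to the lightest useful research lane.
# Skill names are real (see Skill Stack); the routing rules are a sketch.
def route(ask_type: str, needs_recommendation: bool = False) -> str:
    if ask_type == "enrichment":
        return "lead-intelligence"   # people/company targeting
    if ask_type == "comparison" or needs_recommendation:
        return "market-research"     # should end in a ranked decision
    if ask_type == "synthesis":
        return "deep-research"       # multi-source, cited
    return "exa-search"              # default: fast discovery first

print(route("factual"))  # the lightest path wins by default
```

The point of the default branch is the guardrail above: never start with a heavyweight pass when fast discovery might already answer the question.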
4. Report with explicit evidence boundaries
For important claims, say whether they are:
- sourced facts
- user-supplied context
- inference
- recommendation
Freshness-sensitive answers should include concrete dates.
5. Decide whether the task should stay manual
If the user is likely to ask the same research question repeatedly, say so explicitly and recommend a monitoring or workflow layer instead of repeating the same manual search forever.
Output Format
QUESTION TYPE
- factual / comparison / enrichment / monitoring
EVIDENCE
- sourced facts
- user-provided context
INFERENCE
- what follows from the evidence
RECOMMENDATION
- answer or next move
- whether this should become a monitor
Pitfalls
- do not mix inference into sourced facts without labeling it
- do not ignore user-provided evidence
- do not use a heavy research lane for a question local repo context can answer
- do not give freshness-sensitive answers without dates
Verification
- important claims are labeled by evidence type
- freshness-sensitive outputs include dates
- the final recommendation matches the actual research mode used