Gtm-eng-skills niche-signal-discovery
Discover niche first-party signals that differentiate Closed Won vs Closed Lost accounts for ICP analysis. Use when the user provides won/lost customer domain lists and wants differential signals (website content, job listings, tech stack, maturity markers) to build account scoring models and prospecting criteria. Triggers: ICP analysis, niche signals, won vs lost analysis, differential signals, signal discovery, ICP signal report, account scoring signals, lead scoring, first-party signals, buyer signals. Before reading this file, first read gtm-meta-skill to understand the Deepline CLI tool and how to use it. Then read this file for guidance on the task.
```bash
git clone https://github.com/getaero-io/gtm-eng-skills
T=$(mktemp -d) && git clone --depth=1 https://github.com/getaero-io/gtm-eng-skills "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/niche-signal-discovery" ~/.claude/skills/getaero-io-gtm-eng-skills-niche-signal-discovery && rm -rf "$T"
```
Niche Signal Discovery
Discover differential signals between Closed Won and Closed Lost accounts by extracting multi-page website content and job listings, then computing Laplace-smoothed lift scores to identify what distinguishes buyers from non-buyers.
Prerequisites
- Deepline CLI — All enrichment runs through `deepline enrich`. No separate API keys for exa/crustdata/apollo etc.
- Python 3 stdlib only — no pip dependencies for any shipped script.
- Credits — ~0.47 credits/company (serper 0.02 + firecrawl 0.05 + crustdata 0.40). Step 7 contact discovery is additional. Always get user approval before paid steps.
Deepline-First Principle
Use `deepline enrich` for all enrichment, `deepline tools execute` for one-offs, `deepline playground` for inspection. Reruns are idempotent. Refer to gtm-meta-skill for command patterns and provider playbooks.
Input requirements
- Won and lost customer domain lists (≥20 won + ≥10 lost for statistical significance)
- Lookalikes can supplement Won if Closed Won < 15. Add a Dataset Caveat to the report.
- Target company context from Step 0 — what they sell, who they sell to, key personas.
Pipeline
0. Discover target company (what they sell, who they sell to)
0.5. Discover ecosystem (competitors, tech stack, buyer personas)
1. Prepare input CSV (deduplicate within won/lost groups)
1.0.5. Build "do not re-contact" index from user's existing list (scripts/dedupe_utils.py)
1.5. Generate vertical-specific configs (keywords, tools, job roles)
2. Multi-page website + job extraction (deepline enrich)
3. Quality gate — verify file completeness + coverage (>80%)
3.5. Review configs against enriched data
4. Differential analysis (scripts/analyze_signals.py)
5. Generate report — every top signal must include cited evidence
6. Signal interpretation review
7. Top 10 net-new prospects [REQUIRED] + contacts/emails [optional, costs credits]
Step 7 is required. A signal report without 10 actionable companies forces the reader to do their own prospecting pass — exactly the expensive thing they wanted to skip. Contacts/emails are optional only because they cost extra credits; always offer them.
Signal reliability hierarchy
Highest → lowest confidence:
- Job listings — active budget + acknowledged pain. Highest-intent.
- Analyst validation (Gartner/Forrester) — typically 4-7x lift, rare in lost.
- Compliance infrastructure (SOC2/GDPR/ISO) — procurement maturity.
- Buyer pain language on careers/blog — operational awareness.
- Tech stack tools (niche SaaS) — infrastructure readiness.
- Website product/marketing content — variable; can be buyer OR competitor.
When website signals fail: For B2B back-office tools (AR, billing, compliance), buyers don't publish their pain on marketing pages. Prioritize jobs + tech stack + firmographics for these verticals.
What NOT to use for scoring
CRM fields populated by AE activity — catalyst note count, OCR-derived counts (`number_of_champions_c`, `number_of_decision_makers_c`), MEDDPICC picklists, any "did the AE do X on this opp" field — correlate with win rate as engagement artifacts, not causal signals. They get filled in after the AE decides an opp is worth working. Never use them as scoring inputs. On one real run, catalyst notes showed "109x lift" — almost made the TL;DR before we caught the direction of causality.
Rule of thumb: every scoring input must be observable BEFORE the AE touches the account. Read `references/scoring-pitfalls.md` for the full list and the "safer alternative read" for loss-reason data.
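As a minimal sketch of that rule of thumb — assuming candidate scoring features arrive as CSV column names, and noting that every blocklist entry other than the two `*_c` fields quoted above is hypothetical — a pre-scoring filter could look like:

```python
# Sketch: strip AE-activity fields before any scoring model sees them.
# Only number_of_champions_c / number_of_decision_makers_c come from this doc;
# the other names and the MEDDPICC prefix are illustrative assumptions.
AE_ACTIVITY_FIELDS = {
    "catalyst_note_count",           # hypothetical name for the catalyst-note counter
    "number_of_champions_c",
    "number_of_decision_makers_c",
}
AE_ACTIVITY_PREFIXES = ("meddpicc_",)  # hypothetical prefix for MEDDPICC picklists

def scoring_safe(columns):
    """Keep only columns observable before an AE touches the account."""
    return [
        c for c in columns
        if c.lower() not in AE_ACTIVITY_FIELDS
        and not c.lower().startswith(AE_ACTIVITY_PREFIXES)
    ]
```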
Step 0: Target company discovery
Do this FIRST. The entire pipeline (exa query, keywords, tech stack, job roles) adapts based on this discovery; skipping it produces generic/irrelevant signals.
`deepline agent`: "Research {{company-domain}}. Summarize what the company sells, who they sell to, what makes them different, and any example customers."
Document: (1) product category, (2) target buyer persona, (3) key differentiation, (4) example customers.
Step 0.5: Ecosystem discovery
Three parallel `deepline agent` queries:
- Competitors — `"{product category} software alternatives competitors"` → 3-5 names
- Tech stack — `"{buyer persona} software stack"` → 10-15 tools by category
- Job roles — `"{buyer persona} job titles"` → 10-15 title variations
These feed Step 1.5 config generation.
Step 1: Prepare input CSV
```
domain,status
customer1.com,won
non-customer1.com,lost
```
Deduplicate within the input. If a domain appears in BOTH won and lost (same company, multiple deals), Deepline only fetches job listings once — silently undercounting `won_with_jobs`. Remove ALL rows for cross-group domains:
```python
from collections import Counter

counts = Counter(r['domain'] for r in rows)
duplicate_domains = {d for d, c in counts.items() if c > 1}
# Drop every row in duplicate_domains, not just one copy.
```
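A fuller sketch of this preparation step, assuming the raw export uses the `domain,status` layout shown above (file names are illustrative):

```python
import csv
from collections import Counter

with open("raw_input.csv", newline="") as f:              # illustrative file name
    rows = list(csv.DictReader(f))

# 1) Deduplicate within each group: one row per (domain, status) pair.
unique = {(r["domain"].strip().lower(), r["status"].strip().lower()) for r in rows}

# 2) Drop every domain that survives in BOTH won and lost.
counts = Counter(domain for domain, _ in unique)
cross_group = {d for d, c in counts.items() if c > 1}
clean = sorted((d, s) for d, s in unique if d not in cross_group)

with open("output/company-icp-input.csv", "w", newline="") as f:
    w = csv.writer(f)
    w.writerow(["domain", "status"])
    w.writerows(clean)
```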
Step 1.0.5: Build "do not re-contact" index
Before any prospects ship in Step 7, dedupe candidates against whatever "already known" list the user provides — customers, CRM export, past outbound, a previous run's output. Always ask explicitly; if the user has no list, note it as a caveat in the final report rather than silently skipping.
Order: apex domain first, fuzzy company name as fallback. Use the shipped helper — it handles public-suffix multi-label TLDs (co.uk, co.jp, com.au) and corporate-suffix stripping:
```bash
python3 scripts/dedupe_utils.py --selftest   # one-time sanity check
python3 scripts/dedupe_utils.py \
  --existing customers.csv --candidates prospects_raw.csv \
  --out-actionable prospects_actionable.csv --out-matched already_known.csv
```
Don't silently drop CRM matches — categorize them: Net-new / Account-only / Re-engage / Active-open / Current-customer.
Read `references/dedupe.md` for the failure modes (raw-string match missing `amsynergy.nikon.com` → `nikon.com` cost 24 of 50 prospects in one run), category definitions, and library usage.
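The shipped `extract_apex()` is the helper to use; purely to illustrate why apex-first matching catches cases raw-string comparison misses, a simplified sketch with a hard-coded handful of multi-label suffixes (the real helper covers the full set plus corporate-suffix stripping):

```python
# Simplified illustration only — use scripts/dedupe_utils.py in practice.
MULTI_LABEL_SUFFIXES = {"co.uk", "co.jp", "com.au"}

def extract_apex_sketch(hostname: str) -> str:
    parts = hostname.lower().strip(".").split(".")
    if len(parts) >= 3 and ".".join(parts[-2:]) in MULTI_LABEL_SUFFIXES:
        return ".".join(parts[-3:])    # e.g. careers.acme.co.uk -> acme.co.uk
    return ".".join(parts[-2:])        # e.g. amsynergy.nikon.com -> nikon.com

assert extract_apex_sketch("amsynergy.nikon.com") == "nikon.com"
assert extract_apex_sketch("careers.acme.co.uk") == "acme.co.uk"
```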
Step 1.5: Generate vertical-specific configs
Create three JSON files in `output/{{company}}/`:
```
{{company}}-keywords.json    # product category, pain language, competitor names, maturity terms
{{company}}-tools.json       # niche SaaS tools by category
{{company}}-job-roles.json   # buyer persona job titles
```
Read `references/keyword-catalog.md` for the JSON schema, generation patterns, and multi-vertical examples (creative ops, AR automation, sales engagement, developer tools).
Validation: Do the configs match the target's vertical and buyer persona? If not, refine based on Step 0/0.5 findings.
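For orientation only, a hypothetical generator showing the kind of content each file carries — the dict shapes below are assumptions, not the schema `analyze_signals.py` requires, so check `references/keyword-catalog.md` before relying on them:

```python
import json
import pathlib

# Hypothetical shapes for an AR-automation vendor; the real schema lives in
# references/keyword-catalog.md.
keywords = {
    "product_category": ["accounts receivable automation"],
    "pain_language": ["manual invoice follow-up", "days sales outstanding"],
    "competitors": ["ExampleCompetitor"],
    "maturity_terms": ["SOC 2", "ISO 27001"],
}
tools = {"billing": ["ExampleBillingTool"], "crm": ["ExampleCRM"]}
job_roles = {"buyer_personas": ["VP Finance", "Controller", "AR Manager"]}

out = pathlib.Path("output/examplecorp")        # {{company}} placeholder
out.mkdir(parents=True, exist_ok=True)
for name, payload in [("keywords", keywords), ("tools", tools), ("job-roles", job_roles)]:
    (out / f"examplecorp-{name}.json").write_text(json.dumps(payload, indent=2))
```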
Step 2: Deepline enrichment
Never scrape just the homepage. Use Serper to discover relevant pages, Firecrawl to extract content.
Step 2a - Discover pages with Serper (0.02 credits/company):
```bash
deepline enrich \
  --input output/{{company}}-icp-input.csv \
  --output output/{{company}}-discovered.csv \
  --with '{"alias":"pages","tool":"serper_google_search","payload":{"query":"site:{{domain}} product OR features OR integrations OR customers OR security OR pricing OR careers OR about"}}' \
  --json
```
Adapt the query by vertical: add `compliance OR audit` for back-office, `documentation OR api` for developer tools, `portfolio OR workflow` for creative tools.
Step 2b - Scrape top 5 pages with Firecrawl (0.05 credits/company):
Extract URLs from Serper results, then scrape each:
```bash
deepline enrich \
  --input output/{{company}}-urls.csv \
  --output output/{{company}}-scraped.csv \
  --with '{"alias":"content","tool":"firecrawl_scrape","payload":{"url":"{{url}}"}}' --json
```
Aggregate scraped pages back into one row per domain, formatted as `{"data":{"results":[{url, title, text}]}}` for the analysis script.
Step 2c - Job listings with Crustdata (0.40 credits/company):
```bash
deepline enrich \
  --input output/{{company}}-aggregated.csv \
  --output output/{{company}}-enriched.csv \
  --with '{"alias":"jobs","tool":"crustdata_job_listings","payload":{"companyDomains":["{{domain}}"]}}' --json
```
Total cost: ~0.47 credits/company. Get user approval first. Example: "60 companies x 0.47 = ~28 credits."
Step 3: Quality gate
`deepline enrich` returns to the terminal before OS buffers fully flush. Running the analysis script immediately can read a partially written file and produce `won_with_jobs: 0` even when the data is fine. Always verify:
```bash
INPUT_ROWS=$(wc -l < output/{{company}}-icp-input.csv)
OUTPUT_ROWS=$(wc -l < output/{{company}}-enriched.csv)
echo "Input: $INPUT_ROWS, Output: $OUTPUT_ROWS"   # should match
```
Then spot-check that won rows have job data, that website coverage is >80%, and that average content depth is 6-8 pages / 12-20K chars per company.
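Beyond the row-count check, a minimal coverage spot-check could look like the sketch below, assuming the enriched CSV keeps `status`, `website`, and `jobs` columns; the full verification script in `references/quality-gate.md` is the authoritative version:

```python
import csv
import json

with open("output/company-enriched.csv", newline="") as f:   # illustrative file name
    rows = list(csv.DictReader(f))

def has_payload(cell):
    """True when the cell holds non-empty, parseable JSON."""
    try:
        return bool(cell) and bool(json.loads(cell))
    except (ValueError, TypeError):
        return False

won = [r for r in rows if r["status"] == "won"]
site_cov = sum(has_payload(r.get("website", "")) for r in rows) / max(len(rows), 1)
won_jobs = sum(has_payload(r.get("jobs", "")) for r in won)

print(f"website coverage: {site_cov:.0%} (want >80%)")
print(f"won rows with job data: {won_jobs}/{len(won)}")
```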
Read `references/quality-gate.md` for the full verification script, the buffer-flush retry pattern, and the "auto-extracted domain validation" check that has caught up to 53% false-positive rates in CRM-exported customer lists.
Step 3.5: Review configs against enriched data
```bash
deepline playground output/{{company}}-enriched.csv
```
Red flags:
- Keyword in <10% of enriched companies → too niche, broaden
- Keyword in >90% → too generic, refine
- Product-category keywords appear frequently in Won → wrong product category, those companies are competitors not buyers
- Job roles missing from actual listings → wrong buyer persona
Fix and regenerate configs if needed.
Step 4: Differential analysis
```bash
python3 scripts/analyze_signals.py \
  --input output/{{company}}-enriched.csv \
  --keywords output/{{company}}-keywords.json \
  --tools output/{{company}}-tools.json \
  --job-roles output/{{company}}-job-roles.json \
  --output output/{{company}}-analysis.json
```
The script computes substring-match presence, Laplace-smoothed lift, source breakdown (website/jobs/both), tech-stack mentions, job-role prevalence, anti-fit signals, and per-keyword evidence quotes (±40 chars with URLs) — the evidence array is what Step 5 renders.
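For orientation, a sketch of one common add-one form of Laplace-smoothed lift; `analyze_signals.py` is the source of truth for the exact constants and matching logic it uses:

```python
def laplace_lift(won_hits: int, won_total: int, lost_hits: int, lost_total: int) -> float:
    """Lift of a signal in won vs lost accounts with add-one smoothing,
    so a signal absent from lost doesn't divide by zero."""
    won_rate = (won_hits + 1) / (won_total + 2)
    lost_rate = (lost_hits + 1) / (lost_total + 2)
    return won_rate / lost_rate

# e.g. a keyword present in 12 of 37 won and 1 of 20 lost accounts:
# laplace_lift(12, 37, 1, 20) ≈ 3.7
```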
Step 5: Report generation
Read `references/report-template.md` for the full report structure (Quick Reference Dashboard at the top, then detail sections), the signal-strength visual scale, Apollo URL format, and all quality rules. Critical rules in brief:
- Raw counts always (`15% (6)`, not just `15%`); sample sizes in headers (`Won (n=37)`)
- Bold only signals with lift > 2x AND count ≥ 3 companies
- Flag n=1 signals — they're statistically meaningless
- Source evidence is mandatory for every top signal (lift ≥ 1.5 AND won ≥ 3) — 3-5 cited quotes per signal with source type, company, page/job title, ±40-char quote, and live URL. The analysis script outputs this; render it, don't decide whether to. Signals without 3+ citations get demoted and flagged *(insufficient evidence)*.
- Annotate each evidence quote with ✅ (clear buyer signal) or ⚠️ (vendor-adjacent — the company sells something similar, so the keyword on their product page isn't a buyer signal)
- Tier 1 cheatsheet point values must match the Section 6 scoring model — cross-check both before shipping
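These thresholds are mechanical enough to encode; a small sketch of how a renderer could apply the bold / n=1 / insufficient-evidence rules above (function and field names are illustrative, not part of the shipped scripts):

```python
def render_signal(name: str, lift: float, won_count: int, citations: list) -> str:
    """Apply the report rules above to one signal line; names are illustrative."""
    label = f"{name} ({lift:.1f}x, {won_count} won)"
    if won_count == 1:
        label += " ⚠️ n=1 — statistically meaningless, verify manually"
    if lift > 2 and won_count >= 3:
        label = f"**{label}**"                      # bold only strong, multi-company signals
    if lift >= 1.5 and won_count >= 3 and len(citations) < 3:
        label += " *(insufficient evidence)*"       # demote top signals lacking 3+ citations
    return label
```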
Step 6: Signal interpretation
Read `references/signal-interpretation.md` before writing interpretation columns. Key rules:
- Website content mentioning what the target sells = competitor signal (not buyer)
- Job listings = highest-intent buyer signal
- Same keyword means different things on product page vs careers page vs blog
- Tech stack tools need context — do they create or solve the target's problem?
Step 7: Top 10 net-new prospects (required)
10 companies are required for every run; contacts + emails are optional (additional Deepline credits). Always offer contact discovery; only run it if the user approves the spend.
```bash
# Companies only — no extra credits beyond Step 2 enrichment:
python3 scripts/find_contacts.py --input prospects_actionable.csv --output top10.csv --top 10 --no-contacts

# Companies + contacts + emails — asks for credit approval.
# --roles is REQUIRED in --contacts mode and must be the buyer-persona job
# titles surfaced in YOUR Step 0/0.5 (not a stale list from a different vertical):
python3 scripts/find_contacts.py --input prospects_actionable.csv --output top10.csv --top 10 \
  --contacts --roles "<persona job titles from Step 0.5>"
```
When `--contacts` is on, the orchestrator runs a 3-phase chain via Deepline:
1. `company_to_contact_by_role_waterfall` (free, mature companies)
2. `exa_search_people` fallback for any company Phase 1 missed — mandatory. On the run that motivated this, Phase 1 returned 0 contacts on all 10 top prospects (small/non-US industrial); Exa found 15 real contacts at 6 of those 10 in the same pass.
3. `name_and_domain_to_email_waterfall` with `linkedin_url` supplied and apex-domain validation — providers return stale addresses (`@orbitalatk.com` for someone now at X-Bow, personal Gmails, wrong-company false positives). Mismatched apex → publish "(email not found)", keep the raw value in `raw_email` for auditing.
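A sketch of that Phase 3 apex check, assuming an apex helper such as `extract_apex()` from `scripts/dedupe_utils.py` (passed in here as a callable so the sketch stays self-contained):

```python
def validate_email(raw_email: str, company_domain: str, extract_apex) -> dict:
    """Publish the email only when its apex matches the company's apex;
    otherwise keep the provider's raw value for auditing."""
    published = "(email not found)"
    if raw_email and "@" in raw_email:
        email_apex = extract_apex(raw_email.split("@", 1)[1])
        if email_apex == extract_apex(company_domain):
            published = raw_email
    return {"email": published, "raw_email": raw_email}

# e.g. a provider returning an @orbitalatk.com address for someone now at a
# different company publishes "(email not found)" and keeps the stale value in raw_email.
```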
Read `references/step-7-prospects.md` for the required vs. optional output fields, the prospect-card skeleton, the Phase 2 Exa guardrails (title parsing + company-match filter), and the "10 is a ceiling, not a floor" guidance.
Enrichment data structure
After enrichment, each row has:
- `website` column → JSON: `{"data":{"results":[{text, url, title}]}}` (aggregated from Firecrawl scrapes)
- `jobs` column → JSON: `{"result":{"listings":[{title, description, url}]}}` (Crustdata format — note `result` not `data`, `title` not `job_title`)

`scripts/analyze_signals.py` auto-detects `__dl_full_result__` columns; override with `--website-col N --jobs-col N` for other column names.
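If you need to inspect these columns outside the shipped script, a minimal sketch of pulling the text back out, assuming the `website` and `jobs` column names above and an enriched CSV path of your own:

```python
import csv
import json

def texts_from_row(row):
    """Extract website page texts and job listings from one enriched CSV row."""
    website = json.loads(row["website"]) if row.get("website") else {}
    jobs = json.loads(row["jobs"]) if row.get("jobs") else {}
    page_texts = [p.get("text", "") for p in website.get("data", {}).get("results", [])]
    listings = jobs.get("result", {}).get("listings", [])   # note: result, not data
    job_texts = [f'{j.get("title", "")} {j.get("description", "")}' for j in listings]
    return page_texts, job_texts

with open("output/company-enriched.csv", newline="") as f:   # illustrative file name
    for row in csv.DictReader(f):
        pages, jobs = texts_from_row(row)
        print(row["domain"], len(pages), "pages,", len(jobs), "job listings")
```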
Common pitfalls (top 6 — full list in references/pitfalls.md)
- Skipping target discovery (Step 0) → generic/irrelevant configs.
- Homepage-only scraping → misses pricing, integrations, security, careers.
- Generic tech stack ("AWS", "GitHub", "Slack" appear on most B2B sites) → search for niche SaaS specific to the buyer persona.
- Trusting n=1 signals → require 3+ companies for Tier 1 scoring; flag single-company signals with a verification note.
- Raw-string dedupe missing parent domains — `amsynergy.nikon.com ≠ nikon.com` for naive comparison. Always use `extract_apex()`. 24 of 50 "net-new" prospects in one real run were already in the CRM as parent-domain entries the raw-string dedupe missed.
- Trusting confirmation-biased CRM fields (catalyst notes, OCR counts, MEDDPICC) as signals — they're downstream of AE engagement, not causal. Read the "What NOT to use for scoring" section above.
Read `references/pitfalls.md` for the full 18-item list including substring false positives, vendor-vs-buyer signal context, back-office-tool interpretation, and shipping-without-prospects.
Proven signal patterns
Read `references/proven-signals.md` for typical lift ranges across verticals (analyst validation 4.5-6.5x, hiring signals 3.8-5.5x, compliance infra 2.1-6.5x, etc.), high-confidence anti-fit patterns (consumer signals 0.2x, retention/churn 0.2-0.4x), and a starter 0-100 scoring model with three tiers (Core Fit / Buying Intent / Infrastructure Readiness).
References
- `references/keyword-catalog.md` — JSON schema + multi-vertical examples for Step 1.5 config generation
- `references/dedupe.md` — Step 1.0.5 dedupe failure modes, categorization rules, library usage
- `references/quality-gate.md` — Step 3 verification scripts, buffer-flush retry pattern, auto-extracted-domain validation
- `references/report-template.md` — Step 5 full report structure, signal-strength scale, Apollo URL format, all quality rules
- `references/signal-interpretation.md` — Step 6 buyer-vs-seller-vs-competitor rules
- `references/step-7-prospects.md` — Step 7 prospect-card skeleton, Exa guardrails, Phase 3 apex validation
- `references/scoring-pitfalls.md` — Confirmation-biased CRM fields to exclude from scoring
- `references/pitfalls.md` — Full 18-item pitfalls list
- `references/proven-signals.md` — Typical lift ranges + scoring model guidance
- `scripts/analyze_signals.py` — Step 4 differential analysis. Auto-detects columns.
- `scripts/dedupe_utils.py` — Step 1.0.5 + Step 7 email validation. `extract_apex()`, `norm_name()`, `match_against_existing()`. Stdlib only. `--selftest` flag for one-time install verification.
- `scripts/find_contacts.py` — Step 7 orchestrator. `--contacts` / `--no-contacts` toggle, 3-phase Deepline chain.
Changelog
- 2026-04-13 — Switched Step 2 from exa_search (~5 credits) to Serper + Firecrawl (~0.07 credits) for website content. Total: ~0.47/company (was ~6). Fixed analyze_signals.py to handle Crustdata's `{"result":{"listings":[]}}` wrapper. Verified E2E on 15 companies.
- 2026-04-07 — Added Step 1.0.5 (dedupe with apex helper), Step 7 (top 10 prospects required, contacts optional via `--contacts` / `--no-contacts`), `references/scoring-pitfalls.md` warning about confirmation-biased CRM fields, mandatory citation rule. Shipped `scripts/dedupe_utils.py` + `scripts/find_contacts.py`. Aggressively trimmed inline detail to references — moved Step 3 quality gate, Step 5 quality rules, Common Pitfalls (items 7-15), and Proven Signal Patterns into `references/`. SKILL.md went from 650 to ~250 lines via progressive disclosure.