Awesome-omni-skills lead-enrichment
Lead Enrichment Skill, a workflow skill. Use this skill when the user wants to build data enrichment workflows, score leads against ICP, set up Clay waterfalls, or improve contact data quality. Also use it when the user mentions 'enrichment,' 'data enrichment,' 'Clay,' 'waterfall enrichment,' 'ICP scoring,' 'lead scoring,' 'intent data,' 'contact verification,' 'Apollo,' 'ZoomInfo,' or 'data quality.' This skill covers lead enrichment waterfalls, ICP scoring frameworks, and contact verification systems. Do NOT use it for technical implementation, code review, or software architecture. The operator should preserve the upstream workflow, copied support files, and provenance before merging or handing off.
git clone https://github.com/diegosouzapw/awesome-omni-skills
T=$(mktemp -d) && git clone --depth=1 https://github.com/diegosouzapw/awesome-omni-skills "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills_omni/lead-enrichment" ~/.claude/skills/diegosouzapw-awesome-omni-skills-lead-enrichment-f43085 && rm -rf "$T"
skills_omni/lead-enrichment/SKILL.md
Lead Enrichment Skill
Overview
This public intake copy packages packages/skills-catalog/skills/(gtm)/lead-enrichment from https://github.com/tech-leads-club/agent-skills into the native Omni Skills editorial shape without hiding its origin.
Use it when the operator needs the upstream workflow, support files, and repository context to stay intact while the public validator and private enhancer continue their normal downstream flow.
This intake keeps the copied upstream files intact and uses metadata.json plus ORIGIN.md as the provenance anchor for review.
Lead Enrichment Skill You are a B2B data enrichment architect. You build waterfall enrichment systems, ICP scoring frameworks, and contact verification pipelines that maximize coverage while minimizing cost per verified lead. You know the provider landscape cold and design workflows that sequence providers for maximum incremental yield.
Imported source sections that did not map cleanly to the public headings are still preserved below or in the support files. Notable imported sections: Before Starting, Section 1: ICP Scoring Framework, Section 2: Enrichment Waterfall Architecture, Section 4: Contact Verification Pipeline, Section 5: Performance Benchmarks, Section 6: Compliance.
When to Use This Skill
Use this section as the trigger filter. It should make the activation boundary explicit before the operator loads files, runs commands, or opens a pull request.
- Use when the request clearly matches the imported source intent: When the user wants to build data enrichment workflows, score leads against ICP, set up Clay waterfalls, or improve contact data quality. Also use when the user mentions 'enrichment,' 'data enrichment,' 'Clay,'....
- Use when the operator should preserve upstream workflow detail instead of rewriting the process from scratch.
- Use when provenance needs to stay visible in the answer, PR, or review packet.
- Use when copied upstream references, examples, or scripts materially improve the answer.
- Use when the workflow should remain reviewable in the public intake repo before the private enhancer takes over.
Operating Table
| Situation | Start here | Why it matters |
|---|---|---|
| First-time use | | Confirms repository, branch, commit, and imported path before touching the copied workflow |
| Provenance review | | Gives reviewers a plain-language audit trail for the imported source |
| Workflow execution | | Starts with the smallest copied file that materially changes execution |
| Supporting context | | Adds the next most relevant copied source file without loading the entire package |
| Handoff decision | | Helps the operator switch to a stronger native skill when the task drifts |
Workflow
This workflow is intentionally editorial and operational at the same time. It keeps the imported source useful to the operator while still satisfying the public intake standards that feed the downstream enhancer flow.
Imported Workflow Notes
Imported: Section 3: Clay Workflow Design
Clay Architecture Basics
Clay operates on a table-based model. Each row is a lead. Each column is a data field. Enrichment steps run left-to-right across columns, with waterfalls configured per field.
Core Clay concepts:
| Concept | What It Does |
|---|---|
| Table | Your lead list - imported via CSV, CRM sync, or API |
| Enrichment Column | Calls a provider to fill a specific field |
| Waterfall Column | Tries multiple providers in sequence for one field |
| AI Column | Uses GPT/Claude to derive insights from other columns |
| Formula Column | Computes values from other columns (like ICP score) |
| Integration Push | Sends enriched data to CRM, sequencer, or webhook |
Credit Consumption Guide
Clay charges credits per enrichment action. Budget carefully.
| Action Type | Credits Per Row | Example |
|---|---|---|
| Basic enrichment (1 provider) | 4-10 | Email lookup, job title |
| Waterfall enrichment (3 providers) | 12-30 | Email waterfall with fallbacks |
| AI/GPT column | 10-25 | Persona summary, pain point extraction |
| Multi-step automation | 30+ | Full enrichment + scoring + routing |
Credit math: 1,000 leads at 25 credits/lead = 25,000 credits. Starter plan handles that in 12.5 months, Explorer in 2.5 months, Pro in 0.5 months. Pre-filter aggressively to avoid burning credits on unqualified leads.
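If it helps to sanity-check a budget before importing a list, the arithmetic above takes a few lines to reproduce; a minimal sketch, where the per-lead cost and plan allowances are the sample figures from this section, not fixed constants:

```python
# Sketch: months of plan allowance consumed by one enrichment run.
# Figures mirror the sample credit math above; substitute your own.
leads = 1_000
credits_per_lead = 25                      # full waterfall + AI column estimate
plans = {"Starter": 2_000, "Explorer": 10_000, "Pro": 50_000}  # credits/mo

total = leads * credits_per_lead           # 25,000 credits
print(f"Run cost: {total:,} credits")
for plan, monthly in plans.items():
    print(f"{plan}: {total / monthly:.1f} months of allowance")
```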
Clay Pricing (2026)
| Plan | Price/Mo | Credits/Mo | Per Credit |
|---|---|---|---|
| Free | $0 | 100 | N/A |
| Starter | $149 | 2,000 | $0.075 |
| Explorer | $349 | 10,000 | $0.035 |
| Pro | $800 | 50,000 | $0.016 |
| Enterprise | Custom | Custom | Custom |
Sample Clay Table Structure
Build your enrichment workflow in this column order:
```
Col A: Company Domain (input)
Col B: Contact Name (input or enrichment)
Col C: LinkedIn URL (Apollo waterfall)
Col D: Verified Email (email waterfall: Apollo > Hunter > FindyMail)
Col E: Job Title (Apollo or ZoomInfo)
Col F: Employee Count (Clearbit or Clay built-in)
Col G: Industry (Clearbit or Clay built-in)
Col H: Tech Stack (BuiltWith via Clay)
Col I: Bombora Surge Score (Bombora integration or manual import)
Col J: Firmographic Score (Formula: weighted average of F, G, geography)
Col K: Technographic Score (Formula: based on H match rules)
Col L: Intent Score (Formula: based on I + hiring + funding signals)
Col M: ICP Score (Formula: J*0.30 + K*0.30 + L*0.40)
Col N: AI Personalization (AI column: generate first-line based on B, E, H)
Col O: Routing (Formula: if M > 85 then "hot" elif M > 70 then "warm")
```
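The formula columns M and O are the only logic the operator writes by hand; a minimal Python sketch of them, with hypothetical key names standing in for the Clay column references:

```python
# Sketch: Col M (ICP score) and Col O (routing) as row functions.
# The dict keys are illustrative stand-ins for columns J, K, and L.
def icp_score(row: dict) -> float:
    # Col M = J*0.30 + K*0.30 + L*0.40
    return (row["firmographic"] * 0.30
            + row["technographic"] * 0.30
            + row["intent"] * 0.40)

def route(score: float) -> str:
    # Col O: if M > 85 then "hot" elif M > 70 then "warm"
    if score > 85:
        return "hot"
    if score > 70:
        return "warm"
    return "nurture"   # assumption: everything else falls to nurture

row = {"firmographic": 80, "technographic": 70, "intent": 95}
m = icp_score(row)   # 80*0.3 + 70*0.3 + 95*0.4 = 83.0
print(m, route(m))   # 83.0 warm
```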
Credit Governance Rules
- Pre-qualify before enriching - domain check + firmographic filter before spending on email waterfall
- Cap per campaign - no single campaign burns more than 40% of monthly credits
- Alert at 75% - Slack/email alert when usage crosses 75% of monthly allowance
- Audit weekly - credits spent vs. leads enriched vs. leads qualified (target >60% qualification)
- 90-day re-enrichment - re-enrich stale contacts before including in new campaigns
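These rules are easy to encode in whatever tracker the operator already runs; a minimal sketch, assuming usage numbers come from your own accounting rather than a Clay API:

```python
# Sketch: credit governance checks (40% campaign cap, 75% usage alert,
# >60% weekly qualification target). Allowance is a sample figure.
MONTHLY_ALLOWANCE = 10_000   # e.g. Explorer plan

def campaign_within_cap(campaign_credits: int) -> bool:
    # No single campaign may burn more than 40% of monthly credits.
    return campaign_credits <= 0.40 * MONTHLY_ALLOWANCE

def usage_alert(credits_spent: int) -> bool:
    # Fire the Slack/email alert once usage crosses 75% of allowance.
    return credits_spent >= 0.75 * MONTHLY_ALLOWANCE

def qualification_healthy(enriched: int, qualified: int) -> bool:
    # Weekly audit target: >60% of enriched leads should qualify.
    return qualified / max(enriched, 1) > 0.60

print(campaign_within_cap(3_500))        # True: 35% of allowance
print(usage_alert(7_800))                # True: 78% spent, alert fires
print(qualification_healthy(900, 480))   # False: 53% qualification rate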
Imported: Before Starting
Confirm with the user: (1) target ICP - industry, company size, geography, persona; (2) current stack - CRM, enrichment tools, outreach platforms; (3) data gaps - which fields are missing or unreliable; (4) volume - leads per month; (5) budget - optimizing for coverage or cost.
If the user provides a draft workflow or existing Clay table, analyze it before suggesting changes.
Examples
Example 1: Ask for the upstream workflow directly
Use @lead-enrichment to handle <task>. Start from the copied upstream workflow, load only the files that change the outcome, and keep provenance visible in the answer.
Explanation: This is the safest starting point when the operator needs the imported workflow, but not the entire repository.
Example 2: Ask for a provenance-grounded review
Review @lead-enrichment against metadata.json and ORIGIN.md, then explain which copied upstream files you would load first and why.
Explanation: Use this before review or troubleshooting when you need a precise, auditable explanation of origin and file selection.
Example 3: Narrow the copied support files before execution
Use @lead-enrichment for <task>. Load only the copied references, examples, or scripts that change the outcome, and name the files explicitly before proceeding.
Explanation: This keeps the skill aligned with progressive disclosure instead of loading the whole copied package by default.
Example 4: Build a reviewer packet
Review @lead-enrichment using the copied upstream files plus provenance, then summarize any gaps before merge.
Explanation: This is useful when the PR is waiting for human review and you want a repeatable audit packet.
Imported Usage Notes
Imported: Examples
- User says: "Set up lead enrichment for our outbound" → Result: Agent asks budget and volume; recommends waterfall tier (e.g. Clay + Apollo for $200–1K/mo); outlines steps: import → pre-filter → waterfall → verify (confidence >0.85) → score → route to SDR/sequence; suggests CRM push and 90-day re-enrich.
- User says: "Our email bounce rate is high" → Result: Agent checks verification (MillionVerifier, NeverBounce) and confidence threshold; recommends catch-all segment and list hygiene; suggests <2% bounce target and re-verification before each campaign.
- User says: "Which enrichment tools should we use?" → Result: Agent uses Quick Reference budget tiers; maps providers (Apollo, Clay, ZoomInfo, Clearbit, etc.); recommends primary/secondary/tertiary order and when to add intent (Bombora, G2).
Best Practices
Treat the generated public skill as a reviewable packaging layer around the upstream repository. The goal is to keep provenance explicit and load only the copied source material that materially improves execution.
- Keep the imported skill grounded in the upstream repository; do not invent steps that the source material cannot support.
- Prefer the smallest useful set of support files so the workflow stays auditable and fast to review.
- Keep provenance, source commit, and imported file paths visible in notes and PR descriptions.
- Point directly at the copied upstream files that justify the workflow instead of relying on generic review boilerplate.
- Treat generated examples as scaffolding; adapt them to the concrete task before execution.
- Route to a stronger native skill when architecture, debugging, design, or security concerns become dominant.
Troubleshooting
Problem: The operator skipped the imported context and answered too generically
Symptoms: The result ignores the upstream workflow in packages/skills-catalog/skills/(gtm)/lead-enrichment, fails to mention provenance, or does not use any copied source files at all.
Solution: Re-open metadata.json, ORIGIN.md, and the most relevant copied upstream files. Load only the files that materially change the answer, then restate the provenance before continuing.
Problem: The imported workflow feels incomplete during review
Symptoms: Reviewers can see the generated SKILL.md, but they cannot quickly tell which references, examples, or scripts matter for the current task.
Solution: Point at the exact copied references, examples, scripts, or assets that justify the path you took. If the gap is still real, record it in the PR instead of hiding it.
Problem: The task drifted into a different specialization
Symptoms: The imported skill starts in the right place, but the work turns into debugging, architecture, design, security, or release orchestration that a native skill handles better.
Solution: Use the related skills section to hand off deliberately. Keep the imported provenance visible so the next skill inherits the right context instead of starting blind.
Imported Troubleshooting Notes
Imported: Troubleshooting
- Low email coverage after waterfall → Cause: Weak providers or wrong order. Fix: Put best provider first; add LinkedIn/FindyMail as fallback; target >85% coverage; track per-provider fill rate.
- ICP score not predicting meetings → Cause: Wrong weights or stale data. Fix: Recalibrate firmographic/technographic/behavioral weights; ensure intent signals fresh; A/B test score bands (e.g. >85 hot, 70–84 warm).
- Credits burning too fast → Cause: Enriching everyone or wrong filters. Fix: Pre-filter by domain, industry, geo; set confidence threshold (e.g. 0.85 outreach, 0.50 nurture); cap credits per qualified lead (<50).
For checklists, benchmarks, and discovery questions, read references/quick-reference.md.
Related Skills
Use these when the work is better handled by that native specialization after this imported skill establishes context:
- @accessibility
- @ai-cold-outreach
- @ai-pricing
- @ai-sdr
Additional Resources
Use this support matrix and the linked files below as the operator packet for this imported skill. They should reflect real copied source material, not generic scaffolding.
| Resource family | What it gives the reviewer | Example path |
|---|---|---|
| References | Copied reference notes, guides, or background material from upstream | |
| Examples | Worked examples or reusable prompts copied from upstream | |
| Scripts | Upstream helper scripts that change execution or validation | |
| Delegation notes | Routing or delegation notes that are genuinely part of the imported package | |
| Assets | Supporting assets or schemas copied from the source package | |
Imported Reference Notes
Imported: Section 1: ICP Scoring Framework
The Three Signal Layers
Every ICP score pulls from three distinct signal categories. Each layer answers a different question about whether to pursue an account.
| Signal Layer | What It Tells You | Key Data Points | Primary Tools |
|---|---|---|---|
| Firmographic | "Does this company match our sweet spot?" | Employee count, ARR, industry, HQ location, funding stage | Clay, Apollo, ZoomInfo, Clearbit |
| Technographic | "Do they use tools that signal fit?" | Tech stack, CRM, marketing automation, cloud infra | BuiltWith, Wappalyzer, HG Insights |
| Intent | "Are they actively looking right now?" | Content consumption, G2 visits, job postings, funding events | Bombora, G2 Buyer Intent, Clay signals |
ICP Scoring Formula
ICP Score = (Firmographic Fit x 0.30) + (Technographic Fit x 0.30) + (Intent Score x 0.40)
Weight intent highest because timing beats targeting. A perfect-fit company with zero buying intent converts worse than a decent-fit company actively researching solutions.
Firmographic Fit Scoring (0-100)
Score each firmographic dimension, then average:
| Dimension | 100 (Ideal) | 75 (Strong) | 50 (Acceptable) | 25 (Stretch) | 0 (Disqualify) |
|---|---|---|---|---|---|
| Employee Count | 50-200 | 200-500 | 20-50 or 500-1000 | 10-20 or 1000-2000 | <10 or >2000 |
| Annual Revenue | $5M-$50M | $50M-$100M | $1M-$5M | $100M-$500M | <$1M or >$500M |
| Industry | SaaS B2B | Fintech, Healthtech | Professional Services | Retail, Media | Government, Education |
| Geography | US, UK, CA | DACH, Nordics | ANZ, Benelux | LATAM, SEA | Sanctioned regions |
| Funding Stage | Series A-B | Series C | Seed, Series D+ | Pre-seed | No data |
Adjust the ranges to your actual closed-won customer profile. Pull ranges from your CRM data, not assumptions.
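A compact sketch of the score-then-average step; the band cutoffs are hard-coded from the sample table above and should be recalibrated against closed-won data:

```python
# Sketch: firmographic fit = average of per-dimension band scores.
# Only the employee-count bands are spelled out; the other four
# dimensions are passed in as already-banded scores.
def employee_band(n: int) -> int:
    if 50 <= n <= 200: return 100
    if 200 < n <= 500: return 75
    if 20 <= n < 50 or 500 < n <= 1000: return 50
    if 10 <= n < 20 or 1000 < n <= 2000: return 25
    return 0

def firmographic_fit(scores: list[int]) -> float:
    # scores: one 0-100 band score per dimension (employees, revenue,
    # industry, geography, funding stage)
    return sum(scores) / len(scores)

dims = [employee_band(120), 100, 75, 100, 75]  # revenue/industry/geo/funding
print(firmographic_fit(dims))  # (100+100+75+100+75)/5 = 90.0
```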
Technographic Fit Scoring (0-100)
Score based on tech stack signals that indicate readiness for your product:
Tech_Score = (Stack_Match x 0.50) + (Complexity_Signal x 0.30) + (Migration_Signal x 0.20)
Stack Match (0-100): Does their current tooling create a natural integration or replacement opportunity?
| Signal | Score |
|---|---|
| Uses your direct integration partner | 100 |
| Uses a competitor you commonly displace | 85 |
| Uses adjacent tooling in your category | 60 |
| Generic/unknown stack | 30 |
| Uses a tool that blocks adoption | 0 |
Complexity Signal (0-100): Does their tech footprint suggest they can absorb your product?
| Signal | Score |
|---|---|
| 3-5 tools in your category (consolidation ready) | 100 |
| Running modern cloud infra + APIs | 80 |
| 1-2 tools, clear gap | 60 |
| Legacy on-prem heavy | 30 |
| No detectable tech presence | 10 |
Migration Signal (0-100): Are they showing signs of switching?
| Signal | Score |
|---|---|
| Job posting for role that owns your category | 100 |
| Removed a competitor from their stack (BuiltWith delta) | 90 |
| Recently adopted adjacent tool | 75 |
| Stable stack, no changes in 12 months | 20 |
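Putting the three components together; a minimal sketch of the Tech_Score formula, with example values drawn from the signal tables above:

```python
# Sketch: Tech_Score = Stack*0.50 + Complexity*0.30 + Migration*0.20,
# with each component scored 0-100 from the tables above.
def tech_score(stack: int, complexity: int, migration: int) -> float:
    return stack * 0.50 + complexity * 0.30 + migration * 0.20

# Example: displaces a competitor (85), modern cloud infra (80),
# job posting for a role that owns the category (100)
print(tech_score(85, 80, 100))  # 42.5 + 24.0 + 20.0 = 86.5
```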
Intent Score Calculation (0-100)
Intent scoring requires combining multiple signal sources. No single provider captures the full picture.
Intent_Score = max(Bombora_Surge, G2_Intent, First_Party) x 0.60 + Hiring_Signal x 0.20 + Funding_Signal x 0.20
Bombora Company Surge scoring:
| Surge Score | Interpretation | Lead Priority |
|---|---|---|
| 80-100 | Heavy active research across multiple topics | Route to SDR within 24 hours |
| 60-79 | Moderate research, early buying cycle | Add to nurture + monitor |
| 40-59 | Light research, could be noise | Score with other signals before acting |
| Below 40 | No meaningful surge detected | Do not prioritize |
G2 Buyer Intent signals:
| Signal Type | Weight | Why It Matters |
|---|---|---|
| Visited your G2 profile | High | Direct purchase consideration |
| Compared you vs. competitor | Very High | Active evaluation stage |
| Visited category page | Medium | Early research phase |
| Read reviews in your category | Medium-High | Validation stage |
First-party intent signals (your own data):
| Signal | Score Boost |
|---|---|
| Pricing page visit (2+ times) | +30 |
| Demo page visit without booking | +25 |
| Downloaded gated content | +15 |
| Blog visit (3+ pages, single session) | +10 |
| Email opened but no click | +5 |
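A sketch of the composite intent formula; note the G2 table above is qualitative, so treating G2 intent as a 0-100 number is an assumption here, as is capping the summed first-party boosts at 100 like the other layers:

```python
# Sketch: Intent_Score = max(surge, g2, first_party)*0.60
#         + hiring*0.20 + funding*0.20
FIRST_PARTY_BOOSTS = {          # from the first-party table above
    "pricing_2plus": 30, "demo_no_booking": 25, "gated_download": 15,
    "blog_3plus": 10, "open_no_click": 5,
}

def intent_score(bombora: int, g2: int, signals: list[str],
                 hiring: int, funding: int) -> float:
    # Assumption: boosts sum and cap at 100 like the other layers.
    first_party = min(sum(FIRST_PARTY_BOOSTS[s] for s in signals), 100)
    return (max(bombora, g2, first_party) * 0.60
            + hiring * 0.20 + funding * 0.20)

# Surge 72, no G2 data, pricing visits + gated download, strong hiring signal
print(intent_score(72, 0, ["pricing_2plus", "gated_download"], 100, 0))
# max(72, 0, 45)*0.6 + 100*0.2 = 43.2 + 20.0 = 63.2
```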
Composite Score Interpretation
| ICP Score Range | Action | SLA |
|---|---|---|
| 85-100 | Hot lead - immediate SDR outreach | Contact within 4 hours |
| 70-84 | Warm lead - prioritized sequence | Enroll within 24 hours |
| 50-69 | Nurture - automated drip | Weekly content touches |
| 30-49 | Monitor - check quarterly | Re-score monthly |
| Below 30 | Disqualify - do not pursue | Archive, re-evaluate in 6 months |
Imported: Section 2: Enrichment Waterfall Architecture
What a Waterfall Does
A waterfall enrichment system queries multiple data providers in sequence. Each provider gets a chance to fill missing fields. The system stops querying for a field once a provider returns a verified result.
Single-provider enrichment typically yields 55-65% coverage. A well-built waterfall pushes coverage to 85-95% by stacking complementary providers.
Waterfall Flow
```
Input Lead
    |
    v
[Pre-qualification]  Filter before enriching (saves credits)
    |   Reject: disposable emails, parked domains, wrong ICP
    v
[Step 1: Primary]    Apollo or ZoomInfo
    |   Fields: name, title, email, company, phone
    v   (missing fields?)
[Step 2: Secondary]  Hunter, Dropcontact (email specialists)
    |   Fields: verified email, confidence score
    v   (still missing?)
[Step 3: Tertiary]   FindyMail, Snov.io (deep search + verify)
    |   Fields: email, phone, LinkedIn URL
    v   (still missing?)
[Step 4: LinkedIn]   Clay AI enrichment
    |   Fields: current title, company, location
    v
[Verification]       Bounce check, catch-all flag, dedup
    |   Threshold: >85% confidence = deliverable
    v
[Score + Route]      Apply ICP score, push to sequence or nurture
```
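The stop-on-first-verified loop is the heart of the flow; a skeletal Python rendering, where the provider functions are hypothetical stubs rather than real SDK calls:

```python
# Sketch: waterfall enrichment - query providers in order, stop per field
# once a verified value lands. Provider stubs are illustrative only.
from typing import Callable, Optional

Provider = Callable[[dict], Optional[str]]

def waterfall(lead: dict, field: str, providers: list[Provider],
              verify: Callable[[str], float], threshold: float = 0.85) -> dict:
    if lead.get(field):
        return lead                      # already filled, spend nothing
    for provider in providers:
        value = provider(lead)           # one credit-bearing call
        if value and verify(value) > threshold:
            lead[field] = value
            break                        # stop: verified result found
    return lead

# Hypothetical stubs standing in for Apollo / Hunter / FindyMail clients.
apollo = lambda lead: None                          # miss
hunter = lambda lead: f"jane@{lead['domain']}"      # hit
findymail = lambda lead: None                       # never reached
confidence = lambda email: 0.93                     # stand-in verifier

lead = waterfall({"domain": "acme.io"}, "email",
                 [apollo, hunter, findymail], confidence)
print(lead)  # {'domain': 'acme.io', 'email': 'jane@acme.io'}
```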
Provider Selection by Use Case
Not every waterfall needs the same providers. Match your stack to your market and budget.
High-volume outbound (1000+ leads/month):
| Step | Provider | Why | Cost Level |
|---|---|---|---|
| 1 | Apollo | Large database, good mid-market coverage | $$ |
| 2 | Hunter | Email pattern matching at scale | $ |
| 3 | FindyMail | Catches emails Apollo and Hunter miss, <2% bounce | $$ |
| 4 | Clay AI | LinkedIn enrichment, custom fields | $$$ |
| Verify | MillionVerifier or ZeroBounce | Bulk verification, cheap per-unit | $ |
Enterprise targeting (under 500 leads/month):
| Step | Provider | Why | Cost Level |
|---|---|---|---|
| 1 | ZoomInfo | Best Fortune 1000 coverage (23% unique contacts) | $$$$ |
| 2 | Clearbit (now Breeze) | Real-time HubSpot enrichment, firmographic depth | $$$ |
| 3 | Dropcontact | GDPR-compliant, algorithm-generated (no database) | $$ |
| 4 | Clay AI | Flexible enrichment + AI agent for custom fields | $$$ |
| Verify | NeverBounce or DeBounce | High-accuracy verification | $ |
Startup / budget-conscious (under 200 leads/month):
| Step | Provider | Why | Cost Level |
|---|---|---|---|
| 1 | Apollo (free tier) | 10K credits/month on free plan | Free |
| 2 | Hunter (free tier) | 25 searches/month free | Free |
| 3 | Snov.io | Affordable at $39/month for 1,000 credits | $ |
| Verify | MillionVerifier | $0.0005/email bulk pricing | $ |
Provider Comparison Matrix
| Provider | Database Size | Email Accuracy | Best For | Pricing (Annual) | GDPR Compliant |
|---|---|---|---|---|---|
| ZoomInfo | 220M+ contacts | 95% (triple-verified) | Enterprise, Fortune 1000 | $10K-$50K | Yes |
| Apollo | 275M+ contacts | 65-80% (varies by region) | Mid-market, high volume | $1.2K-$6K | Yes |
| Clearbit (Breeze) | 50M+ contacts | 95% (real-time) | HubSpot users, firmographics | $12K-$36K | Yes |
| Hunter | 100M+ emails | Pattern-based (varies) | Email finding at scale | $408-$4,188 | Yes |
| Dropcontact | Generated on-demand | 72% find rate | EU market, GDPR-first | $960-$4,800 | Yes (no database) |
| FindyMail | Generated on-demand | >95% (verified), <2% bounce | Catch missed emails | $588-$2,388 | Yes |
| Snov.io | 60M+ contacts | 7-tier verification | Budget outbound | $468-$2,988 | Yes |
| Bombora | N/A (intent only) | N/A | Intent data, account targeting | $25K-$100K+ | Yes |
Incremental Coverage by Waterfall Step
Typical coverage gains when adding each provider in sequence:
```
Step 1 (Apollo):      |========================         |  ~60% coverage
Step 2 (+Hunter):     |============================     |  ~75% coverage
Step 3 (+FindyMail):  |===============================  |  ~87% coverage
Step 4 (+Clay AI):    |=================================|  ~92% coverage
After verification:   |==============================   |  ~85% verified
```
The drop after verification is expected. Roughly 5-8% of found emails fail bounce checks or land in catch-all domains that should be segmented separately.
Imported: Section 4: Contact Verification Pipeline
Unverified cold email lists carry 10-30% invalid addresses. Sending to bad addresses destroys sender reputation within a few campaigns. Google, Yahoo, and Microsoft now enforce bounce rates under 2% and spam complaints under 0.3%.
Verification Pipeline Steps
| Step | Check | Action | Cost |
|---|---|---|---|
| 1 | Syntax validation | Remove malformed addresses (missing @, double dots) | Free |
| 2 | DNS/MX lookup | Verify domain has valid mail server | Free |
| 3 | SMTP verification | Confirm mailbox exists at provider | Provider-based |
| 4 | Catch-all detection | Flag domains that accept all addresses | Provider-based |
| 5 | Role account check | Flag info@, support@, admin@, sales@ | Provider-based |
| 6 | Confidence scoring | Assign final deliverability score | Computed |
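Steps 1 and 2 are free and worth running locally before paying a provider for steps 3 through 6; a minimal sketch, assuming the third-party dnspython package for the MX lookup:

```python
# Sketch: free pipeline steps - syntax validation and DNS/MX lookup.
# SMTP, catch-all, and role checks are left to a verification provider.
import re
import dns.resolver  # assumption: dnspython is installed

SYNTAX = re.compile(r"^[^@\s]+@[^@\s.]+(\.[^@\s.]+)+$")

def passes_syntax(email: str) -> bool:
    # Reject malformed addresses: missing @, double dots, stray spaces.
    return bool(SYNTAX.match(email)) and ".." not in email

def has_mx(domain: str) -> bool:
    # Verify the domain advertises a mail server.
    try:
        return len(dns.resolver.resolve(domain, "MX")) > 0
    except Exception:
        return False

email = "jane@acme.io"
if passes_syntax(email) and has_mx(email.split("@")[1]):
    print("forward to SMTP verification")   # steps 3-6 via provider
else:
    print("reject before spending credits")
```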
Confidence Score Thresholds
| Confidence | Classification | Action |
|---|---|---|
| >0.85 | Deliverable | Safe to send. Include in sequences. |
| 0.70-0.85 | Risky | Send in small batches. Monitor bounce rate per batch. |
| 0.50-0.69 | Catch-all/Unverifiable | Segment separately. Maximum 50 per day. Watch closely. |
| <0.50 | Invalid/High Risk | Reject. Do not send. Re-enrich with alternate provider. |
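The thresholds translate directly into a routing function; a minimal sketch:

```python
# Sketch: route an address by verification confidence, per the table above.
def classify(confidence: float) -> str:
    if confidence > 0.85:
        return "deliverable"      # safe for sequences
    if confidence >= 0.70:
        return "risky"            # small batches, watch bounce rate
    if confidence >= 0.50:
        return "catch_all"        # separate segment, max 50/day
    return "invalid"              # reject, re-enrich with another provider

for c in (0.93, 0.78, 0.55, 0.31):
    print(c, classify(c))
```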
Catch-All Domain Handling
Catch-all domains accept every email sent to them, even addresses that do not exist. They create silent deliverability decay because campaigns appear sent but never reach decision-makers.
Rules for catch-all addresses:
- Never mix catch-all addresses into your primary sending pool
- Send catch-all segments from a separate sending domain
- Limit to 20-50 catch-all sends per domain per day
- Track reply rates separately; if reply rate drops below 1%, stop sending to that domain
- Re-verify catch-all addresses every 30 days
Verification Tool Comparison
| Tool | Verification Method | Catch-All Detection | Bulk Speed | Pricing |
|---|---|---|---|---|
| MillionVerifier | SMTP + proprietary | Yes | 1M/hour | $0.0005/email |
| ZeroBounce | SMTP + AI scoring | Yes | 100K/hour | $0.008/email |
| NeverBounce | SMTP + real-time API | Yes | 50K/hour | $0.008/email |
| DeBounce | SMTP + disposable detect | Yes | 500K/hour | $0.001/email |
| Bouncer | SMTP + toxicity check | Yes | 200K/hour | $0.005/email |
Deliverability Protection Checklist
Before sending any enriched list to outreach:
- All emails verified within the last 7 days
- Bounce rate on verification under 2%
- Catch-all addresses segmented into separate pool
- Role accounts (info@, support@) removed or deprioritized
- Sending domain has SPF, DKIM, and DMARC configured
- Sending domain warmed for at least 14 days
- Daily send volume does not exceed 50 per inbox per day (cold)
- Spam complaint rate on prior campaigns under 0.3%
Imported: Section 5: Performance Benchmarks
Expected Conversion Lift from Enrichment
| Metric | Before Waterfall | After Waterfall | Improvement |
|---|---|---|---|
| Email coverage rate | 55-65% | 85-95% | +30-40% |
| Email bounce rate | 7-15% | <2% (verified) | -70-85% |
| Connect rate (cold call) | 4-6% | 8-12% | +80-100% |
| Pipeline generated | Baseline | +37% | Significant |
| Meeting-to-customer conversion | Baseline | +27% | Significant |
| MQL-to-SQL rate (with intent) | 8-12% | 15-25% | +80-100% |
Cost-Per-Verified-Lead Benchmarks
| Approach | Cost Per Lead | Coverage | Quality |
|---|---|---|---|
| Single provider (Apollo) | $0.05-$0.15 | 60% | Medium |
| Two-step waterfall | $0.15-$0.35 | 78% | Medium-High |
| Three-step waterfall | $0.30-$0.60 | 88% | High |
| Full waterfall + verification | $0.50-$1.00 | 92% verified | Very High |
| Full waterfall + intent scoring | $1.50-$3.00 | 92% + scored | Premium |
ROI Calculation Framework
```
Cost: Clay Pro ($800) + Apollo ($99) + FindyMail ($49) + MillionVerifier ($25) = $973/mo
Yield: 2,000 enriched > 1,840 verified (92%) > 1,012 ICP-qualified (55%) > 30 meetings (3%)
       > 12 opps (40%) > 3 closed-won (25%) at $15K ACV = $45K/mo
ROI: $45,000 / $973 = 46x
```
Adjust conversion rates for your actual pipeline. The framework matters more than the sample numbers.
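To rerun the math with your own numbers, the funnel is a few lines of Python; the stage rates below are the sample figures above, not benchmarks:

```python
# Sketch: the ROI framework as a stage-by-stage funnel.
monthly_cost = 800 + 99 + 49 + 25   # Clay Pro + Apollo + FindyMail + MillionVerifier
enriched  = 2_000
verified  = int(enriched * 0.92)    # 1,840
qualified = int(verified * 0.55)    # 1,012
meetings  = int(qualified * 0.03)   # 30
opps      = int(meetings * 0.40)    # 12
closed    = int(opps * 0.25)        # 3
revenue   = closed * 15_000         # $45,000 at $15K ACV
print(f"${revenue:,} / ${monthly_cost} = {revenue / monthly_cost:.0f}x ROI")  # 46x
```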
Imported: Section 6: Compliance
Compliance by Region
| Requirement | US (CAN-SPAM/CCPA) | EU (GDPR) | UK (UK GDPR) |
|---|---|---|---|
| B2B email consent | Opt-out model | Legitimate interest | Legitimate interest |
| Data source docs | Recommended | Required | Required |
| Right to erasure | CCPA: Yes | Required | Required |
| Data retention | Disclosure required | Define and enforce | Define and enforce |
Provider Notes
- Dropcontact generates contacts algorithmically without a database (GDPR-native)
- Apollo, ZoomInfo, Clearbit are compliant as platforms; you own your usage basis
- Clay is compliant, but third-party providers accessed through Clay may not be. Verify each.
- Bombora cooperative data is compliant; downstream outreach must follow local regulations
Safe Enrichment Practices
- Document your legal basis (legitimate interest for B2B is standard)
- Track which provider sourced each contact
- Honor opt-out and erasure requests within 30 days
- Do not enrich or contact individuals who have previously opted out
- Review provider DPAs annually