Awesome-omni-skills lead-enrichment

Lead Enrichment workflow skill. Use this skill when the user wants to build data enrichment workflows, score leads against ICP, set up Clay waterfalls, or improve contact data quality. Also use it when the user mentions 'enrichment,' 'data enrichment,' 'Clay,' 'waterfall enrichment,' 'ICP scoring,' 'lead scoring,' 'intent data,' 'contact verification,' 'Apollo,' 'ZoomInfo,' or 'data quality.' The skill covers lead enrichment waterfalls, ICP scoring frameworks, and contact verification systems. Do NOT use it for technical implementation, code review, or software architecture. The operator should preserve the upstream workflow, copied support files, and provenance before merging or handing off.

install
source · Clone the upstream repo
git clone https://github.com/diegosouzapw/awesome-omni-skills
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/diegosouzapw/awesome-omni-skills "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/lead-enrichment" ~/.claude/skills/diegosouzapw-awesome-omni-skills-lead-enrichment && rm -rf "$T"
manifest: skills/lead-enrichment/SKILL.md
source content

Lead Enrichment Skill

Overview

This public intake copy packages packages/skills-catalog/skills/(gtm)/lead-enrichment from https://github.com/tech-leads-club/agent-skills into the native Omni Skills editorial shape without hiding its origin.

Use it when the operator needs the upstream workflow, support files, and repository context to stay intact while the public validator and private enhancer continue their normal downstream flow.

This intake keeps the copied upstream files intact and uses metadata.json plus ORIGIN.md as the provenance anchor for review.

You are a B2B data enrichment architect. You build waterfall enrichment systems, ICP scoring frameworks, and contact verification pipelines that maximize coverage while minimizing cost per verified lead. You know the provider landscape cold and design workflows that sequence providers for maximum incremental yield.

Imported source sections that did not map cleanly to the public headings are still preserved below or in the support files. Notable imported sections: Before Starting, Section 1: ICP Scoring Framework, Section 2: Enrichment Waterfall Architecture, Section 4: Contact Verification Pipeline, Section 5: Performance Benchmarks, Section 6: Compliance.

When to Use This Skill

Use this section as the trigger filter. It should make the activation boundary explicit before the operator loads files, runs commands, or opens a pull request.

  • Use when the request clearly matches the imported source intent: the user wants to build data enrichment workflows, score leads against ICP, set up Clay waterfalls, or improve contact data quality, or mentions 'enrichment,' 'data enrichment,' 'Clay,'....
  • Use when the operator should preserve upstream workflow detail instead of rewriting the process from scratch.
  • Use when provenance needs to stay visible in the answer, PR, or review packet.
  • Use when copied upstream references, examples, or scripts materially improve the answer.
  • Use when the workflow should remain reviewable in the public intake repo before the private enhancer takes over.

Operating Table

| Situation | Start here | Why it matters |
| --- | --- | --- |
| First-time use | metadata.json | Confirms repository, branch, commit, and imported path before touching the copied workflow |
| Provenance review | ORIGIN.md | Gives reviewers a plain-language audit trail for the imported source |
| Workflow execution | references/quick-reference.md | Starts with the smallest copied file that materially changes execution |
| Supporting context | references/quick-reference.md | Adds the next most relevant copied source file without loading the entire package |
| Handoff decision | ## Related Skills | Helps the operator switch to a stronger native skill when the task drifts |

Workflow

This workflow is intentionally editorial and operational at the same time. It keeps the imported source useful to the operator while still satisfying the public intake standards that feed the downstream enhancer flow.

  1. Table - import your lead list via CSV, CRM sync, or API
  2. Enrichment Column - call a provider to fill a specific field
  3. Waterfall Column - try multiple providers in sequence for one field
  4. AI Column - use GPT/Claude to derive insights from other columns
  5. Formula Column - compute values from other columns (like ICP score)
  6. Integration Push - send enriched data to CRM, sequencer, or webhook

Imported Workflow Notes

Imported: Section 3: Clay Workflow Design

Clay Architecture Basics

Clay operates on a table-based model. Each row is a lead. Each column is a data field. Enrichment steps run left-to-right across columns, with waterfalls configured per field.

Core Clay concepts:

| Concept | What It Does |
| --- | --- |
| Table | Your lead list - imported via CSV, CRM sync, or API |
| Enrichment Column | Calls a provider to fill a specific field |
| Waterfall Column | Tries multiple providers in sequence for one field |
| AI Column | Uses GPT/Claude to derive insights from other columns |
| Formula Column | Computes values from other columns (like ICP score) |
| Integration Push | Sends enriched data to CRM, sequencer, or webhook |

Credit Consumption Guide

Clay charges credits per enrichment action. Budget carefully.

| Action Type | Credits Per Row | Example |
| --- | --- | --- |
| Basic enrichment (1 provider) | 4-10 | Email lookup, job title |
| Waterfall enrichment (3 providers) | 12-30 | Email waterfall with fallbacks |
| AI/GPT column | 10-25 | Persona summary, pain point extraction |
| Multi-step automation | 30+ | Full enrichment + scoring + routing |

Credit math: 1,000 leads at 25 credits/lead = 25,000 credits. Starter plan handles that in 12.5 months, Explorer in 2.5 months, Pro in 0.5 months. Pre-filter aggressively to avoid burning credits on unqualified leads.
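
A minimal Python sketch of the same credit math, using the plan allowances from the pricing table below:

```python
# Rough Clay credit budgeting: leads * credits-per-lead vs. monthly plan allowance.
PLANS = {"Starter": 2_000, "Explorer": 10_000, "Pro": 50_000}  # credits per month

def months_to_enrich(leads: int, credits_per_lead: int, plan: str) -> float:
    """Months of a plan's allowance consumed by one enrichment run."""
    return leads * credits_per_lead / PLANS[plan]

for plan in PLANS:
    print(f"{plan}: {months_to_enrich(1_000, 25, plan):.1f} months")
# Starter: 12.5 months, Explorer: 2.5 months, Pro: 0.5 months
```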

Clay Pricing (2026)

| Plan | Price/Mo | Credits/Mo | Per Credit |
| --- | --- | --- | --- |
| Free | $0 | 100 | N/A |
| Starter | $149 | 2,000 | $0.075 |
| Explorer | $349 | 10,000 | $0.035 |
| Pro | $800 | 50,000 | $0.016 |
| Enterprise | Custom | Custom | Custom |

Sample Clay Table Structure

Build your enrichment workflow in this column order:

Col A: Company Domain        (input)
Col B: Contact Name          (input or enrichment)
Col C: LinkedIn URL          (Apollo waterfall)
Col D: Verified Email        (email waterfall: Apollo > Hunter > FindyMail)
Col E: Job Title             (Apollo or ZoomInfo)
Col F: Employee Count        (Clearbit or Clay built-in)
Col G: Industry              (Clearbit or Clay built-in)
Col H: Tech Stack            (BuiltWith via Clay)
Col I: Bombora Surge Score   (Bombora integration or manual import)
Col J: Firmographic Score    (Formula: weighted average of F, G, geography)
Col K: Technographic Score   (Formula: based on H match rules)
Col L: Intent Score          (Formula: based on I + hiring + funding signals)
Col M: ICP Score             (Formula: J*0.30 + K*0.30 + L*0.40)
Col N: AI Personalization    (AI column: generate first-line based on B, E, H)
Col O: Routing               (Formula: if M > 85 then "hot" elif M > 70 then "warm")

Credit Governance Rules

  1. Pre-qualify before enriching - domain check + firmographic filter before spending on email waterfall
  2. Cap per campaign - no single campaign burns more than 40% of monthly credits
  3. Alert at 75% - Slack/email alert when usage crosses 75% of monthly allowance (see the sketch after this list)
  4. Audit weekly - credits spent vs. leads enriched vs. leads qualified (target >60% qualification)
  5. 90-day re-enrichment - re-enrich stale contacts before including in new campaigns
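
A minimal sketch of rules 2 and 3; the allowance figure is an assumption (Clay Pro), and the alert delivery itself (Slack/email) is left out:

```python
# Hypothetical credit-governance checks for rules 2 and 3 above.
MONTHLY_ALLOWANCE = 50_000  # assumption: Clay Pro; use your plan's figure

def campaign_within_cap(campaign_credits: int, cap: float = 0.40) -> bool:
    """Rule 2: no single campaign may burn more than 40% of monthly credits."""
    return campaign_credits <= MONTHLY_ALLOWANCE * cap

def should_alert(credits_used: int, threshold: float = 0.75) -> bool:
    """Rule 3: trigger the Slack/email alert at 75% of the allowance."""
    return credits_used >= MONTHLY_ALLOWANCE * threshold
```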

Imported: Before Starting

Confirm with the user: (1) target ICP - industry, company size, geography, persona; (2) current stack - CRM, enrichment tools, outreach platforms; (3) data gaps - which fields are missing or unreliable; (4) volume - leads per month; (5) budget - optimizing for coverage or cost.

If the user provides a draft workflow or existing Clay table, analyze it before suggesting changes.


Examples

Example 1: Ask for the upstream workflow directly

Use @lead-enrichment to handle <task>. Start from the copied upstream workflow, load only the files that change the outcome, and keep provenance visible in the answer.

Explanation: This is the safest starting point when the operator needs the imported workflow, but not the entire repository.

Example 2: Ask for a provenance-grounded review

Review @lead-enrichment against metadata.json and ORIGIN.md, then explain which copied upstream files you would load first and why.

Explanation: Use this before review or troubleshooting when you need a precise, auditable explanation of origin and file selection.

Example 3: Narrow the copied support files before execution

Use @lead-enrichment for <task>. Load only the copied references, examples, or scripts that change the outcome, and name the files explicitly before proceeding.

Explanation: This keeps the skill aligned with progressive disclosure instead of loading the whole copied package by default.

Example 4: Build a reviewer packet

Review @lead-enrichment using the copied upstream files plus provenance, then summarize any gaps before merge.

Explanation: This is useful when the PR is waiting for human review and you want a repeatable audit packet.

Imported Usage Notes

Imported: Examples

  • User says: "Set up lead enrichment for our outbound" → Result: Agent asks budget and volume; recommends waterfall tier (e.g. Clay + Apollo for $200–1K/mo); outlines steps: import → pre-filter → waterfall → verify (confidence >0.85) → score → route to SDR/sequence; suggests CRM push and 90-day re-enrich.
  • User says: "Our email bounce rate is high" → Result: Agent checks verification (MillionVerifier, NeverBounce) and confidence threshold; recommends catch-all segment and list hygiene; suggests <2% bounce target and re-verification before each campaign.
  • User says: "Which enrichment tools should we use?" → Result: Agent uses Quick Reference budget tiers; maps providers (Apollo, Clay, ZoomInfo, Clearbit, etc.); recommends primary/secondary/tertiary order and when to add intent (Bombora, G2).

Best Practices

Treat the generated public skill as a reviewable packaging layer around the upstream repository. The goal is to keep provenance explicit and load only the copied source material that materially improves execution.

  • Keep the imported skill grounded in the upstream repository; do not invent steps that the source material cannot support.
  • Prefer the smallest useful set of support files so the workflow stays auditable and fast to review.
  • Keep provenance, source commit, and imported file paths visible in notes and PR descriptions.
  • Point directly at the copied upstream files that justify the workflow instead of relying on generic review boilerplate.
  • Treat generated examples as scaffolding; adapt them to the concrete task before execution.
  • Route to a stronger native skill when architecture, debugging, design, or security concerns become dominant.

Troubleshooting

Problem: The operator skipped the imported context and answered too generically

Symptoms: The result ignores the upstream workflow in packages/skills-catalog/skills/(gtm)/lead-enrichment, fails to mention provenance, or does not use any copied source files at all.

Solution: Re-open metadata.json, ORIGIN.md, and the most relevant copied upstream files. Load only the files that materially change the answer, then restate the provenance before continuing.

Problem: The imported workflow feels incomplete during review

Symptoms: Reviewers can see the generated SKILL.md, but they cannot quickly tell which references, examples, or scripts matter for the current task.

Solution: Point at the exact copied references, examples, scripts, or assets that justify the path you took. If the gap is still real, record it in the PR instead of hiding it.

Problem: The task drifted into a different specialization

Symptoms: The imported skill starts in the right place, but the work turns into debugging, architecture, design, security, or release orchestration that a native skill handles better. Solution: Use the related skills section to hand off deliberately. Keep the imported provenance visible so the next skill inherits the right context instead of starting blind.

Imported Troubleshooting Notes

Imported: Troubleshooting

  • Low email coverage after waterfall. Cause: Weak providers or wrong order. Fix: Put best provider first; add LinkedIn/FindyMail as fallback; target >85% coverage; track per-provider fill rate.
  • ICP score not predicting meetings. Cause: Wrong weights or stale data. Fix: Recalibrate firmographic/technographic/behavioral weights; ensure intent signals are fresh; A/B test score bands (e.g. >85 hot, 70–84 warm).
  • Credits burning too fast. Cause: Enriching everyone or wrong filters. Fix: Pre-filter by domain, industry, geo; set confidence threshold (e.g. 0.85 outreach, 0.50 nurture); cap credits per qualified lead (<50).

For checklists, benchmarks, and discovery questions, read references/quick-reference.md when you need the detailed reference material.


Related Skills

  • @accessibility
    - Use when the work is better handled by that native specialization after this imported skill establishes context.
  • @ai-cold-outreach
    - Use when the work is better handled by that native specialization after this imported skill establishes context.
  • @ai-pricing
    - Use when the work is better handled by that native specialization after this imported skill establishes context.
  • @ai-sdr
    - Use when the work is better handled by that native specialization after this imported skill establishes context.

Additional Resources

Use this support matrix and the linked files below as the operator packet for this imported skill. They should reflect real copied source material, not generic scaffolding.

| Resource family | What it gives the reviewer | Example path |
| --- | --- | --- |
| references | copied reference notes, guides, or background material from upstream | references/quick-reference.md |
| examples | worked examples or reusable prompts copied from upstream | examples/n/a |
| scripts | upstream helper scripts that change execution or validation | scripts/n/a |
| agents | routing or delegation notes that are genuinely part of the imported package | agents/n/a |
| assets | supporting assets or schemas copied from the source package | assets/n/a |

Imported Reference Notes

Imported: Section 1: ICP Scoring Framework

The Three Signal Layers

Every ICP score pulls from three distinct signal categories. Each layer answers a different question about whether to pursue an account.

| Signal Layer | What It Tells You | Key Data Points | Primary Tools |
| --- | --- | --- | --- |
| Firmographic | "Does this company match our sweet spot?" | Employee count, ARR, industry, HQ location, funding stage | Clay, Apollo, ZoomInfo, Clearbit |
| Technographic | "Do they use tools that signal fit?" | Tech stack, CRM, marketing automation, cloud infra | BuiltWith, Wappalyzer, HG Insights |
| Intent | "Are they actively looking right now?" | Content consumption, G2 visits, job postings, funding events | Bombora, G2 Buyer Intent, Clay signals |

ICP Scoring Formula

ICP Score = (Firmographic Fit x 0.30) + (Technographic Fit x 0.30) + (Intent Score x 0.40)

Weight intent highest because timing beats targeting. A perfect-fit company with zero buying intent converts worse than a decent-fit company actively researching solutions.
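
A minimal sketch of the composite formula, assuming the three layer scores are already computed on a 0-100 scale:

```python
def icp_score(firmographic: float, technographic: float, intent: float) -> float:
    """Composite ICP score; intent carries the largest weight because timing beats targeting."""
    return firmographic * 0.30 + technographic * 0.30 + intent * 0.40

icp_score(80, 70, 90)  # -> 81.0, a warm lead under the bands later in this section
```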

Firmographic Fit Scoring (0-100)

Score each firmographic dimension, then average:

| Dimension | 100 (Ideal) | 75 (Strong) | 50 (Acceptable) | 25 (Stretch) | 0 (Disqualify) |
| --- | --- | --- | --- | --- | --- |
| Employee Count | 50-200 | 200-500 | 20-50 or 500-1000 | 10-20 or 1000-2000 | <10 or >2000 |
| Annual Revenue | $5M-$50M | $50M-$100M | $1M-$5M | $100M-$500M | <$1M or >$500M |
| Industry | SaaS B2B | Fintech, Healthtech | Professional Services | Retail, Media | Government, Education |
| Geography | US, UK, CA | DACH, Nordics | ANZ, Benelux | LATAM, SEA | Sanctioned regions |
| Funding Stage | Series A-B | Series C | Seed, Series D+ | Pre-seed | No data |

Adjust the ranges to your actual closed-won customer profile. Pull ranges from your CRM data, not assumptions.

Technographic Fit Scoring (0-100)

Score based on tech stack signals that indicate readiness for your product:

Tech_Score = (Stack_Match x 0.50) + (Complexity_Signal x 0.30) + (Migration_Signal x 0.20)

Stack Match (0-100): Does their current tooling create a natural integration or replacement opportunity?

| Signal | Score |
| --- | --- |
| Uses your direct integration partner | 100 |
| Uses a competitor you commonly displace | 85 |
| Uses adjacent tooling in your category | 60 |
| Generic/unknown stack | 30 |
| Uses a tool that blocks adoption | 0 |

Complexity Signal (0-100): Does their tech footprint suggest they can absorb your product?

| Signal | Score |
| --- | --- |
| 3-5 tools in your category (consolidation ready) | 100 |
| Running modern cloud infra + APIs | 80 |
| 1-2 tools, clear gap | 60 |
| Legacy on-prem heavy | 30 |
| No detectable tech presence | 10 |

Migration Signal (0-100): Are they showing signs of switching?

| Signal | Score |
| --- | --- |
| Job posting for role that owns your category | 100 |
| Removed a competitor from their stack (BuiltWith delta) | 90 |
| Recently adopted adjacent tool | 75 |
| Stable stack, no changes in 12 months | 20 |
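
A minimal sketch of the Tech_Score formula, with the three sub-scores taken from the lookup tables above (signal detection itself is out of scope here):

```python
def tech_score(stack_match: float, complexity: float, migration: float) -> float:
    """Tech_Score = Stack_Match*0.50 + Complexity*0.30 + Migration*0.20."""
    return stack_match * 0.50 + complexity * 0.30 + migration * 0.20

# Displaceable competitor (85), modern cloud infra (80), adjacent tool adopted (75):
tech_score(85, 80, 75)  # -> 81.5
```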

Intent Score Calculation (0-100)

Intent scoring requires combining multiple signal sources. No single provider captures the full picture.

Intent_Score = max(Bombora_Surge, G2_Intent, First_Party) x 0.60
             + Hiring_Signal x 0.20
             + Funding_Signal x 0.20

Bombora Company Surge scoring:

| Surge Score | Interpretation | Lead Priority |
| --- | --- | --- |
| 80-100 | Heavy active research across multiple topics | Route to SDR within 24 hours |
| 60-79 | Moderate research, early buying cycle | Add to nurture + monitor |
| 40-59 | Light research, could be noise | Score with other signals before acting |
| Below 40 | No meaningful surge detected | Do not prioritize |

G2 Buyer Intent signals:

| Signal Type | Weight | Why It Matters |
| --- | --- | --- |
| Visited your G2 profile | High | Direct purchase consideration |
| Compared you vs. competitor | Very High | Active evaluation stage |
| Visited category page | Medium | Early research phase |
| Read reviews in your category | Medium-High | Validation stage |

First-party intent signals (your own data):

| Signal | Score Boost |
| --- | --- |
| Pricing page visit (2+ times) | +30 |
| Demo page visit without booking | +25 |
| Downloaded gated content | +15 |
| Blog visit (3+ pages, single session) | +10 |
| Email opened but no click | +5 |
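
A minimal sketch of the intent formula, assuming every input is already normalized to 0-100; capping the stacked first-party boosts at 100 is our assumption, not upstream guidance:

```python
def intent_score(bombora: float, g2: float, first_party_boosts: list[float],
                 hiring: float, funding: float) -> float:
    """Strongest research signal * 0.60 + hiring * 0.20 + funding * 0.20."""
    first_party = min(sum(first_party_boosts), 100)  # assumption: cap stacked boosts
    strongest = max(bombora, g2, first_party)
    return strongest * 0.60 + hiring * 0.20 + funding * 0.20

# Surge of 72, two pricing-page visits (+30) plus gated content (+15), open req:
intent_score(72, 0, [30, 15], hiring=100, funding=0)  # -> 63.2
```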

Composite Score Interpretation

| ICP Score Range | Action | SLA |
| --- | --- | --- |
| 85-100 | Hot lead - immediate SDR outreach | Contact within 4 hours |
| 70-84 | Warm lead - prioritized sequence | Enroll within 24 hours |
| 50-69 | Nurture - automated drip | Weekly content touches |
| 30-49 | Monitor - check quarterly | Re-score monthly |
| Below 30 | Disqualify - do not pursue | Archive, re-evaluate in 6 months |
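
A minimal sketch of routing by band, mirroring the table above:

```python
def route(icp_score: float) -> str:
    """Apply the action/SLA bands from the composite interpretation table."""
    if icp_score >= 85:
        return "hot: immediate SDR outreach, contact within 4 hours"
    if icp_score >= 70:
        return "warm: prioritized sequence, enroll within 24 hours"
    if icp_score >= 50:
        return "nurture: automated drip, weekly content touches"
    if icp_score >= 30:
        return "monitor: re-score monthly"
    return "disqualify: archive, re-evaluate in 6 months"
```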

Imported: Section 2: Enrichment Waterfall Architecture

What a Waterfall Does

A waterfall enrichment system queries multiple data providers in sequence. Each provider gets a chance to fill missing fields. The system stops querying for a field once a provider returns a verified result.

Single-provider enrichment typically yields 55-65% coverage. A well-built waterfall pushes coverage to 85-95% by stacking complementary providers.
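
A minimal sketch of that sequencing logic; the provider clients (apollo, hunter, findymail) and their enrich method are hypothetical stand-ins, and the point is the early stop per field:

```python
def waterfall_enrich(lead: dict, providers: list, fields: set) -> dict:
    """Query providers best-first; stop asking for a field once it is filled."""
    missing = {f for f in fields if not lead.get(f)}
    for provider in providers:          # e.g. [apollo, hunter, findymail]
        if not missing:
            break                       # everything filled: stop spending credits
        found = provider.enrich(lead, missing) or {}   # hypothetical interface
        for field, value in found.items():
            if value:
                lead[field] = value
                missing.discard(field)  # a filled field is never re-queried
    return lead
```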

Waterfall Flow

Input Lead
  |
  v
[Pre-qualification]  Filter before enriching (saves credits)
  |                   Reject: disposable emails, parked domains, wrong ICP
  v
[Step 1: Primary]    Apollo or ZoomInfo
  |                   Fields: name, title, email, company, phone
  v (missing fields?)
[Step 2: Secondary]  Hunter, Dropcontact (email specialists)
  |                   Fields: verified email, confidence score
  v (still missing?)
[Step 3: Tertiary]   FindyMail, Snov.io (deep search + verify)
  |                   Fields: email, phone, LinkedIn URL
  v (still missing?)
[Step 4: LinkedIn]   Clay AI enrichment
  |                   Fields: current title, company, location
  v
[Verification]       Bounce check, catch-all flag, dedup
  |                   Threshold: >85% confidence = deliverable
  v
[Score + Route]      Apply ICP score, push to sequence or nurture

Provider Selection by Use Case

Not every waterfall needs the same providers. Match your stack to your market and budget.

High-volume outbound (1000+ leads/month):

| Step | Provider | Why | Cost Level |
| --- | --- | --- | --- |
| 1 | Apollo | Large database, good mid-market coverage | $$ |
| 2 | Hunter | Email pattern matching at scale | $ |
| 3 | FindyMail | Catches emails Apollo and Hunter miss, <2% bounce | $$ |
| 4 | Clay AI | LinkedIn enrichment, custom fields | $$$ |
| Verify | MillionVerifier or ZeroBounce | Bulk verification, cheap per-unit | $ |

Enterprise targeting (under 500 leads/month):

| Step | Provider | Why | Cost Level |
| --- | --- | --- | --- |
| 1 | ZoomInfo | Best Fortune 1000 coverage (23% unique contacts) | $$$$ |
| 2 | Clearbit (now Breeze) | Real-time HubSpot enrichment, firmographic depth | $$$ |
| 3 | Dropcontact | GDPR-compliant, algorithm-generated (no database) | $$ |
| 4 | Clay AI | Flexible enrichment + AI agent for custom fields | $$$ |
| Verify | NeverBounce or DeBounce | High-accuracy verification | $ |

Startup / budget-conscious (under 200 leads/month):

| Step | Provider | Why | Cost Level |
| --- | --- | --- | --- |
| 1 | Apollo (free tier) | 10K credits/month on free plan | Free |
| 2 | Hunter (free tier) | 25 searches/month free | Free |
| 3 | Snov.io | Affordable at $39/month for 1,000 credits | $ |
| Verify | MillionVerifier | $0.0005/email bulk pricing | $ |

Provider Comparison Matrix

| Provider | Database Size | Email Accuracy | Best For | Pricing (Annual) | GDPR Compliant |
| --- | --- | --- | --- | --- | --- |
| ZoomInfo | 220M+ contacts | 95% (triple-verified) | Enterprise, Fortune 1000 | $10K-$50K | Yes |
| Apollo | 275M+ contacts | 65-80% (varies by region) | Mid-market, high volume | $1.2K-$6K | Yes |
| Clearbit (Breeze) | 50M+ contacts | 95% (real-time) | HubSpot users, firmographics | $12K-$36K | Yes |
| Hunter | 100M+ emails | Pattern-based (varies) | Email finding at scale | $408-$4,188 | Yes |
| Dropcontact | Generated on-demand | 72% find rate | EU market, GDPR-first | $960-$4,800 | Yes (no database) |
| FindyMail | Generated on-demand | >95% (verified), <2% bounce | Catch missed emails | $588-$2,388 | Yes |
| Snov.io | 60M+ contacts | 7-tier verification | Budget outbound | $468-$2,988 | Yes |
| Bombora | N/A (intent only) | N/A | Intent data, account targeting | $25K-$100K+ | Yes |

Incremental Coverage by Waterfall Step

Typical coverage gains when adding each provider in sequence:

Step 1 (Apollo):      |========================          |  ~60% coverage
Step 2 (+Hunter):     |============================     |  ~75% coverage
Step 3 (+FindyMail):  |===============================  |  ~87% coverage
Step 4 (+Clay AI):    |=================================|  ~92% coverage
After verification:   |==============================   |  ~85% verified

The drop after verification is expected. Roughly 5-8% of found emails fail bounce checks or land in catch-all domains that should be segmented separately.


Imported: Section 4: Contact Verification Pipeline

Unverified cold email lists carry 10-30% invalid addresses. Sending to bad addresses destroys sender reputation within a few campaigns. Google, Yahoo, and Microsoft now enforce bounce rates under 2% and spam complaints under 0.3%.

Verification Pipeline Steps

| Step | Check | Action | Cost |
| --- | --- | --- | --- |
| 1 | Syntax validation | Remove malformed addresses (missing @, double dots) | Free |
| 2 | DNS/MX lookup | Verify domain has valid mail server | Free |
| 3 | SMTP verification | Confirm mailbox exists at provider | Provider-based |
| 4 | Catch-all detection | Flag domains that accept all addresses | Provider-based |
| 5 | Role account check | Flag info@, support@, admin@, sales@ | Provider-based |
| 6 | Confidence scoring | Assign final deliverability score | Computed |
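
A minimal sketch of the two free steps (syntax, then DNS/MX), assuming the third-party dnspython package; SMTP, catch-all, and role checks are best left to the verification providers compared below:

```python
import re

import dns.resolver  # third-party: pip install dnspython

# Rejects missing @, leading/trailing dots, and double dots in either part.
EMAIL_RE = re.compile(r"^[^@\s.]+(\.[^@\s.]+)*@[^@\s.]+(\.[^@\s.]+)+$")

def passes_free_checks(email: str) -> bool:
    """Steps 1-2 of the pipeline: syntax validation, then DNS/MX lookup."""
    if not EMAIL_RE.match(email):
        return False
    domain = email.rsplit("@", 1)[1]
    try:
        dns.resolver.resolve(domain, "MX")  # raises if the domain has no mail server
        return True
    except Exception:  # NXDOMAIN, no answer, timeout, ...
        return False
```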

Confidence Score Thresholds

| Confidence | Classification | Action |
| --- | --- | --- |
| >0.85 | Deliverable | Safe to send. Include in sequences. |
| 0.70-0.85 | Risky | Send in small batches. Monitor bounce rate per batch. |
| 0.50-0.69 | Catch-all/Unverifiable | Segment separately. Maximum 50 per day. Watch closely. |
| <0.50 | Invalid/High Risk | Reject. Do not send. Re-enrich with alternate provider. |
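
A minimal sketch that folds the confidence bands together with the catch-all flag; the catch-all rules are covered in the next subsection:

```python
def sending_pool(confidence: float, is_catch_all: bool) -> str:
    """Map verification output to a sending pool, per the thresholds above."""
    if is_catch_all or 0.50 <= confidence < 0.70:
        return "catch-all: separate sending domain, max 50/day"
    if confidence > 0.85:
        return "deliverable: safe for sequences"
    if confidence >= 0.70:
        return "risky: small batches, monitor bounce rate"
    return "invalid: reject and re-enrich with an alternate provider"
```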

Catch-All Domain Handling

Catch-all domains accept every email sent to them, even addresses that do not exist. They create silent deliverability decay because campaigns appear sent but never reach decision-makers.

Rules for catch-all addresses:

  • Never mix catch-all addresses into your primary sending pool
  • Send catch-all segments from a separate sending domain
  • Limit to 20-50 catch-all sends per domain per day
  • Track reply rates separately; if reply rate drops below 1%, stop sending to that domain
  • Re-verify catch-all addresses every 30 days

Verification Tool Comparison

| Tool | Verification Method | Catch-All Detection | Bulk Speed | Pricing |
| --- | --- | --- | --- | --- |
| MillionVerifier | SMTP + proprietary | Yes | 1M/hour | $0.0005/email |
| ZeroBounce | SMTP + AI scoring | Yes | 100K/hour | $0.008/email |
| NeverBounce | SMTP + real-time API | Yes | 50K/hour | $0.008/email |
| DeBounce | SMTP + disposable detect | Yes | 500K/hour | $0.001/email |
| Bouncer | SMTP + toxicity check | Yes | 200K/hour | $0.005/email |

Deliverability Protection Checklist

Before sending any enriched list to outreach:

  • All emails verified within the last 7 days
  • Bounce rate on verification under 2%
  • Catch-all addresses segmented into separate pool
  • Role accounts (info@, support@) removed or deprioritized
  • Sending domain has SPF, DKIM, and DMARC configured
  • Sending domain warmed for at least 14 days
  • Daily send volume does not exceed 50 per inbox per day (cold)
  • Spam complaint rate on prior campaigns under 0.3%

Imported: Section 5: Performance Benchmarks

Expected Conversion Lift from Enrichment

| Metric | Before Waterfall | After Waterfall | Improvement |
| --- | --- | --- | --- |
| Email coverage rate | 55-65% | 85-95% | +30-40% |
| Email bounce rate | 7-15% | <2% (verified) | -70-85% |
| Connect rate (cold call) | 4-6% | 8-12% | +80-100% |
| Pipeline generated | Baseline | +37% | Significant |
| Meeting-to-customer conversion | Baseline | +27% | Significant |
| MQL-to-SQL rate (with intent) | 8-12% | 15-25% | +80-100% |

Cost-Per-Verified-Lead Benchmarks

| Approach | Cost Per Lead | Coverage | Quality |
| --- | --- | --- | --- |
| Single provider (Apollo) | $0.05-$0.15 | 60% | Medium |
| Two-step waterfall | $0.15-$0.35 | 78% | Medium-High |
| Three-step waterfall | $0.30-$0.60 | 88% | High |
| Full waterfall + verification | $0.50-$1.00 | 92% verified | Very High |
| Full waterfall + intent scoring | $1.50-$3.00 | 92% + scored | Premium |

ROI Calculation Framework

Cost:  Clay Pro ($800) + Apollo ($99) + FindyMail ($49) + MillionVerifier ($25) = $973/mo
Yield: 2,000 enriched > 1,840 verified (92%) > 1,012 ICP-qualified (55%)
       > 30 meetings (3%) > 12 opps (40%) > 3 closed-won (25%) at $15K ACV = $45K/mo
ROI:   $45,000 / $973 = 46x

Adjust conversion rates for your actual pipeline. The framework matters more than the sample numbers.
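
The same framework as a minimal Python sketch, using the sample numbers above and rounding down at each stage as the example does:

```python
def monthly_roi(enriched=2_000, monthly_cost=973, acv=15_000):
    """Walk the funnel from enriched leads to closed-won revenue."""
    verified = int(enriched * 0.92)    # 1,840
    qualified = int(verified * 0.55)   # 1,012
    meetings = int(qualified * 0.03)   # 30
    opps = int(meetings * 0.40)        # 12
    won = int(opps * 0.25)             # 3
    revenue = won * acv                # $45,000
    return revenue / monthly_cost      # ~46x

print(f"{monthly_roi():.0f}x")         # 46x
```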


Imported: Section 6: Compliance

Compliance by Region

| Requirement | US (CAN-SPAM/CCPA) | EU (GDPR) | UK (UK GDPR) |
| --- | --- | --- | --- |
| B2B email consent | Opt-out model | Legitimate interest | Legitimate interest |
| Data source docs | Recommended | Required | Required |
| Right to erasure | CCPA: Yes | Required | Required |
| Data retention | Disclosure required | Define and enforce | Define and enforce |

Provider Notes

  • Dropcontact generates contacts algorithmically without a database (GDPR-native)
  • Apollo, ZoomInfo, Clearbit are compliant as platforms; you own your usage basis
  • Clay is compliant, but third-party providers accessed through Clay may not be. Verify each.
  • Bombora cooperative data is compliant; downstream outreach must follow local regulations

Safe Enrichment Practices

  1. Document your legal basis (legitimate interest for B2B is standard)
  2. Track which provider sourced each contact
  3. Honor opt-out and erasure requests within 30 days
  4. Do not enrich or contact individuals who have previously opted out
  5. Review provider DPAs annually