Awesome-omni-skills ai-sdr
AI SDR workflow skill. Use this skill when the user wants to deploy AI sales development reps, automate sales qualification, build signal-to-action routing, or design AI agent architecture for sales. Also use when the user mentions 'AI SDR,' 'AI sales agent,' 'automated qualification,' 'signal routing,' 'sales automation,' '11x,' 'Artisan,' 'AiSDR,' 'AI BDR,' or 'autonomous sales.' This skill covers AI SDR deployment, qualification automation, and agent architecture for sales development. Do NOT use it for technical implementation, code review, or software architecture. The operator should preserve the upstream workflow, copied support files, and provenance before merging or handing off.
git clone https://github.com/diegosouzapw/awesome-omni-skills
T=$(mktemp -d) && git clone --depth=1 https://github.com/diegosouzapw/awesome-omni-skills "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/ai-sdr" ~/.claude/skills/diegosouzapw-awesome-omni-skills-ai-sdr && rm -rf "$T"
skills/ai-sdr/SKILL.md
AI SDR Skill
Overview
This public intake copy repackages packages/skills-catalog/skills/(gtm)/ai-sdr from https://github.com/tech-leads-club/agent-skills into the native Omni Skills editorial shape without hiding its origin.
Use it when the operator needs the upstream workflow, support files, and repository context to stay intact while the public validator and private enhancer continue their normal downstream flow.
This intake keeps the copied upstream files intact and uses
metadata.json plus ORIGIN.md as the provenance anchor for review.
AI SDR Skill
You are an AI SDR deployment strategist. You help founders and GTM teams design, deploy, and optimize AI-powered sales development systems. You combine signal-based targeting, automated qualification, multi-channel sequencing, and human-in-the-loop handoffs to build pipeline that converts.
Imported source sections that did not map cleanly to the public headings are still preserved below or in the support files. Notable imported sections: Before Starting, Section 1: AI SDR Landscape (2025-2026), Section 2: The 4-Week AI SDR Deployment Program.
When to Use This Skill
Use this section as the trigger filter. It should make the activation boundary explicit before the operator loads files, runs commands, or opens a pull request.
- Use when the request clearly matches the imported source intent: When the user wants to deploy AI sales development reps, automate sales qualification, build signal-to-action routing, or design AI agent architecture for sales. Also use when the user mentions 'AI SDR,' 'AI sales....
- Use when the operator should preserve upstream workflow detail instead of rewriting the process from scratch.
- Use when provenance needs to stay visible in the answer, PR, or review packet.
- Use when copied upstream references, examples, or scripts materially improve the answer.
- Use when the workflow should remain reviewable in the public intake repo before the private enhancer takes over.
Operating Table
| Situation | Start here | Why it matters |
|---|---|---|
| First-time use | | Confirms repository, branch, commit, and imported path before touching the copied workflow |
| Provenance review | | Gives reviewers a plain-language audit trail for the imported source |
| Workflow execution | | Starts with the smallest copied file that materially changes execution |
| Supporting context | | Adds the next most relevant copied source file without loading the entire package |
| Handoff decision | | Helps the operator switch to a stronger native skill when the task drifts |
Workflow
This workflow is intentionally editorial and operational at the same time. It keeps the imported source useful to the operator while still satisfying the public intake standards that feed the downstream enhancer flow.
- Confirm the user goal, the scope of the imported workflow, and whether this skill is still the right router for the task.
- Read the overview and provenance files before loading any copied upstream support files.
- Load only the references, examples, prompts, or scripts that materially change the outcome for the current request.
- Execute the upstream workflow while keeping provenance and source boundaries explicit in the working notes.
- Validate the result against the upstream expectations and the evidence you can point to in the copied files.
- Escalate or hand off to a related skill when the work moves out of this imported workflow's center of gravity.
- Before merge or closure, record what was used, what changed, and what the reviewer still needs to verify.
Imported Workflow Notes
Imported: Before Starting
Before giving AI SDR advice, establish:
- Current sales motion - Inbound-led, outbound-led, product-led, or hybrid?
- Team size - Solo founder, small team (2-5), or scaled org (10+)?
- ICP clarity - Do they have a defined ICP with firmographic + behavioral criteria?
- Tech stack - CRM (HubSpot, Salesforce, Pipedrive), enrichment tools, sending infrastructure?
- Budget range - Bootstrap ($500-1K/mo), growth ($1K-5K/mo), or scale ($5K+/mo)?
- Volume targets - How many qualified meetings per month do they need?
- Data quality - Clean CRM data vs. starting from scratch?
If any of these are unclear, ask before proceeding. Bad inputs produce bad AI SDR outputs.
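If it helps to make the checklist operational, here is a minimal sketch of the intake as data; the field names and example values are illustrative, not part of the upstream source.

```python
from dataclasses import dataclass, fields
from typing import Optional

@dataclass
class DiscoveryInputs:
    """Discovery inputs to collect before giving AI SDR advice (field names are illustrative)."""
    sales_motion: Optional[str] = None            # "inbound", "outbound", "product-led", "hybrid"
    team_size: Optional[int] = None               # solo = 1, small team = 2-5, scaled org = 10+
    icp_defined: Optional[bool] = None            # firmographic + behavioral criteria exist?
    crm: Optional[str] = None                     # "hubspot", "salesforce", "pipedrive"
    monthly_budget_usd: Optional[int] = None      # bootstrap, growth, or scale range
    target_meetings_per_month: Optional[int] = None
    crm_data_clean: Optional[bool] = None

def missing_inputs(intake: DiscoveryInputs) -> list[str]:
    """Return the discovery questions that still need answers."""
    return [f.name for f in fields(intake) if getattr(intake, f.name) is None]

intake = DiscoveryInputs(sales_motion="outbound", team_size=3)
print(missing_inputs(intake))  # ask these before proceeding, e.g. ['icp_defined', 'crm', ...]
```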
Examples
Example 1: Ask for the upstream workflow directly
Use @ai-sdr to handle <task>. Start from the copied upstream workflow, load only the files that change the outcome, and keep provenance visible in the answer.
Explanation: This is the safest starting point when the operator needs the imported workflow, but not the entire repository.
Example 2: Ask for a provenance-grounded review
Review @ai-sdr against metadata.json and ORIGIN.md, then explain which copied upstream files you would load first and why.
Explanation: Use this before review or troubleshooting when you need a precise, auditable explanation of origin and file selection.
Example 3: Narrow the copied support files before execution
Use @ai-sdr for <task>. Load only the copied references, examples, or scripts that change the outcome, and name the files explicitly before proceeding.
Explanation: This keeps the skill aligned with progressive disclosure instead of loading the whole copied package by default.
Example 4: Build a reviewer packet
Review @ai-sdr using the copied upstream files plus provenance, then summarize any gaps before merge.
Explanation: This is useful when the PR is waiting for human review and you want a repeatable audit packet.
Imported Usage Notes
Imported: Examples
- User says: "Set up an AI SDR" → Result: Agent asks pipeline need, CRM, and budget; recommends platform (11x, Artisan, AiSDR) and 4-week program; outlines 30-second checklist (ICP, enrichment 80%+, 3 email variants, signal-to-action, sending, handoff, CRM, reply classification); sets speed-to-lead (P0 <5 min, reply handoff <5 min).
- User says: "Our AI SDR reply rate is low" → Result: Agent checks instruction stack (messaging, personalization, sequence); suggests A/B on first line and CTA; verifies enrichment and signal quality; ties to ai-cold-outreach and lead-enrichment.
- User says: "When to use AI SDR vs human SDR?" → Result: Agent maps use cases (volume, qualification, handoff); recommends AI for list build, sequences, reply classification; human for first close, complex deals, and handoff triggers; suggests 4-week ramp and weekly optimization.
Best Practices
Treat the generated public skill as a reviewable packaging layer around the upstream repository. The goal is to keep provenance explicit and load only the copied source material that materially improves execution.
- Keep the imported skill grounded in the upstream repository; do not invent steps that the source material cannot support.
- Prefer the smallest useful set of support files so the workflow stays auditable and fast to review.
- Keep provenance, source commit, and imported file paths visible in notes and PR descriptions.
- Point directly at the copied upstream files that justify the workflow instead of relying on generic review boilerplate.
- Treat generated examples as scaffolding; adapt them to the concrete task before execution.
- Route to a stronger native skill when architecture, debugging, design, or security concerns become dominant.
Troubleshooting
Problem: The operator skipped the imported context and answered too generically
Symptoms: The result ignores the upstream workflow in
packages/skills-catalog/skills/(gtm)/ai-sdr, fails to mention provenance, or does not use any copied source files at all.
Solution: Re-open metadata.json, ORIGIN.md, and the most relevant copied upstream files. Load only the files that materially change the answer, then restate the provenance before continuing.
Problem: The imported workflow feels incomplete during review
Symptoms: Reviewers can see the generated
SKILL.md, but they cannot quickly tell which references, examples, or scripts matter for the current task.
Solution: Point at the exact copied references, examples, scripts, or assets that justify the path you took. If the gap is still real, record it in the PR instead of hiding it.
Problem: The task drifted into a different specialization
Symptoms: The imported skill starts in the right place, but the work turns into debugging, architecture, design, security, or release orchestration that a native skill handles better.
Solution: Use the related skills section to hand off deliberately. Keep the imported provenance visible so the next skill inherits the right context instead of starting blind.
Imported Troubleshooting Notes
Imported: Troubleshooting
- Low meeting conversion → Cause: Weak qualification or wrong handoff. Fix: Define qualification criteria and handoff triggers; ensure positive-reply-to-handoff <5 min; train on objection handling; review reply sentiment accuracy.
- Deliverability issues → Cause: Warmup, volume, or authentication. Fix: Run deliverability checklist (SPF, DKIM, DMARC, unsubscribe, bounce <2%, warmup 14–28d, <50/mailbox); test inbox placement (GlockApps, mail-tester).
- Tool swap didn't help → Cause: Instruction stack or context missing. Fix: Document ICP scoring, messaging framework, personalization rules, sequence logic; ensure persistent context and feedback loop; fix architecture before changing tools.
For checklists, speed-to-lead targets, the deliverability checklist, and discovery questions, read references/quick-reference.md.
Related Skills
- @accessibility: Use when the work is better handled by that native specialization after this imported skill establishes context.
- @ai-cold-outreach: Use when the work is better handled by that native specialization after this imported skill establishes context.
- @ai-pricing: Use when the work is better handled by that native specialization after this imported skill establishes context.
- @ai-seo: Use when the work is better handled by that native specialization after this imported skill establishes context.
Additional Resources
Use this support matrix and the linked files below as the operator packet for this imported skill. They should reflect real copied source material, not generic scaffolding.
| Resource family | What it gives the reviewer | Example path |
|---|---|---|
| References | Copied reference notes, guides, or background material from upstream | references/quick-reference.md |
| Examples | Worked examples or reusable prompts copied from upstream | |
| Scripts | Upstream helper scripts that change execution or validation | |
| Routing | Routing or delegation notes that are genuinely part of the imported package | |
| Assets | Supporting assets or schemas copied from the source package | |
Imported Reference Notes
Imported: Section 1: AI SDR Landscape (2025-2026)
What AI SDRs Actually Do
AI SDRs automate the repetitive work of sales development:
- List building and lead enrichment
- ICP scoring and qualification
- Personalized email/LinkedIn/SMS generation
- Multi-step sequence execution
- Meeting booking and calendar coordination
- Reply classification and routing
- CRM logging and data hygiene
They do NOT replace humans at conversion points. The handoff model matters more than the automation model.
Platform Comparison Table
| Platform | Price/mo | Best For | Key Differentiator | Channels |
|---|---|---|---|---|
| 11x (Alice) | $5K-10K | Enterprise outbound | Full autonomous agent with brand voice learning | Email, LinkedIn, Phone |
| Artisan (Ava) | $2.4K-7.2K | Mid-market teams | Built-in enrichment + brand-safe personalization | Email, LinkedIn |
| AiSDR | $900-2.5K | HubSpot-native teams | Managed service, GTM support included | Email, LinkedIn, SMS |
| Relevance AI | Custom | Custom agent builders | Drag-and-drop agent builder with full API | Any (API-based) |
| Clay | $149-800 | Data + enrichment workflows | 75+ provider waterfall, Claygent AI research | Feeds into any sending tool |
| Instantly | $30-97 | Cold email at scale | 450M+ lead database, built-in warmup network | Email |
| Smartlead | $39-94 | Deliverability-focused sending | Unlimited mailboxes, AI warmup engine | Email |
| Salesforge | $48-96 | Multi-channel sequences | Agent Frank for LinkedIn + email combined | Email, LinkedIn |
Platform Selection Decision Framework
START:
- Do you need a full autonomous agent (minimal human involvement)?
  - YES: Budget > $5K/mo?
    - YES: 11x (Alice/Julian)
    - NO: Artisan (Ava)
  - NO: Do you want to build custom agent workflows?
    - YES: Relevance AI (or n8n + LLM)
    - NO: Do you need enrichment + list building?
      - YES: Clay (feed into any sender)
      - NO: Do you need a managed AI SDR service?
        - YES: AiSDR (especially if HubSpot)
        - NO: Instantly or Smartlead (sending layer only)
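A small sketch of the same decision tree as code, useful if the operator wants to encode the routing in a tool; the boolean inputs and their names are assumptions, and only the branch order comes from the framework above.

```python
def recommend_platform(
    needs_autonomous_agent: bool,
    monthly_budget_usd: int,
    wants_custom_workflows: bool,
    needs_enrichment: bool,
    wants_managed_service: bool,
) -> str:
    """Walk the selection decision tree above and return a platform suggestion."""
    if needs_autonomous_agent:
        return "11x (Alice/Julian)" if monthly_budget_usd > 5000 else "Artisan (Ava)"
    if wants_custom_workflows:
        return "Relevance AI (or n8n + LLM)"
    if needs_enrichment:
        return "Clay (feed into any sender)"
    if wants_managed_service:
        return "AiSDR (especially if HubSpot)"
    return "Instantly or Smartlead (sending layer only)"

print(recommend_platform(False, 2000, False, True, False))  # Clay (feed into any sender)
```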
Key Metrics Benchmarks
| Metric | Human SDR | AI SDR |
|---|---|---|
| Prospects contacted/day | 50-80 | 1,000+ |
| Cold email reply rate | 5-8% | 8-12% |
| Cost per meeting booked | $800-1,500 | $150-400 |
| Meetings booked/month | 12-20 | 30-60 |
| Meeting show rate | 75-85% | 65-75% |
| Lead-to-opportunity rate | 20-25% | 15-20% |
| Ramp time | 3-6 months | 2-4 weeks |
| Annual cost (fully loaded) | $75K-120K | $12K-36K |
Important: AI SDRs win on volume and cost. Human SDRs win on conversion quality and complex deal navigation. The best teams combine both.
Imported: Section 2: The 4-Week AI SDR Deployment Program
Week 1: Foundation (Signal Setup + List Building)
Day 1-2: ICP Definition and Signal Configuration
Define your ICP with scoring criteria:
TIER 1 (Score 80-100): Auto-enroll in sequence
- Company size: 50-500 employees
- Revenue: $5M-50M ARR
- Industry: SaaS, fintech, e-commerce
- Tech stack: Uses Salesforce/HubSpot + Slack
- Hiring signal: Posted SDR/AE roles in last 90 days
- Funding signal: Raised Series A-C in last 12 months

TIER 2 (Score 50-79): Review before enrolling
- Meets 3 of 5 firmographic criteria
- Has at least 1 intent signal
- No disqualifying factors

TIER 3 (Score 0-49): Nurture or disqualify
- Meets fewer than 3 criteria
- No intent signals detected
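A hedged scoring sketch built on the tiering above; the upstream source fixes only the criteria and the 80/50 tier boundaries, so the individual point weights here are illustrative.

```python
def score_prospect(p: dict) -> tuple[int, str]:
    """Score a prospect dict against the ICP criteria above and map the total to a tier."""
    score = 0
    if 50 <= p.get("employees", 0) <= 500:
        score += 20
    if 5_000_000 <= p.get("arr_usd", 0) <= 50_000_000:
        score += 20
    if p.get("industry") in {"saas", "fintech", "ecommerce"}:
        score += 15
    stack = set(p.get("tech_stack", []))
    if ({"salesforce", "hubspot"} & stack) and "slack" in stack:
        score += 15
    if p.get("hiring_sdr_ae_last_90d"):
        score += 15
    if p.get("raised_series_a_to_c_last_12mo"):
        score += 15
    if score >= 80:
        tier = "TIER 1: auto-enroll"
    elif score >= 50:
        tier = "TIER 2: review before enrolling"
    else:
        tier = "TIER 3: nurture or disqualify"
    return score, tier

print(score_prospect({"employees": 120, "arr_usd": 12_000_000, "industry": "saas",
                      "tech_stack": ["hubspot", "slack"],
                      "hiring_sdr_ae_last_90d": True}))  # (85, 'TIER 1: auto-enroll')
```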
Day 3-4: Enrichment Waterfall Setup
Build a Clay table (or equivalent) with cascading data providers:
Step 1: Apollo --> Email + phone + title
Step 2: Clearbit --> Firmographics + tech stack
Step 3: ZoomInfo --> Direct dials + org chart
Step 4: Hunter.io --> Email verification
Step 5: Claygent --> Custom web scraping for last-mile data
Step 6: BuiltWith --> Technology signals
Step 7: LinkedIn Sales Navigator --> Social proximity + mutual connections
Target: 80%+ email match rate across your ICP list. If you are below 60% after the waterfall, your source list quality is the problem.
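The waterfall pattern itself is simple to sketch: try providers in order, stop at the first hit, and measure the match rate against the 80% target. The provider callables below are placeholders standing in for Apollo, Clearbit, Hunter.io, and the rest, not real client API signatures.

```python
from typing import Callable, Optional

# Each provider is a callable that takes a prospect dict and returns an email
# string or None; real integrations would live behind these placeholders.
Provider = Callable[[dict], Optional[str]]

def enrich_email(prospect: dict, providers: list[tuple[str, Provider]]) -> Optional[str]:
    """Run the waterfall: the first provider that returns a value wins."""
    for name, lookup in providers:
        email = lookup(prospect)
        if email:
            prospect["email_source"] = name
            return email
    return None

def match_rate(prospects: list[dict], providers: list[tuple[str, Provider]]) -> float:
    """Target is 80%+; below 60% usually means the source list is the problem."""
    found = sum(1 for p in prospects if enrich_email(p, providers))
    return found / len(prospects) if prospects else 0.0
```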
Day 5: Build Initial Prospect List
- Pull 500 ICP-scored prospects into your enrichment workflow
- Score each prospect against your tier criteria
- Tag with relevant signals (funding, hiring, tech adoption, content engagement)
- Export Tier 1 prospects (target: 150-200) for Week 2 sequencing
Week 2: Content (Sequence Creation + Personalization)
Day 6-7: Persona-Based Email Variants
Create 3 email variants per buyer persona. Each variant needs:
VARIANT STRUCTURE:
- Subject line --> Pain-point or signal-based (no clickbait)
- Opening line --> Personalized to signal or recent event
- Value prop --> One specific outcome, with number if possible
- Social proof --> Name-drop a similar company or metric
- CTA --> Low-friction ask (reply, 15-min call, resource)
- Length --> 50-125 words (5-10 lines max)
Example persona matrix:
| Persona | Variant A | Variant B | Variant C |
|---|---|---|---|
| VP Sales | Pipeline velocity angle | Rep productivity angle | Competitive intel angle |
| Head of RevOps | Data accuracy angle | Process automation angle | Reporting/attribution angle |
| Founder/CEO | Revenue growth angle | Cost reduction angle | Market timing angle |
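If the matrix needs to drive a sequence builder, it can be expressed as plain data; the dict shape and key names below are illustrative, only the angles come from the table.

```python
# The persona-to-angle matrix above expressed as data, so a sequence builder
# can pick a variant programmatically.
VARIANT_ANGLES = {
    "vp_sales": ["pipeline velocity", "rep productivity", "competitive intel"],
    "head_of_revops": ["data accuracy", "process automation", "reporting/attribution"],
    "founder_ceo": ["revenue growth", "cost reduction", "market timing"],
}

def pick_angle(persona: str, variant: str) -> str:
    """Map variant letter A/B/C to the persona's messaging angle."""
    return VARIANT_ANGLES[persona]["ABC".index(variant)]

print(pick_angle("head_of_revops", "B"))  # process automation
```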
Day 8-9: AI Personalization Layer
For each prospect, generate a personalized opening line using:
- Recent LinkedIn post or article they published
- Company news (funding, product launch, expansion)
- Hiring patterns that indicate pain points
- Mutual connections or shared communities
- Tech stack signals that indicate fit
Personalization formula: [Signal observation] + [Relevance to their role] + [Bridge to your value]
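A worked example of the formula as a tiny helper; every prospect-specific string here is invented for illustration.

```python
def opening_line(signal: str, role_relevance: str, value_bridge: str) -> str:
    """[Signal observation] + [Relevance to their role] + [Bridge to your value]."""
    return f"{signal} {role_relevance} {value_bridge}"

# Hypothetical prospect data; every string below is illustrative.
line = opening_line(
    signal="Saw you're hiring two SDRs in Austin --",
    role_relevance="usually a sign pipeline coverage is the next bottleneck for a VP Sales.",
    value_bridge="Teams in that spot use us to keep reply handling under 5 minutes while the new reps ramp.",
)
print(line)
```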
Day 10: Conditional Branching Logic
Build sequences with conditional paths:
Email 1 (Day 0)
- Opens, no reply --> Email 2 (Day 3) [deeper value]
  - Reply --> Route to human
  - No reply --> LinkedIn touch (Day 5)
    - Reply --> Route to human
    - No reply --> Final email (Day 14) --> Archive
- No open --> Email 2b (Day 4) [new subject line]
  - Opens --> Email 3 (Day 7)
    - Reply --> Route to human
    - No reply --> Email 4 (Day 10, break-up email)
  - No open --> Sequence ends
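The same branching logic as a lookup table, in case the operator wants to validate a platform's sequence configuration against it; the step identifiers are invented, and only the structure and day offsets mirror the branches above.

```python
# Branch logic as a lookup table: (current step, event) -> next step.
NEXT_STEP = {
    ("email_1_day0", "open_no_reply"): "email_2_day3_deeper_value",
    ("email_1_day0", "no_open"): "email_2b_day4_new_subject",
    ("email_2_day3_deeper_value", "reply"): "route_to_human",
    ("email_2_day3_deeper_value", "no_reply"): "linkedin_touch_day5",
    ("email_2b_day4_new_subject", "open"): "email_3_day7",
    ("email_2b_day4_new_subject", "no_open"): "sequence_ends",
    ("linkedin_touch_day5", "reply"): "route_to_human",
    ("linkedin_touch_day5", "no_reply"): "final_email_day14",
    ("email_3_day7", "reply"): "route_to_human",
    ("email_3_day7", "no_reply"): "email_4_day10_breakup",
    ("final_email_day14", "no_reply"): "archive",
}

def advance(step: str, event: str) -> str:
    """Resolve the next sequence step; unknown transitions end the sequence."""
    return NEXT_STEP.get((step, event), "sequence_ends")

print(advance("email_1_day0", "no_open"))  # email_2b_day4_new_subject
```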
Week 3: Launch (Sending Infrastructure + Go-Live)
Day 11-12: Domain and Mailbox Setup
Infrastructure requirements:
DOMAIN SETUP:
- Purchase 5-10 secondary domains (variations of primary)
- Example: getacme.com, acmehq.io, tryacme.com, useacme.co
- Set up SPF, DKIM, and DMARC records for each
- Create 2-3 mailboxes per domain
- Total: 10-30 sending mailboxes

WARMUP PROTOCOL:
- Day 1-7: 5 emails/day per mailbox (warmup only)
- Day 8-14: 10 emails/day (mix of warmup + real)
- Day 15-21: 20 emails/day (mostly real sends)
- Day 22-28: 30-40 emails/day (full volume)
- NEVER exceed 50 emails/day per mailbox
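A small sketch of the warmup ramp as code, assuming the upper bound of each range; useful for sanity-checking total daily capacity across the mailbox fleet.

```python
def daily_send_cap(day: int) -> int:
    """Per-mailbox cap for a given warmup day, following the protocol above (upper bounds)."""
    if day <= 7:
        return 5            # warmup only
    if day <= 14:
        return 10           # mix of warmup + real
    if day <= 21:
        return 20           # mostly real sends
    if day <= 28:
        return 40           # ramping toward full volume
    return 50               # hard ceiling: never exceed 50/day per mailbox

def fleet_capacity(day: int, mailboxes: int) -> int:
    """Total addressable sends/day across the 10-30 mailbox fleet."""
    return daily_send_cap(day) * mailboxes

print(fleet_capacity(day=16, mailboxes=20))  # 400
```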
Compliance requirements (2025+ enforcement):
- SPF, DKIM, DMARC properly configured
- One-click unsubscribe header included
- Spam complaint rate below 0.3%
- Bounce rate below 2%
- Google, Yahoo, and Microsoft all enforce these rules now
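A minimal go/no-go sketch using the thresholds above; the metric names are illustrative and would come from whatever sending platform is in use.

```python
def deliverability_gate(metrics: dict) -> list[str]:
    """Return blocking issues based on the compliance thresholds above; empty list means go."""
    issues = []
    if not metrics.get("spf_dkim_dmarc_ok"):
        issues.append("SPF/DKIM/DMARC not fully configured")
    if not metrics.get("one_click_unsubscribe"):
        issues.append("missing one-click unsubscribe header")
    if metrics.get("spam_complaint_rate", 1.0) >= 0.003:
        issues.append("spam complaint rate at or above 0.3%")
    if metrics.get("bounce_rate", 1.0) >= 0.02:
        issues.append("bounce rate at or above 2%")
    return issues

print(deliverability_gate({"spf_dkim_dmarc_ok": True, "one_click_unsubscribe": True,
                           "spam_complaint_rate": 0.001, "bounce_rate": 0.015}))  # []
```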
Day 13: Sending Platform Configuration
Choose your sending layer:
| Feature | Instantly | Smartlead |
|---|---|---|
| Warmup network | 4.2M+ accounts | AI-adaptive |
| Mailbox limit | Unlimited | Unlimited |
| Lead database | 450M+ contacts | No built-in DB |
| Reply handling | AI Reply Agent | Unibox |
| IP rotation | Automatic (SISR) | Manual config |
| Starting price | $30/mo | $39/mo |
| Best for | All-in-one outbound | Deliverability optimization |
Day 14-15: Soft Launch
- Launch to Tier 1 prospects only (100-150 contacts)
- Monitor deliverability metrics hourly for the first 24 hours
- Check inbox placement (use GlockApps or mail-tester.com)
- Watch for bounce rates above 2% and pause if triggered
- Target: 95%+ delivery rate before expanding volume
Week 4: Optimize (Measure + Iterate)
Day 16-18: A/B Testing Framework
Test one variable at a time:
PRIORITY TEST ORDER:
1. Subject lines --> Impact on open rate
2. Opening lines --> Impact on reply rate
3. CTA type --> Impact on positive reply rate
4. Send timing --> Impact on open + reply
5. Sequence length --> Impact on total conversion
6. Personalization depth --> Impact on reply sentiment
Minimum sample size: 100 sends per variant before drawing conclusions.
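Once both variants have cleared the 100-send minimum, a standard two-proportion z-test is one reasonable way to compare reply rates; this is a generic statistical sketch, not something prescribed by the upstream skill.

```python
from math import sqrt, erf

def ab_reply_rate_test(replies_a: int, sends_a: int, replies_b: int, sends_b: int) -> float:
    """Two-sided p-value from a two-proportion z-test on reply rates."""
    p_a, p_b = replies_a / sends_a, replies_b / sends_b
    pooled = (replies_a + replies_b) / (sends_a + sends_b)
    se = sqrt(pooled * (1 - pooled) * (1 / sends_a + 1 / sends_b))
    if se == 0:
        return 1.0
    z = (p_a - p_b) / se
    # Phi(x) = 0.5 * (1 + erf(x / sqrt(2))); two-sided tail probability.
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Only compare after each variant has at least 100 sends.
print(round(ab_reply_rate_test(replies_a=14, sends_a=120, replies_b=6, sends_b=118), 3))
```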
Day 19-20: Reply Sentiment Analysis
Classify all replies into categories:
POSITIVE (route to human immediately):
- "Tell me more"
- "Can you send details?"
- "Let's set up a call"
- Meeting booked via CTA

NEUTRAL (AI follow-up, then route):
- "Not now, maybe later"
- "Send me more info"
- "Who else do you work with?"

NEGATIVE (remove from sequence):
- "Not interested"
- "Remove me"
- "Wrong person"

OBJECTION (AI handles with playbook):
- "We already have a solution"
- "No budget right now"
- "Need to talk to my team"
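A rough sketch of the classification and routing step; a production AI SDR would classify replies with an LLM, so the keyword matching here only illustrates the routing contract.

```python
def classify_reply(text: str) -> str:
    """Very rough keyword classifier; production systems would use an LLM here."""
    t = text.lower()
    if any(k in t for k in ("set up a call", "send details", "tell me more", "book")):
        return "positive"
    if any(k in t for k in ("already have", "no budget", "talk to my team")):
        return "objection"
    if any(k in t for k in ("not interested", "remove me", "wrong person")):
        return "negative"
    return "neutral"

ROUTING = {
    "positive": "route to human immediately (target: <5 min handoff)",
    "neutral": "AI follow-up, then route",
    "negative": "remove from sequence",
    "objection": "AI handles with objection playbook",
}

reply = "Thanks, but we already have a solution for this."
label = classify_reply(reply)
print(label, "->", ROUTING[label])  # objection -> AI handles with objection playbook
```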
Day 21: ICP Scoring Adjustment
Review first 3 weeks of data and adjust:
- Which firmographic traits correlate with positive replies?
- Which signals predicted meetings booked?
- Which personas converted at the highest rate?
- Which Tier 2 prospects should be upgraded or downgraded?
Recalibrate scoring weights based on actual conversion data, not assumptions.
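A sketch of that recalibration step: compute the positive-reply rate per tagged trait or signal and compare it with the current scoring weights. The record shape is assumed, not taken from the upstream files.

```python
from collections import defaultdict

def positive_rate_by_trait(outcomes: list[dict]) -> dict[str, float]:
    """For each tagged trait or signal, the share of prospects that replied positively.

    Each outcome record is assumed to look like
    {"traits": ["saas", "hiring_sdr", ...], "positive_reply": True}.
    """
    counts, positives = defaultdict(int), defaultdict(int)
    for o in outcomes:
        for trait in o.get("traits", []):
            counts[trait] += 1
            positives[trait] += int(bool(o.get("positive_reply")))
    return {t: positives[t] / counts[t] for t in counts}

outcomes = [
    {"traits": ["saas", "hiring_sdr"], "positive_reply": True},
    {"traits": ["saas"], "positive_reply": False},
    {"traits": ["fintech", "hiring_sdr"], "positive_reply": True},
]
print(positive_rate_by_trait(outcomes))  # e.g. {'saas': 0.5, 'hiring_sdr': 1.0, 'fintech': 1.0}
```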
For signal-to-action routing, agent architecture, qualification, human handoff, cost/ROI, and failure modes, read references/implementation-guide.md when designing or debugging an AI SDR deployment.