Awesome-omni-skills gtm-engineering

GTM Engineering: Automation, Architecture & Agent Orchestration workflow skill. Use this skill when the user wants to build GTM automation with code, design workflow architectures, use AI agents for GTM tasks, or implement the 'architecture over tools' principle. Also use when the user mentions 'GTM engineering,' 'GTM automation,' 'n8n,' 'Make,' 'Zapier,' 'workflow automation,' 'Clay API,' 'instruction stacks,' 'AI agents for GTM,' or 'revenue automation.' This skill covers technical GTM infrastructure from workflow design through agent orchestration. Do NOT use for general technical implementation, code review, or software architecture. The operator should preserve the upstream workflow, copied support files, and provenance before merging or handing off.

install
source · Clone the upstream repo
git clone https://github.com/diegosouzapw/awesome-omni-skills
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/diegosouzapw/awesome-omni-skills "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/gtm-engineering" ~/.claude/skills/diegosouzapw-awesome-omni-skills-gtm-engineering && rm -rf "$T"
manifest: skills/gtm-engineering/SKILL.md
source content

GTM Engineering: Automation, Architecture & Agent Orchestration

Overview

This public intake copy packages packages/skills-catalog/skills/(gtm)/gtm-engineering from https://github.com/tech-leads-club/agent-skills into the native Omni Skills editorial shape without hiding its origin.

Use it when the operator needs the upstream workflow, support files, and repository context to stay intact while the public validator and private enhancer continue their normal downstream flow.

This intake keeps the copied upstream files intact and uses metadata.json plus ORIGIN.md as the provenance anchor for review.

You are an expert in GTM engineering, workflow automation architecture, and AI agent orchestration for revenue teams. You combine deep technical knowledge of automation platforms (n8n, Make, Zapier, Tray.io, Workato) with API-first design principles, event-driven architectures, and the "architecture over tools" philosophy. You understand that the advantage is never the tool itself but the instruction stack, persistent context, and feedback loops built around it.

You help founders, RevOps teams, and GTM engineers design, build, and scale automation systems that turn manual GTM processes into reliable, observable, cost-efficient pipelines. You understand the 2025-2026 landscape, where GTM Engineer has emerged as a dedicated role combining software engineering skills with commercial acumen, and where AI agents are shifting from simple task automation to autonomous multi-step workflow execution.

Imported source sections that did not map cleanly to the public headings are still preserved below or in the support files. Notable imported sections: Before Starting, 1. The GTM Engineer Role, 2. Architecture Over Tools, 3. Automation Platform Comparison.

When to Use This Skill

Use this section as the trigger filter. It should make the activation boundary explicit before the operator loads files, runs commands, or opens a pull request.

  • Use when the request clearly matches the imported source intent: When the user wants to build GTM automation with code, design workflow architectures, use AI agents for GTM tasks, or implement the 'architecture over tools' principle. Also use when the user mentions 'GTM....
  • Use when the operator should preserve upstream workflow detail instead of rewriting the process from scratch.
  • Use when provenance needs to stay visible in the answer, PR, or review packet.
  • Use when copied upstream references, examples, or scripts materially improve the answer.
  • Use when the workflow should remain reviewable in the public intake repo before the private enhancer takes over.

Operating Table

| Situation | Start here | Why it matters |
| --- | --- | --- |
| First-time use | metadata.json | Confirms repository, branch, commit, and imported path before touching the copied workflow |
| Provenance review | ORIGIN.md | Gives reviewers a plain-language audit trail for the imported source |
| Workflow execution | references/implementation-guide.md | Starts with the smallest copied file that materially changes execution |
| Supporting context | references/quick-reference.md | Adds the next most relevant copied source file without loading the entire package |
| Handoff decision | Related Skills | Helps the operator switch to a stronger native skill when the task drifts |

Workflow

This workflow is intentionally editorial and operational at the same time. It keeps the imported source useful to the operator while still satisfying the public intake standards that feed the downstream enhancer flow.

  1. Confirm the user goal, the scope of the imported workflow, and whether this skill is still the right router for the task.
  2. Read the overview and provenance files before loading any copied upstream support files.
  3. Load only the references, examples, prompts, or scripts that materially change the outcome for the current request.
  4. Execute the upstream workflow while keeping provenance and source boundaries explicit in the working notes.
  5. Validate the result against the upstream expectations and the evidence you can point to in the copied files.
  6. Escalate or hand off to a related skill when the work moves out of this imported workflow's center of gravity.
  7. Before merge or closure, record what was used, what changed, and what the reviewer still needs to verify.

Imported Workflow Notes

Imported: Before Starting

Gather this context before designing any GTM automation or architecture:

  • What GTM motions are currently running? Outbound, inbound, PLG, partner, or a mix. Which generates the most pipeline today.
  • What is the current tech stack? CRM (Salesforce, HubSpot, other), enrichment tools, outreach tools, analytics. Get specific product names and tiers.
  • What manual processes take the most time? Ask for the top 3 repetitive workflows the team does weekly.
  • What is the team's technical depth? Can they write Python/JS, or do they need no-code/low-code solutions exclusively.
  • What automation exists today? Any n8n, Make, Zapier flows already running. What breaks most often.
  • What data sources feed the GTM motion? Website analytics, intent providers, CRM events, product usage data, third-party enrichment.
  • What is the monthly budget for automation tooling? This determines platform choice and API call volume limits.
  • What is the lead volume? Matters for pricing models. 500 leads/month is a different architecture than 50,000.
  • Who maintains the automations today? A dedicated ops person, a founder wearing many hats, or nobody.
  • What compliance or security requirements exist? SOC2, GDPR, data residency, single-tenant requirements.

Examples

Example 1: Ask for the upstream workflow directly

Use @gtm-engineering to handle <task>. Start from the copied upstream workflow, load only the files that change the outcome, and keep provenance visible in the answer.

Explanation: This is the safest starting point when the operator needs the imported workflow, but not the entire repository.

Example 2: Ask for a provenance-grounded review

Review @gtm-engineering against metadata.json and ORIGIN.md, then explain which copied upstream files you would load first and why.

Explanation: Use this before review or troubleshooting when you need a precise, auditable explanation of origin and file selection.

Example 3: Narrow the copied support files before execution

Use @gtm-engineering for <task>. Load only the copied references, examples, or scripts that change the outcome, and name the files explicitly before proceeding.

Explanation: This keeps the skill aligned with progressive disclosure instead of loading the whole copied package by default.

Example 4: Build a reviewer packet

Review @gtm-engineering using the copied upstream files plus provenance, then summarize any gaps before merge.

Explanation: This is useful when the PR is waiting for human review and you want a repeatable audit packet.

Imported Usage Notes

Imported: Examples

  • User says: "Automate our lead routing and enrichment" → Result: Agent asks volume, CRM, and current stack; recommends n8n/Make/Zapier by complexity; designs instruction stack (ICP scoring, enrichment 0.85+ confidence, hot lead <1 hr SLA); suggests workflow export to Git and alerts (workflow <95%, bounce >5%).
  • User says: "Our automations break often" → Result: Agent asks what fails (enrichment, sending, CRM sync); recommends version control (JSON to Git), monitoring (Grafana + platform metrics), and caching TTL (30–90d); suggests LLM cost split (Haiku for classification, Sonnet for writing).
  • User says: "Build AI SDR infrastructure" → Result: Agent ties to ai-sdr and lead-enrichment; outlines enrichment waterfall, scoring (fit + intent), signal-to-action routing, and handoff; recommends hot/warm SLA and feedback loop back to targeting.

Best Practices

Treat the generated public skill as a reviewable packaging layer around the upstream repository. The goal is to keep provenance explicit and load only the copied source material that materially improves execution.

  • Keep the imported skill grounded in the upstream repository; do not invent steps that the source material cannot support.
  • Prefer the smallest useful set of support files so the workflow stays auditable and fast to review.
  • Keep provenance, source commit, and imported file paths visible in notes and PR descriptions.
  • Point directly at the copied upstream files that justify the workflow instead of relying on generic review boilerplate.
  • Treat generated examples as scaffolding; adapt them to the concrete task before execution.
  • Route to a stronger native skill when architecture, debugging, design, or security concerns become dominant.

Troubleshooting

Problem: The operator skipped the imported context and answered too generically

Symptoms: The result ignores the upstream workflow in packages/skills-catalog/skills/(gtm)/gtm-engineering, fails to mention provenance, or does not use any copied source files at all.

Solution: Re-open metadata.json, ORIGIN.md, and the most relevant copied upstream files. Load only the files that materially change the answer, then restate the provenance before continuing.

Problem: The imported workflow feels incomplete during review

Symptoms: Reviewers can see the generated SKILL.md, but they cannot quickly tell which references, examples, or scripts matter for the current task.

Solution: Point at the exact copied references, examples, scripts, or assets that justify the path you took. If the gap is still real, record it in the PR instead of hiding it.

Problem: The task drifted into a different specialization

Symptoms: The imported skill starts in the right place, but the work turns into debugging, architecture, design, security, or release orchestration that a native skill handles better. Solution: Use the related skills section to hand off deliberately. Keep the imported provenance visible so the next skill inherits the right context instead of starting blind.

Imported Troubleshooting Notes

Imported: Troubleshooting

  • Workflow success rate below 95%. Cause: API rate limits, bad data, or timeouts. Fix: Add retries and backoff; validate inputs; alert on failure; cache enrichment; version workflows in Git.
  • Enrichment hit rate low. Cause: Wrong provider order or stale cache. Fix: Reorder the waterfall; set confidence thresholds (accept at 0.85, flag at 0.50, reject below 0.50); re-enrich on a 30–90 day cadence; track per-provider fill.
  • Lead response time too slow. Cause: Manual steps or batch runs. Fix: Hot leads under 5 minutes (inbound) and under 1 hour overall; warm leads under 4 hours; automate routing and first touch; use real-time enrichment where possible.
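The first fix above, retries with backoff, can be sketched as follows. The delay schedule and retry count are assumptions, not upstream requirements:

```python
import time


def call_with_backoff(fn, max_retries=4, base_delay=1.0, sleep=time.sleep):
    """Retry a flaky API call with exponential backoff (1s, 2s, 4s, ...)."""
    for attempt in range(max_retries):
        try:
            return fn()
        except Exception:
            if attempt == max_retries - 1:
                raise  # final failure: let the alerting layer see it
            sleep(base_delay * (2 ** attempt))


# Simulated rate-limited endpoint that succeeds on the third call.
calls = {"n": 0}

def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("rate limited")
    return "ok"

result = call_with_backoff(flaky, sleep=lambda s: None)
```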

For checklists, benchmarks, and discovery questions, read references/quick-reference.md when you need detailed reference material.


Related Skills

  • @accessibility
    - Use when the work is better handled by that native specialization after this imported skill establishes context.
  • @ai-cold-outreach
    - Use when the work is better handled by that native specialization after this imported skill establishes context.
  • @ai-pricing
    - Use when the work is better handled by that native specialization after this imported skill establishes context.
  • @ai-sdr
    - Use when the work is better handled by that native specialization after this imported skill establishes context.

Additional Resources

Use this support matrix and the linked files below as the operator packet for this imported skill. They should reflect real copied source material, not generic scaffolding.

| Resource family | What it gives the reviewer | Example path |
| --- | --- | --- |
| references | Copied reference notes, guides, or background material from upstream | references/implementation-guide.md |
| examples | Worked examples or reusable prompts copied from upstream | examples/n/a |
| scripts | Upstream helper scripts that change execution or validation | scripts/n/a |
| agents | Routing or delegation notes that are genuinely part of the imported package | agents/n/a |
| assets | Supporting assets or schemas copied from the source package | assets/n/a |

Imported Reference Notes

Imported: 1. The GTM Engineer Role

GTM engineering emerged as a named discipline in 2024-2025 and has rapidly become one of the highest-demand roles in B2B SaaS. By mid-2025, over 1,400 GTM Engineer job postings were active on LinkedIn. The role sits at the intersection of software engineering and revenue operations, applying engineering principles to the systems that generate pipeline and close deals.

What GTM Engineers Build

| Domain | Examples | Technical Skills |
| --- | --- | --- |
| Lead infrastructure | Enrichment waterfalls, scoring models, routing logic | API integration, data pipelines, SQL |
| Outreach automation | Multi-channel sequences, personalization engines, response classification | Webhook architecture, NLP/LLM integration |
| CRM automation | Deal stage progression, activity logging, alert systems | Salesforce/HubSpot APIs, event-driven design |
| Data pipelines | Enrichment flows, deduplication, hygiene scoring | ETL patterns, data validation, error handling |
| Internal tools | Sales dashboards, territory mapping, quota calculators | Frontend basics, charting libraries, database design |
| AI agent workflows | Autonomous research agents, email drafters, call summarizers | LLM APIs, prompt engineering, agent orchestration |

GTM Engineer vs Adjacent Roles

| Dimension | GTM Engineer | RevOps | Sales Ops | Marketing Ops | Software Engineer |
| --- | --- | --- | --- | --- | --- |
| Primary output | Automated workflows + custom tools | Process design + reporting | Territory/quota management | Campaign ops + attribution | Product features |
| Technical depth | Writes code, builds APIs, deploys infra | Configures tools, writes formulas | Configures CRM, manages data | Configures MAP, manages integrations | Full-stack engineering |
| Revenue proximity | Direct: builds pipeline-generating systems | Indirect: designs processes | Indirect: enables sales team | Indirect: enables marketing team | None unless product-led |
| Tool relationship | Builds on top of and between tools | Selects and configures tools | Uses tools as provided | Uses tools as provided | Builds the tools |
| Typical background | Engineering + sales/marketing exposure | Ops + analytics | Sales + analytics | Marketing + analytics | Computer science |

Career Trajectory

GTM engineering compensation reflects the hybrid skill set. Engineers who can both write production code and understand pipeline mechanics command premium salaries. The role scales from individual contributor (building specific workflows) to architect (designing the entire GTM infrastructure) to VP/Head of GTM Engineering (managing a team of builders).


Imported: 2. Architecture Over Tools

The central principle of GTM engineering: the instruction stack, persistent context, and feedback loops matter more than which specific platform runs the workflow. Two teams with identical tooling get wildly different results because one has thoughtful architecture and the other has a pile of disconnected automations.

The Instruction Stack

Every GTM automation system needs four layers of instructions that compound on each other:

+-----------------------------------------------------------+
|  LAYER 4: SEQUENCE LOGIC                                   |
|  Timing, branching, follow-up rules, escalation paths      |
+-----------------------------------------------------------+
|  LAYER 3: PERSONALIZATION RULES                            |
|  What to reference, what to avoid, tone per segment        |
+-----------------------------------------------------------+
|  LAYER 2: MESSAGING FRAMEWORK                              |
|  Value props, objection handling, CTA templates by stage    |
+-----------------------------------------------------------+
|  LAYER 1: ICP DEFINITION + SCORING                         |
|  Firmographic/technographic/intent criteria, thresholds     |
+-----------------------------------------------------------+

Layer 1: ICP Definition + Scoring

Every downstream automation depends on accurate targeting. Define who you sell to with scored criteria, not loose descriptions. This layer feeds routing, personalization, and sequence decisions.

  • Firmographic criteria: industry, employee count, revenue range, funding stage, geography
  • Technographic criteria: current tools, API maturity, cloud provider, data infrastructure
  • Intent signals: content consumption, G2 research, job postings, funding events
  • Scoring thresholds: minimum fit score to enter outreach, minimum intent score to route to sales
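A minimal sketch of how such a scored ICP layer might look in code. The weights, criterion names, and thresholds are hypothetical placeholders to be tuned per ICP:

```python
# Hypothetical scoring weights -- tune these per ICP.
FIT_WEIGHTS = {
    "industry_match": 30,
    "employee_range_match": 20,
    "funding_stage_match": 15,
    "tech_stack_match": 20,
    "geography_match": 15,
}
OUTREACH_THRESHOLD = 60      # minimum fit score to enter outreach
SALES_ROUTE_THRESHOLD = 80   # minimum score to route straight to sales


def fit_score(signals: dict) -> int:
    """Sum the weights of every firmographic/technographic criterion the lead matches."""
    return sum(w for key, w in FIT_WEIGHTS.items() if signals.get(key))


lead = {"industry_match": True, "employee_range_match": True, "tech_stack_match": True}
score = fit_score(lead)  # 30 + 20 + 20 = 70: enters outreach, not yet sales-routed
```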

Layer 2: Messaging Framework

Codify your messaging so automations produce consistent output. Store this as structured data, not scattered documents.

  • Value propositions mapped to ICP segments and pain points
  • Objection responses for the top 10 objections by segment
  • CTA variants by funnel stage (awareness, consideration, decision)
  • Proof vectors (case studies, metrics, testimonials) indexed by industry and use case
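Storing the framework as structured data might look like the following. Segment names, field names, and copy are illustrative, not upstream content:

```python
# Hypothetical messaging framework stored as structured data.
MESSAGING = {
    "segments": {
        "enterprise-data": {
            "value_props": ["Cut pipeline-report prep from days to minutes"],
            "objections": {
                "too_expensive": "Frame against the cost of a manual ops hire.",
            },
            "ctas": {
                "awareness": "Worth a look at the teardown?",
                "decision": "Open to a 20-minute pricing walkthrough?",
            },
            "proof": ["case-study:acme-2025"],  # indexed by industry/use case
        }
    }
}


def cta_for(segment: str, stage: str) -> str:
    """Look up the CTA variant for a segment at a funnel stage."""
    return MESSAGING["segments"][segment]["ctas"][stage]
```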

Layer 3: Personalization Rules

Define what the AI or automation should reference and what it must avoid. Without explicit rules, personalization degrades to generic flattery.

  • Reference: recent company news, job postings, tech stack signals, mutual connections
  • Avoid: personal information unrelated to business, assumptions about pain points, competitor bashing
  • Tone guidelines per segment: enterprise (formal, ROI-focused) vs startup (direct, speed-focused)
  • Variable insertion rules: which fields get personalized, which stay templated

Layer 4: Sequence Logic

Timing, branching, and escalation rules that govern the flow across touchpoints.

  • Channel sequence: email > LinkedIn > email > phone > breakup email
  • Timing rules: delay between steps, business-hours-only sending, timezone awareness
  • Branch conditions: if opened but no reply, if clicked pricing page, if bounced
  • Escalation: when to route from automation to human, when to alert a manager
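The branch conditions above can be sketched as a routing function. Step names and their precedence are illustrative:

```python
def next_step(state: dict) -> str:
    """Pick the next touchpoint from engagement state.

    Encodes the branch conditions above; the ordering here is an
    assumption (replies and bounces take precedence over clicks).
    """
    if state.get("replied"):
        return "route_to_human"       # escalation: automation hands off
    if state.get("bounced"):
        return "re_verify_email"
    if state.get("clicked_pricing"):
        return "priority_follow_up"   # high-intent branch
    if state.get("opened"):
        return "send_follow_up"       # opened but no reply
    return "continue_sequence"
```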

Persistent Context

Every prospect interaction must be logged and accessible to the next automation in the chain. Without persistent context, each touchpoint starts from zero.

Implementation pattern:

Prospect Record (CRM or custom DB)
  |
  +-- Enrichment data (firmographic, technographic, intent scores)
  +-- Interaction log
  |     +-- Email 1: sent, opened 2x, no reply
  |     +-- LinkedIn: connection accepted, viewed profile
  |     +-- Email 2: sent, clicked pricing link
  |     +-- Website: visited /pricing, /case-studies (2 pages, 4 min)
  |
  +-- AI context window
  |     +-- Previous email bodies sent
  |     +-- Personalization variables used
  |     +-- Objections raised (if reply received)
  |
  +-- Routing state
        +-- Current sequence step
        +-- Assigned owner
        +-- Next scheduled action
        +-- Score changes over time
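The record tree above could be modeled as a small dataclass. Field names are illustrative; a real system would persist this in the CRM or a custom DB so every automation reads the same state:

```python
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class ProspectContext:
    """Minimal persistent-context record mirroring the tree above."""
    prospect_id: str
    enrichment: dict = field(default_factory=dict)    # firmographic, technographic, intent
    interactions: list = field(default_factory=list)  # append-only touchpoint log
    ai_context: dict = field(default_factory=dict)    # prior email bodies, variables used
    sequence_step: int = 0
    owner: Optional[str] = None

    def log(self, channel: str, event: str) -> None:
        """Append one interaction so the next automation starts with full history."""
        self.interactions.append({"channel": channel, "event": event})


ctx = ProspectContext("p-001")
ctx.log("email", "sent")
ctx.log("email", "opened")
```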

Feedback Loops

The system must learn from outcomes. Without feedback loops, automations repeat the same mistakes at scale.

| Signal | Action | System Update |
| --- | --- | --- |
| Positive reply | Tag attributes of the responder (industry, title, signals present) | Refine ICP scoring weights toward this profile |
| Negative reply | Analyze the messaging that triggered the rejection | Adjust templates, update objection handling |
| No reply after full sequence | Compare against positive responders | Identify differentiating signals, update targeting |
| Meeting booked | Log which sequence step and message variant converted | Weight that variant higher in future sends |
| Deal closed-won | Full attribution: which enrichment, sequence, and personalization drove the deal | Update scoring model, replicate the pattern |
| Deal closed-lost | Analyze where the process broke down | Update disqualification criteria, fix the gap |
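One deliberately simple way to close the loop in code is to nudge scoring weights on each outcome. This update rule is an illustration, not the upstream model:

```python
def update_weight(weights: dict, signal_key: str, outcome: str, lr: float = 0.05) -> dict:
    """Nudge an ICP scoring weight after an outcome.

    Reinforce signals present on positive replies, dampen them on
    negative ones; other outcomes leave the weight untouched.
    """
    delta = {"positive_reply": +lr, "negative_reply": -lr}.get(outcome, 0.0)
    new = dict(weights)  # never mutate the live scoring config in place
    new[signal_key] = round(new.get(signal_key, 0.0) + delta, 4)
    return new


weights = {"recent_funding": 0.30}
weights = update_weight(weights, "recent_funding", "positive_reply")
```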

Architecture vs Tools: Decision Framework

| Question | Architecture Answer | Tool Answer |
| --- | --- | --- |
| "Why did this lead get this message?" | Traceable through instruction stack layers | "The workflow sent it" |
| "Why did results drop this month?" | Feedback loop data shows scoring drift | No idea; rebuild the workflow |
| "Can we replicate this for a new segment?" | Clone the instruction stack, adjust Layer 1 | Rebuild from scratch |
| "What happens when this tool's API changes?" | Swap the connector; the architecture holds | Everything breaks |
| "Why did two leads get contradictory messages?" | Persistent context prevents this | Race condition in parallel workflows |

Imported: 3. Automation Platform Comparison

Choosing the right platform depends on team technical depth, lead volume, budget, and integration requirements. No single tool wins across all dimensions.

n8n vs Make vs Zapier: Detailed Comparison

| Dimension | n8n | Make (Integromat) | Zapier |
| --- | --- | --- | --- |
| Architecture | Self-hosted or cloud, node-based | Cloud-native, visual scenario builder | Cloud-native, trigger-action model |
| Technical depth required | Medium-High (JSON, expressions, code nodes) | Medium (visual data mapping, some formulas) | Low (point-and-click, templates) |
| AI/LLM integration | Best-in-class: 70+ AI nodes, LangChain native | Good: HTTP module + AI modules | Good: built-in AI actions, ChatGPT plugin |
| Self-hosting | Yes (Docker, Kubernetes) | No | No |
| Pricing model | Execution-based (self-host: free/paid tiers) | Operation-based (per data operation) | Task-based (per trigger + action) |
| Price at 10K ops/month | ~$20-50 (self-hosted) or ~$50 (cloud) | ~$30-60 | ~$100-200 |
| Price at 100K ops/month | ~$50-100 (self-hosted) or ~$200 (cloud) | ~$150-300 | ~$500-1,500+ |
| Max integrations | 400+ (plus HTTP/webhook for anything) | 1,500+ | 7,000+ |
| Error handling | Native retry, error workflows, manual replay | Built-in retry, error routes, break modules | Basic retry, error paths on paid plans |
| Version control | JSON export, Git-friendly | Scenario export (JSON) | Limited (no native Git support) |
| Data sovereignty | Full control (self-hosted) | EU/US cloud regions | US cloud (enterprise: custom) |
| Branching/routing | If/Switch nodes, merge nodes | Routers, filters, iterators | Paths (paid), Filters |
| Code execution | JavaScript, Python nodes built-in | JavaScript in some modules | Limited (Code by Zapier, basic JS/Python) |
| Webhook support | Full (trigger + respond) | Full (trigger + respond) | Full (trigger + respond) |
| Best for GTM | Complex multi-step AI workflows, data pipelines | Visual workflow design, moderate complexity | Simple integrations, non-technical teams |
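The pricing rows can be turned into a rough cost estimator. The per-operation rates below are midpoints derived from the 100K-ops ranges above and should be checked against current vendor pricing before deciding:

```python
# Midpoint cost-per-operation estimates derived from the comparison table
# (illustrative; verify against current vendor pricing).
COST_PER_OP = {
    "n8n_self_hosted": 75 / 100_000,    # ~$50-100 at 100K ops/month
    "n8n_cloud": 200 / 100_000,
    "make": 225 / 100_000,              # ~$150-300 at 100K ops/month
    "zapier": 1000 / 100_000,           # ~$500-1,500+ at 100K ops/month
}


def monthly_cost(platform: str, ops_per_month: int) -> float:
    """Linear estimate of monthly spend; real tiers are stepped, not linear."""
    return round(COST_PER_OP[platform] * ops_per_month, 2)


# At 50K ops/month the spread is already material:
n8n = monthly_cost("n8n_self_hosted", 50_000)
zap = monthly_cost("zapier", 50_000)
```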

Enterprise iPaaS: Tray.io vs Workato

For larger organizations with complex integration needs, enterprise iPaaS platforms provide governance, compliance, and scale.

| Dimension | Tray.io | Workato |
| --- | --- | --- |
| Target | Mid-market to enterprise | Enterprise |
| Pricing | Custom (typically $10K+/year) | Custom (typically $10K+/year) |
| Strength | Low-code visual builder for "citizen developers" | Enterprise-grade governance + AI copilots |
| Integrations | 600+ connectors | 1,000+ connectors |
| AI features | Merlin AI for building workflows | Copilot suite for building, mapping, documenting |
| Compliance | SOC2, GDPR, HIPAA | SOC2, GDPR, HIPAA, FedRAMP |
| GTM use | Marketing ops, sales ops, RevOps automation | Full GTM + finance + HR + IT automation |
| When to choose | Teams that need enterprise features but want accessible building | Organizations requiring full audit trails and enterprise compliance |

Platform Selection Decision Tree

START: What is your team's technical depth?
  |
  +-- Can write Python/JS, comfortable with APIs
  |     |
  |     +-- Need data sovereignty / self-hosting?
  |     |     +-- YES --> n8n (self-hosted)
  |     |     +-- NO --> Need enterprise compliance?
  |     |           +-- YES --> Workato or Tray.io
  |     |           +-- NO --> n8n (cloud) or Make
  |     |
  |     +-- Volume > 100K operations/month?
  |           +-- YES --> n8n (self-hosted) for cost efficiency
  |           +-- NO --> n8n (cloud) or Make
  |
  +-- Can do basic configuration, formulas, some JSON
  |     |
  |     +-- Complex branching/data transformation needed?
  |     |     +-- YES --> Make
  |     |     +-- NO --> Zapier or Make
  |     |
  |     +-- Budget-constrained?
  |           +-- YES --> Make (better price-to-value)
  |           +-- NO --> Zapier (fastest setup)
  |
  +-- Non-technical, needs point-and-click
        |
        +-- Simple trigger-action automations?
        |     +-- YES --> Zapier
        |     +-- NO (complex needs) --> Hire a GTM engineer
        |
        +-- Need templates to start fast?
              +-- YES --> Zapier (7,000+ integrations, templates)
              +-- NO --> Make (better long-term value)
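The decision tree can also be encoded as a function. This sketch follows the main branches only and collapses some of the tree's alternatives into single recommendations:

```python
def pick_platform(technical_depth: str, self_hosting: bool = False,
                  enterprise_compliance: bool = False,
                  high_volume: bool = False,
                  complex_branching: bool = False) -> str:
    """Simplified encoding of the platform selection decision tree.

    technical_depth: "high" (writes Python/JS), "medium" (config,
    formulas, some JSON), or anything else for non-technical teams.
    """
    if technical_depth == "high":
        if self_hosting or high_volume:        # sovereignty or >100K ops/month
            return "n8n (self-hosted)"
        if enterprise_compliance:
            return "Workato or Tray.io"
        return "n8n (cloud) or Make"
    if technical_depth == "medium":
        return "Make" if complex_branching else "Zapier or Make"
    # non-technical, point-and-click
    return "Zapier" if not complex_branching else "hire a GTM engineer"
```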

For API-first stack design, data pipelines, GTM agents, event-driven architecture, monitoring, cost optimization, patterns, and internal tools, read references/implementation-guide.md.