Product-org-os deal-diligence-checklist
Archetype-parameterized M&A due diligence checklist with AI Target Addendum, value-stack alignment, and diligence-hook consumption.
git clone https://github.com/yohayetsion/product-org-os
T=$(mktemp -d) && git clone --depth=1 https://github.com/yohayetsion/product-org-os "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills-mirror/deal-diligence-checklist" ~/.claude/skills/yohayetsion-product-org-os-deal-diligence-checklist && rm -rf "$T"
Source: `skills-mirror/deal-diligence-checklist/SKILL.md` (`/deal-diligence-checklist`)
⚠️ Not legal, financial, or regulatory advice. This output is a drafting and triage aid generated by a product-organization skill, not counsel. No attorney-client relationship, investment advisory relationship, or regulatory-compliance relationship is created by its production or use. Jurisdiction-specific questions, contested matters, deal structure decisions, regulatory exposure assessment, and any decision with material legal, financial, or regulatory consequences require review by licensed attorneys, qualified financial advisors, and competent regulatory counsel in the relevant jurisdictions. Do not rely on this output as the sole basis for any deal-structuring, investment, integration, or regulatory decision.
Jurisdiction Assumed: {jurisdiction — default: target's home jurisdiction plus any jurisdictions in which the target derives material revenue, flagged explicitly in output}. If your jurisdiction differs, treat every finding below as a hypothesis to verify with local counsel.
1. Archetype Parameterization — Why One Skill, Not Five
An M&A due diligence checklist is not a static document. A strategic bolt-on and a financial buyout share very little diligence DNA: the bolt-on cares about capability absorption, the buyout cares about cost base and working capital release. An AI-first talent acquisition cares about training data consent flows, which a distressed turnaround cares about approximately not at all. The temptation is to produce five separate checklists. That temptation is wrong. Five separate checklists means five maintenance burdens, five drift surfaces, five opportunities for the hook ordering to diverge, and five places for the AI Target Addendum to get out of sync when the EU AI Act's next implementing regulation lands. The right structural choice is one skill parameterized by
`deal_archetype`, sharing a single hook bank, a single layer framework, a single set of anti-patterns, and a single scaffolding discipline — with archetype-driven reweighting of hooks, workstreams, and layer emphasis as the output shape the user sees.
The `--archetype` input drives three transformations. First, it reweights the five EthosData layers from `ma-value-stack` so that the dominant layers for the archetype are foregrounded and the non-dominant layers are sent to a supplementary section. Second, it reorders the 25 base hooks from `m-and-a-playbooks` Section 7 into P0 (deal-killing — must answer before term sheet), P1 (material — must answer before close), and P2 (nice-to-have — post-close 100-day plan) using the hook-by-archetype weighting matrix in that pack. Third, when `--archetype ai-first-target` or `--sector ai` is set, it triggers the AI Target Addendum module (Section 11), which adds roughly 30 AI-specific questions covering the EU AI Act, NIST AI RMF, FTC Section 5, OECD AI Principles, and the IP/data/talent retention questions that dominate AI-first deal risk. The archetype is the organizing principle; everything else in the skill is a reweighting of shared content against that principle.
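The "one skill, parameterized" shape can be sketched as a single configuration table keyed by archetype. This is an illustrative sketch, not the skill's actual implementation: the names `ArchetypeConfig` and `ARCHETYPES` are assumptions, while the layer tuples and addendum trigger follow the Step 3 table and the Section 11 trigger rule.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ArchetypeConfig:
    dominant_layers: tuple      # EthosData layers foregrounded in output
    secondary_layers: tuple     # sent to the supplementary section
    triggers_ai_addendum: bool  # AI Target Addendum (Section 11)

# One shared bank, five reweightings — values per the Step 3 table.
ARCHETYPES = {
    "strategic-bolt-on":  ArchetypeConfig((2, 3, 4), (1, 5), False),
    "strategic-platform": ArchetypeConfig((3, 5), (2, 4), False),
    "financial-buyout":   ArchetypeConfig((1, 5), (2,), False),
    "ai-first-target":    ArchetypeConfig((3, 4, 5), (), True),
    "distressed":         ArchetypeConfig((1, 2), (), False),
}
```

Maintaining this single table is the point: one drift surface instead of five separate checklists.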
2. Purpose
Produce a tailored, archetype-aware due diligence question set that: (a) mechanically consumes the 25 base hooks from
m-and-a-playbooks v1.1.0 Section 7 with archetype-driven weighting, (b) foregrounds the dominant layers from ma-value-stack v1.0.0 for the selected archetype, (c) triggers domain-specific addenda (AI Target, sector modules, jurisdiction modules) on demand, and (d) identifies team capability gaps that could invalidate the diligence itself (e.g., no AI architect on an AI-first deal). The output is a structured list of questions with evidence type, responsibility, and completion milestone — not an analysis, not a recommendation, and not a substitute for substantive diligence.
3. When to Use
- New deal diligence: Term sheet is being drafted, LOI is not yet signed, need a structured question set to run substantive diligence against.
- Refresh mid-diligence: Diligence is running, a new workstream is surfacing issues, and the checklist needs to be re-weighted (e.g., capability transfer risk emerged, need to add P0 questions).
- Post-LOI deep dive: LOI signed, substantive diligence window open, need an archetype-specific question bank to drive the virtual data room review.
- Pre-term-sheet risk inventory: Pre-LOI discovery work, need to know what questions will dominate before committing to a price.
- Post-acquisition health check: Deal closed, 90 or 180 days in, running a retrospective against what diligence should have caught.
4. When NOT to Use
- Regulatory compliance audit — use `/compliance-audit` for a standalone framework-fit check, or `/ai-regulatory-audit` for AI-specific regulatory posture (regulatory audit is a different workflow; this skill generates diligence questions, not compliance assessments).
- Technical AI control audit — use `/ai-control-audit` (C1.2a) to collect evidence against the six AI control categories; this skill references that evidence bank but does not produce it.
- Risk-across-deals strategy — use `/risk-analysis` for portfolio-level or cross-deal risk landscape.
- Integration playbook authoring — use `m-and-a-playbooks` directly as a knowledge pack; that is the post-close artifact, not a skill output.
- Financial modeling — the skill flags where financial modeling is required but does not produce models; hand off to `@fpa-analyst` or `@revenue-analyst` for modeling work.
- Deal strategy recommendation — this skill does not recommend whether to pursue a deal; it produces the question set that informs that recommendation.
5. Required Inputs
Required:
- `--archetype {strategic-bolt-on | strategic-platform | financial-buyout | ai-first-target | distressed}` — exactly one of the five from `ma-value-stack` Section 5.
- `--target NAME` — target company name for attribution in the output.

Optional (but high-signal):
- `--jurisdiction LIST` — comma-separated list (e.g., `US-DE,US-NY,EU,UK`). Activates jurisdictional modules. Default: target's home jurisdiction, flagged.
- `--sector NAME` — target's primary sector (e.g., `financial-services`, `health`, `consumer`, `b2b-saas`, `ai`, `industrial`). Activates sector modules. Triggers AI Target Addendum when `ai`.
- `--prior-diligence FILE` — path to a prior diligence document; switches the skill to refresh mode (surfaces what changed, what is stale, what is new).
- `--team-composition LIST` — comma-separated list of deal team roles (e.g., `ma-analyst,bizops,contracts-counsel`). Enables team capability gap check. If the target is AI-first but `ai-architect` is absent → P0 gap.
- `--deal-size SIZE` — optional deal size bucket (`small`, `mid-market`, `large`, `mega`). Used only to flag scale-sensitive questions; no numeric estimates produced.

Non-inputs (explicitly): the skill does not consume financial projections, revenue numbers, ARR, EV, synergy targets, or any other numeric value that would require fabrication under `no-estimates.md`. The checklist produces questions about those numbers; it does not produce the numbers.
6. Method
Step 1 — Validate archetype and load configuration
Verify
--archetype is one of the five valid values. If invalid, fail with error naming the five valid values. If valid, load the archetype-specific configuration: dominant layers (from Section 5 of ma-value-stack), lever weighting (from Section 4 of m-and-a-playbooks), archetype-specific pitfalls (Section 4 of m-and-a-playbooks), and integration pattern.
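Step 1 is a fail-fast validation. A minimal sketch, assuming a Python implementation; the function name `validate_archetype` is illustrative:

```python
VALID_ARCHETYPES = {"strategic-bolt-on", "strategic-platform",
                    "financial-buyout", "ai-first-target", "distressed"}

def validate_archetype(value: str) -> str:
    # Per Step 1: if invalid, fail with an error naming the five valid values.
    if value not in VALID_ARCHETYPES:
        raise ValueError(
            f"invalid --archetype {value!r}; must be one of: "
            + ", ".join(sorted(VALID_ARCHETYPES)))
    return value
```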
Step 2 — Load and weight diligence hooks
Load all 25 base hooks from
m-and-a-playbooks Section 7.1 Hook Index (Q(L1-1) through Q(L5-5)). For each hook, read the "Archetype Weighting" column in Section 7.1 and compute P0/P1/P2 classification for the selected archetype using the following rule:
- P0 (deal-killing, must answer before term sheet) — hook is named as "dominant" or "critical" for this archetype in the playbook's hook-by-archetype matrix, OR hook is "universal" AND touches a layer foregrounded for this archetype.
- P1 (material, must answer before close) — hook is "important" for this archetype, OR hook is "universal" but does not touch a foregrounded layer.
- P2 (nice-to-have, post-close 100-day plan) — hook is "secondary," "low," or "minimal" for this archetype, OR hook is not explicitly weighted for this archetype in the matrix.
Each archetype produces a different P0/P1/P2 split of the same 25 hooks. Do not add new hooks at this stage; the hook bank is the bank.
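The Step 2 rule can be sketched as a small classifier. This is one reading of the rule, assuming the playbook's weighting labels arrive as lowercase strings ("critical" is resolved to P0, per the P0 clause):

```python
def classify_hook(weight: str, touches_foregrounded: bool) -> str:
    # weight: label from the hook-by-archetype matrix for this archetype.
    # touches_foregrounded: does the hook touch a layer foregrounded
    # for this archetype (Step 3)?
    if weight in ("dominant", "critical"):
        return "P0"
    if weight == "universal":
        return "P0" if touches_foregrounded else "P1"
    if weight == "important":
        return "P1"
    # "secondary", "low", "minimal", or not explicitly weighted.
    return "P2"
```

Running the same 25 hooks through this function with a different archetype's labels yields a different P0/P1/P2 split, which is exactly the "same bank, different weighting" discipline.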
Step 3 — Apply layer weighting from ma-value-stack
For the selected archetype, foreground the dominant layers per `ma-value-stack` Section 5:
| Archetype | Dominant Layers | Secondary | Deprecated in output ordering |
|---|---|---|---|
| Strategic bolt-on | Layer 2, 3, 4 | Layer 1, 5 | — |
| Strategic platform | Layer 3, 5 | Layer 2, 4 | Layer 1 cost synergies (anti-pattern in this archetype) |
| Financial buyout | Layer 1, 5 | Layer 2 | Layer 3, 4 to diligence minimum |
| AI-first target | Layer 3, 4, 5 | — | Layer 1 (anti-pattern — D1 destructive zone) |
| Distressed | Layer 1, 2 | — | Layer 3, 4, 5 to diligence minimum |
The output sections are ordered so that dominant-layer questions appear first. Non-dominant layers are sent to a "supplementary diligence" section with a note explaining why the layer is de-emphasized for this archetype.
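The ordering rule above can be sketched as a partition-and-sort. A hypothetical sketch (`order_sections` and the dict shape are assumptions): dominant-layer questions come first in the archetype's layer order, everything else goes to the supplementary tail.

```python
def order_sections(questions, dominant_layers):
    # questions: dicts with a "layer" key; dominant_layers: tuple from
    # the Step 3 table, in emphasis order.
    rank = {layer: i for i, layer in enumerate(dominant_layers)}
    primary = sorted((q for q in questions if q["layer"] in rank),
                     key=lambda q: rank[q["layer"]])
    supplementary = [q for q in questions if q["layer"] not in rank]
    return primary, supplementary
```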
Step 4 — Apply sector filter
If
--sector is provided, activate the sector module (Section 12). Sector modules add 8-15 sector-specific questions on top of the base hook bank. The ai sector automatically triggers the AI Target Addendum (Step 6) even if the archetype is not ai-first-target.
Step 5 — Apply jurisdiction filter
If
--jurisdiction is provided, activate jurisdiction modules (Section 13) via pointer to compliance-frameworks v1.1.0. Jurisdictions add 5-10 questions each on enforceability of deal structure, employment law continuity, regulatory change-of-control notification, and local tax structure. The skill does NOT opine on the questions; it surfaces them.
Step 6 — Trigger AI Target Addendum
If
--archetype ai-first-target OR --sector ai, activate the AI Target Addendum (Section 11). The addendum adds ~30 AI-specific questions covering EU AI Act, NIST AI RMF, FTC Section 5, OECD AI Principles, IP/data/talent, and regulatory relationships. The addendum is non-optional for either trigger; it cannot be suppressed.
Step 7 — Team capability gap check
If
--team-composition is provided, run the following gap matrix:
| Archetype or signal | Required roles | Gap severity if missing |
|---|---|---|
| Any deal | [TBD], [TBD], [TBD] | P0 |
| [TBD] or [TBD] | [TBD] or [TBD] | P1 |
| [TBD] | [TBD], [TBD], [TBD] | P0 |
| `--archetype ai-first-target` or `--sector ai` | `@ai-architect`, [TBD] | P0 (hard-flag — without an AI technical reviewer the diligence itself is unreliable) |
| Regulated sector | [TBD], [TBD] | P1 |
| Cross-border | [TBD], [TBD] | P1 |
Flag gaps in a dedicated output section with the verdict "Diligence may be unreliable without [role]; recommend adding before advancing to [milestone]." The gap is a diligence-process risk, not a deal risk; it tells the deal team whose review should happen before the checklist output is acted upon.
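The one gap rule the text states outright (Section 5: AI-first target without `ai-architect` → P0 hard-flag) can be sketched as follows; `check_team_gaps` and the tuple shape are illustrative, and the full matrix would add the remaining rows:

```python
def check_team_gaps(archetype, sector, team):
    # team: role names as passed to --team-composition.
    gaps = []
    if (archetype == "ai-first-target" or sector == "ai") \
            and "ai-architect" not in team:
        gaps.append((
            "ai-architect", "P0",
            "Diligence may be unreliable without ai-architect; "
            "recommend adding before advancing to pre-LOI gate."))
    return gaps
```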
Step 8 — Generate final checklist output
Produce the structured output (Section 7). Every question includes: Hook ID (or addendum ID), Question text, Evidence type (document / data / interview / observation), Responsibility (deal-team role), Target completion milestone (pre-LOI / post-LOI / pre-close / post-close 30 / post-close 100), Source pointer (
m-and-a-playbooks section or addendum name).
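The per-question record in Step 8 maps naturally onto a small data type. A sketch, assuming Python; the class name `ChecklistQuestion` is an assumption, while the fields mirror the list above:

```python
from dataclasses import dataclass

@dataclass
class ChecklistQuestion:
    hook_id: str         # e.g. "Q(L3-2)" or an addendum ID like "AI-EU-1"
    question: str
    evidence_type: str   # document / data / interview / observation
    responsibility: str  # deal-team role, e.g. "@ma-analyst"
    milestone: str       # pre-LOI / post-LOI / pre-close / post-close 30 / post-close 100
    source: str          # m-and-a-playbooks section or addendum name
```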
7. Output Structure
Every run produces the following sections in order. Empty sections are rendered with an explicit "Not applicable for this archetype" note rather than being silently dropped.
7.1 Header
- Target name
- Archetype (single, validated value)
- Sector (if provided)
- Jurisdiction list (if provided; otherwise "target home jurisdiction assumed", flagged)
- Team composition (if provided)
- Mode (new / refresh)
- Date of run
7.2 Dominant Layers Foregrounded
Per the table in Step 3. Name the layers the checklist emphasizes and the layers it de-emphasizes, with a one-sentence rationale per layer citing
ma-value-stack Section 5.
7.3 P0 Questions — Deal-killing (must answer before term sheet)
Numbered list. Each question:
- Hook ID / Addendum ID
- Question
- Why P0 for this archetype: one-sentence rationale citing the playbook's weighting or the layer foregrounding
- Evidence type: document / data / interview / observation
- Responsibility: deal-team role (e.g., `@ma-analyst`, `@contracts-counsel`, `@fpa-analyst`)
- Target milestone: pre-LOI / post-LOI / pre-close
- Source: `m-and-a-playbooks` Section 3.X or addendum name
7.4 P1 Questions — Material (must answer before close)
Same structure as P0.
7.5 P2 Questions — Nice-to-have (post-close 100-day plan)
Same structure as P0.
7.6 AI Target Addendum (if triggered)
Full addendum from Section 11 — ~30 questions across six sub-sections.
7.7 Sector Module (if triggered)
Full sector module from Section 12.
7.8 Jurisdiction Module (if triggered)
Pointer to `compliance-frameworks` jurisdiction matrix plus 5-10 deal-specific questions per jurisdiction.
7.9 Team Capability Gaps (if any)
Named gaps, severity, and the verdict "Diligence unreliable without [role]; recommend adding before [milestone]."
7.10 Evidence Request List
A consolidated list of what the target must provide in the data room to answer the questions in 7.3 through 7.8. Cross-referenced by hook ID so the data room administrator can mechanically check completeness.
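The consolidation in 7.10 is a simple group-by. An illustrative sketch (the function name and dict shape are assumptions): evidence items map to the hook IDs that need them, so the data room administrator can tick off completeness mechanically.

```python
from collections import defaultdict

def evidence_request_list(questions):
    # questions: dicts with "evidence" (data-room item) and "hook_id".
    requests = defaultdict(list)
    for q in questions:
        requests[q["evidence"]].append(q["hook_id"])
    return dict(requests)
```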
7.11 Diligence Schedule
Milestone-based schedule:
| Milestone | Questions to close by this milestone | Lead |
|---|---|---|
| Pre-LOI | P0 questions | Deal lead |
| Post-LOI week 1 | Evidence request sent; P1 questions scoped | @ma-analyst |
| Post-LOI weeks 2-4 | P0 answered; P1 in-flight | @ma-analyst |
| Pre-close | P1 answered; P2 scoped | Deal lead |
| Post-close day 30 | P2 closed or handed to integration | IMO lead |
The schedule is a pattern, not a commitment; real timelines depend on data room velocity, counterparty responsiveness, and discovered issues.
7.12 Cross-reference to m-and-a-playbooks Workstreams
For each of the eleven post-close integration workstreams in `m-and-a-playbooks` Section 2, name the hooks from this checklist whose answers feed into the workstream. This is the diligence-to-integration bridge; it lets the IMO lead pick up the checklist on Day 1 and carry the answers into the integration plan without re-discovery.
8. Quality Gates
Before output is returned, verify:
- Archetype valid — `--archetype` is one of the five declared values; otherwise fail.
- All 25 base hooks consumed — every Q(LX-N) from `m-and-a-playbooks` Section 7.1 appears in P0, P1, or P2 (or is explicitly noted as "suppressed for this archetype" with rationale; suppression is rare and must be justified).
- AI Target Addendum triggered correctly — addendum is present if `--archetype ai-first-target` OR `--sector ai`; absent otherwise.
- P0 rationale present — every P0 question has a one-sentence justification naming the archetype weighting or foregrounded layer.
- Evidence type named — every question has a document/data/interview/observation tag.
- Responsibility assigned — every question has a deal-team role.
- Team capability gap check completed — if `--team-composition` provided, gaps are flagged; if not provided, the output notes "Team composition not provided; capability gap check skipped — user should re-run with `--team-composition` before advancing to pre-LOI gate."
- Sector and jurisdiction filters applied — if provided, modules are rendered.
- Cross-reference to workstreams present — Section 7.12 is non-empty.
- No fabricated numbers — no deal size, ARR, synergy dollar amount, headcount, multiple, or timeline day count is invented. Any number in the output is either user-provided (via `--deal-size` or the prior diligence file) or a `[TBD]` placeholder.
Gate failures are not warnings; they block output. The skill returns a "gate failed" error with the specific gate and the fix.
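The blocking behavior can be sketched as a gate runner that raises on the first failure rather than collecting warnings. `run_gates` and the gate tuple shape are assumptions for illustration:

```python
def run_gates(gates, output):
    # gates: (name, check, fix) triples; check is a predicate on output.
    # Failures block output and name the specific gate and the fix.
    for name, check, fix in gates:
        if not check(output):
            raise RuntimeError(f"gate failed: {name} — fix: {fix}")
    return output
```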
9. Findings Section Template
Once the skill has run, a reviewer adds findings to the output in this shape:
Finding {N}
- What: {specific gap, risk, or open question surfaced by the diligence run}
- Why it matters: {the risk or implication for the deal}
- Severity: P0 / P1 / P2
- Suggested next step: {specific reviewer action — e.g., "engage @ai-architect to run /ai-control-audit," "escalate to @general-counsel for change-of-control contract review"}
Findings are produced by the deal team and reviewer, not by the skill. The skill produces the question set; the findings emerge from running the question set against the target.
10. Reviewer Checklist
Before the diligence output is acted upon, the reviewer confirms:
- Archetype classification is correct (re-read `ma-value-stack` Section 5 if uncertain)
- Jurisdiction is confirmed and non-default; if default, reviewer has named the jurisdictions to verify
- Material facts about the target (sector, size, geography) verified against at least one independent source
- Team capability gaps addressed or explicitly accepted-with-risk by the deal lead
- All P0 findings addressed or explicitly accepted-with-risk before term sheet
- Counsel engaged for all items flagged in "Cannot Assess Without" (Section 14)
- AI Target Addendum reviewed by `@ai-architect` if triggered (C1.2a cross-reference)
- Cross-reference to `m-and-a-playbooks` workstreams confirmed before hand-off to IMO
11. AI Target Addendum
Triggered by: `--archetype ai-first-target` OR `--sector ai`.
Purpose: To surface the additional ~30 questions that dominate AI-first deal risk and that the base 25 hooks do not cover. The addendum is organized around six real public frameworks plus IP/data/talent and regulatory relationship questions. Framework citations are pointer-only — the skill does not paraphrase framework text. Wherever the skill names an enforcement action, it does so only by citation to the regulator's public record; this skill does not fabricate enforcement precedent.
Cross-references:
- `/ai-control-audit` (C1.2a) — for evidence collection against the six AI control categories
- `/ai-regulatory-audit` (C1.2b) — for regulatory posture assessment across jurisdictions
- `compliance-frameworks` v1.1.0 — for framework-to-jurisdiction mapping
11.1 EU AI Act Questions (Regulation (EU) 2024/1689)
Real framework; in force across the EU. The skill does not paraphrase the Act's text; these questions point to Articles the deal team must consult with counsel.
- AI-EU-1 (Art. 6 — High-risk classification): Does the target's AI system fall under Annex III high-risk categories (biometric identification, critical infrastructure, employment, credit scoring, law enforcement, migration, administration of justice)? Evidence: product documentation, customer use cases, target's own Art. 6 self-assessment if any. Responsibility: `@ai-architect` + `@compliance-officer`. Milestone: pre-LOI.
- AI-EU-2 (Art. 9 — Risk management system): Does the target operate a continuous risk management system across the AI lifecycle? Evidence: risk register, risk management SOP, review cadence records. Responsibility: `@ai-architect`. Milestone: post-LOI.
- AI-EU-3 (Art. 10 — Data governance): What are the target's training-data provenance, data quality management, bias testing, and data-minimization practices? Evidence: data governance documentation, bias test reports, data provenance logs. Responsibility: `@ai-architect` + `@privacy-counsel`. Milestone: post-LOI.
- AI-EU-4 (Art. 11 + Annex IV — Technical documentation): Does the target maintain the technical documentation package required for high-risk systems? Evidence: technical documentation package; gap analysis if incomplete. Responsibility: `@ai-architect`. Milestone: post-LOI.
- AI-EU-5 (Art. 12 — Logging): Are automatic event logs preserved, and are log retention periods aligned with regulatory requirements? Evidence: logging architecture diagram, retention policy, sample logs. Responsibility: `@ai-architect`. Milestone: post-LOI.
- AI-EU-6 (Art. 13 — Transparency and information to deployers): How does the target provide transparent information to deployers on system capabilities, limitations, and intended purpose? Evidence: deployer-facing documentation, model cards, system cards. Responsibility: `@ai-architect` + `@contracts-counsel`. Milestone: pre-close.
- AI-EU-7 (Art. 14 — Human oversight): What human-oversight mechanisms does the target implement, and are they effective for the system's risk class? Evidence: oversight design documentation, oversight-operator training records, review-rate metrics. Responsibility: `@ai-architect`. Milestone: post-LOI.
- AI-EU-8 (Art. 15 — Accuracy, robustness, and cybersecurity): How does the target demonstrate accuracy, robustness, and cybersecurity of its AI systems against the Art. 15 standard? Evidence: accuracy metrics, robustness test reports, security test results, incident log. Responsibility: `@ai-architect` + `@security-architect`. Milestone: post-LOI.
- AI-EU-9 (Art. 53 — GPAI model obligations, where applicable): If the target develops or uses general-purpose AI models, has the target complied with the GPAI provider obligations (Arts. 53-55)? Evidence: GPAI compliance pack if applicable. Responsibility: `@ai-architect` + `@compliance-officer`. Milestone: pre-close.
- AI-EU-10 (Conformity assessment and CE marking): Has the target completed conformity assessment procedures where required, and is the system CE-marked? Evidence: conformity assessment records, notified body involvement if required, CE marking documentation. Responsibility: `@compliance-officer`. Milestone: pre-close.
11.2 NIST AI RMF Questions (NIST AI 100-1)
Real public framework; voluntary in the US but adopted as a baseline by many regulators. Four function blocks: Govern, Map, Measure, Manage.
- AI-NIST-1 (Govern): Does the target have a named AI governance structure with documented roles, responsibilities, and decision rights? Evidence: governance charter, RACI, decision log. Responsibility: `@ai-architect`. Milestone: post-LOI.
- AI-NIST-2 (Govern): Does the target have a documented AI risk tolerance statement, and how is it operationalized? Evidence: risk appetite statement, risk management policy. Responsibility: `@ai-architect`. Milestone: post-LOI.
- AI-NIST-3 (Map): Has the target mapped the context of each AI system's intended use, downstream users, and impacted populations? Evidence: context-mapping documentation per system, impact assessment. Responsibility: `@ai-architect`. Milestone: post-LOI.
- AI-NIST-4 (Measure): What metrics does the target track for AI performance, fairness, robustness, security, and explainability, and what is the measurement cadence? Evidence: metrics dashboard, measurement SOPs, review cadence. Responsibility: `@ai-architect`. Milestone: post-LOI.
- AI-NIST-5 (Measure): How does the target validate that metrics capture the real risk (as opposed to proxy metrics that under-measure)? Evidence: metric validation documentation, third-party assessment if any. Responsibility: `@ai-architect`. Milestone: pre-close.
- AI-NIST-6 (Manage): What incident response, model rollback, and post-deployment monitoring capabilities does the target operate? Evidence: incident response SOP, rollback capability demonstration, monitoring dashboard. Responsibility: `@ai-architect`. Milestone: post-LOI.
- AI-NIST-7 (Manage): How are risks prioritized, assigned owners, and tracked to resolution? Evidence: risk register with owner and status fields, risk review cadence. Responsibility: `@ai-architect`. Milestone: post-LOI.
11.3 FTC Section 5 Enforcement Questions
Real statutory authority (15 U.S.C. § 45) covering unfair or deceptive practices. The skill does NOT fabricate specific enforcement actions; it surfaces the risk categories the FTC has publicly articulated concern about.
- AI-FTC-1 (Unfairness — training data): How does the target source training data, and are there any risks of deceptive or unfair data collection practices that the FTC has publicly signaled concern about? Evidence: data sourcing documentation, consent records, data processing notices. Responsibility: `@privacy-counsel`. Milestone: post-LOI.
- AI-FTC-2 (Deceptive claims): What public claims does the target make about its AI system's capabilities, accuracy, or performance, and are these claims substantiated? Evidence: marketing materials, customer-facing documentation, substantiation records for performance claims. Responsibility: `@contracts-counsel` + `@ai-architect`. Milestone: pre-close.
- AI-FTC-3 (Algorithmic disgorgement risk): Is the target's training data derived from sources that could trigger algorithmic disgorgement under FTC remedial authority? Evidence: training data provenance documentation, FTC consent decree survey (if any). Responsibility: `@ip-counsel` + `@privacy-counsel`. Milestone: pre-LOI.
- AI-FTC-4 (Children / sensitive data): Does the target process data from children under 13 or other sensitive populations in ways that could trigger COPPA or Section 5 unfairness concerns? Evidence: age verification records, sensitive data inventory, COPPA compliance documentation. Responsibility: `@privacy-counsel`. Milestone: post-LOI.
- AI-FTC-5 (Prior enforcement contact): Has the target had any prior enforcement contact with the FTC, state attorneys general, or equivalent regulators? Evidence: disclosure under diligence representations, regulator correspondence. Responsibility: `@general-counsel`. Milestone: pre-LOI.
11.4 OECD AI Principles Questions
Real public framework adopted by 47+ countries. Five principles: inclusive growth, human-centered values, transparency, robustness/security, accountability.
- AI-OECD-1 (Inclusive growth, sustainable development, and well-being): How does the target consider inclusive growth, sustainable development, and well-being in its AI system design and deployment? Evidence: impact assessment, stakeholder engagement records. Responsibility: `@ai-architect`. Milestone: post-LOI.
- AI-OECD-2 (Human-centered values and fairness): How does the target demonstrate respect for human rights, democratic values, and fairness in its AI systems? Evidence: fairness testing, human rights impact assessment if any. Responsibility: `@ai-architect` + `@privacy-counsel`. Milestone: post-LOI.
- AI-OECD-3 (Transparency and explainability): How does the target provide transparency and explainability to AI system users? Evidence: model cards, explainability documentation, user-facing notices. Responsibility: `@ai-architect`. Milestone: post-LOI.
- AI-OECD-4 (Robustness, security, and safety): How does the target demonstrate system robustness, security, and safety, and how does it respond to identified vulnerabilities? Evidence: security assessment, vulnerability management records, incident history. Responsibility: `@security-architect`. Milestone: post-LOI.
- AI-OECD-5 (Accountability): Who is accountable for each AI system's performance and outcomes, and how is accountability operationalized in the target's governance? Evidence: accountability matrix, governance charter. Responsibility: `@ai-architect`. Milestone: pre-close.
11.5 IP, Data, and Talent Retention Questions
These questions are load-bearing for AI-first deals because the value sits in three places: the model weights, the training data, and the people who produced both. All three must transfer or the deal is a write-off (see
ma-value-stack Cell [E3] and m-and-a-playbooks Section 4.4).
- AI-IP-1 (Model weights ownership): Who owns the model weights, and is ownership clear from assignment agreements covering every contributor (employees, contractors, open-source usage)? Evidence: IP assignment documents, contributor inventory, OSS usage audit. Responsibility: `@ip-counsel`. Milestone: pre-LOI.
- AI-IP-2 (Training data provenance): Is the provenance of every material training dataset documented, licensed, and free of open legal questions? Evidence: data provenance log, license documentation per dataset, legal opinion on any open questions. Responsibility: `@ip-counsel` + `@privacy-counsel`. Milestone: post-LOI.
- AI-IP-3 (Change-of-control consent flows for data): Do the target's data use consents survive change of control, or will a subset of data become unusable post-close? Evidence: consent terms review, change-of-control clause analysis, data continuity estimate. Responsibility: `@privacy-counsel`. Milestone: pre-close.
- AI-TALENT-1 (Key person retention — researchers): Which researchers, engineers, and scientists are load-bearing to the target's model development, and what is the retention mechanism for each? Evidence: named key-person list, retention agreement terms, tenure distribution of research team. Responsibility: `@ma-analyst` + `@chro`. Milestone: pre-LOI.
- AI-TALENT-2 (Tenure distribution and institutional knowledge concentration): What is the tenure distribution of the research team, and how concentrated is institutional knowledge in the longest-tenured researchers? Evidence: tenure data, knowledge-concentration analysis (e.g., code ownership, paper authorship). Responsibility: `@ma-analyst`. Milestone: post-LOI.
- AI-TALENT-3 (Non-compete and post-employment restrictions): Are key researchers subject to enforceable non-compete and non-solicit clauses in their jurisdiction? Evidence: employment contracts, jurisdictional enforceability assessment. Responsibility: `@employment-counsel`. Milestone: post-LOI.
11.6 Regulatory Relationships Questions
- AI-REG-1 (Existing regulator contact): Does the target have existing relationships with AI regulators, and what is the nature of those relationships (informal dialogue, formal review, sandbox participation)? Evidence: regulator correspondence, sandbox documentation if any. Responsibility: `@compliance-officer` + `@general-counsel`. Milestone: pre-LOI.
- AI-REG-2 (Sandbox participation): Is the target participating in any regulatory sandbox programs, and what are the transfer implications of change of control? Evidence: sandbox agreement, transfer clause review. Responsibility: `@compliance-officer`. Milestone: post-LOI.
- AI-REG-3 (Prior enforcement or investigation): Has the target been subject to any prior regulatory enforcement action, investigation, or inquiry related to its AI systems? Evidence: disclosure under diligence representations. Responsibility: `@general-counsel`. Milestone: pre-LOI.
Addendum total: 36 questions when fully expanded, distributed across the EU AI Act (10), NIST AI RMF (7), FTC (5), OECD (5), IP/data/talent (6), and regulatory relationships (3) categories. The addendum is in addition to the 25 base hooks; an AI-first deal diligence therefore runs against ~61 questions, not 25.
Cross-reference to `/ai-control-audit`: Questions AI-EU-2 through AI-EU-8 and AI-NIST-1 through AI-NIST-7 can be answered by running `/ai-control-audit` against the target, which produces a structured evidence collection against the six AI control categories (Governance, Data, Model, Deployment, Monitoring, Incident Response). The diligence team should spawn `/ai-control-audit` as a sub-deliverable post-LOI with the target's cooperation.
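The addendum arithmetic stated above can be sanity-checked mechanically; the `counts` dict below simply restates the per-category totals from the text:

```python
# Per-category question counts from the addendum total.
counts = {"EU AI Act": 10, "NIST AI RMF": 7, "FTC Section 5": 5,
          "OECD AI Principles": 5, "IP/data/talent": 6,
          "regulatory relationships": 3}

assert sum(counts.values()) == 36           # addendum total
assert 25 + sum(counts.values()) == 61      # base hooks + addendum
```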
12. Sector Modules
Sector modules add 8-15 questions. The skill does NOT opine on the answers; it surfaces the questions. All framework references are pointer-only.
12.1 Financial Services
Jurisdictionally gated: US → FDIC, OCC, Federal Reserve + state regulators. UK → FCA, PRA. EU → ECB, EBA, national competent authorities. Israel → Bank of Israel, ISA. Singapore → MAS.
Questions (up to 12):
- FS-1: Bank-ownership / critical-service-provider change-of-control notification requirements in target jurisdictions?
- FS-2: Capital adequacy continuity post-close (if target is a regulated entity)?
- FS-3: AML/KYC program continuity and any open enforcement matters?
- FS-4: Customer data portability and data residency requirements?
- FS-5: Outsourcing / third-party risk management obligations (e.g., EBA Guidelines on outsourcing)?
- FS-6: Operational resilience framework (e.g., UK PRA operational resilience, EU DORA)?
- FS-7: Cross-border data flow restrictions (e.g., bank secrecy, data localization)?
- FS-8: Prior regulator correspondence, enforcement actions, or consent orders?
- FS-9: Financial crime exposure and SAR filing history?
- FS-10: Fiduciary duty and conflict-of-interest disclosures in advisory arms?
- FS-11: Consumer protection / fair lending / UDAAP exposure (US)?
- FS-12: Prudential stress-test participation and outcomes?
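The jurisdictional gating in 12.1 can be sketched as a routing table. Regulator names are taken from the gating line above; the lookup function itself is a hypothetical illustration, not the skill's API:

```python
# Hypothetical routing table for the Financial Services module's
# jurisdictional gate. Regulator names mirror Section 12.1; the
# lookup helper is illustrative only.
FS_REGULATORS = {
    "US": ["FDIC", "OCC", "Federal Reserve", "state regulators"],
    "UK": ["FCA", "PRA"],
    "EU": ["ECB", "EBA", "national competent authorities"],
    "IL": ["Bank of Israel", "ISA"],
    "SG": ["MAS"],
}

def regulators_for(jurisdictions):
    """Collect regulators across jurisdictions, deduplicated, order-preserving."""
    seen, out = set(), []
    for j in jurisdictions:
        for reg in FS_REGULATORS.get(j, []):
            if reg not in seen:
                seen.add(reg)
                out.append(reg)
    return out

print(regulators_for(["US", "UK"]))
```

A cross-border target simply unions the rows for each jurisdiction it touches, which is why the gate is data, not prose: adding a jurisdiction is a one-line table edit.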
12.2 Health / Healthcare
Questions (up to 10):
- H-1: HIPAA covered-entity or business-associate status; BAA inventory?
- H-2: HITECH and state health privacy law compliance (e.g., NY SHIELD, CA CMIA)?
- H-3: FDA regulatory status for any software-as-medical-device components (510(k), De Novo, PMA)?
- H-4: Clinical data provenance, IRB approvals, informed consent flows?
- H-5: Stark Law / Anti-Kickback exposure in any business model?
- H-6: Medicare / Medicaid enrollment status and any prior revocation risk?
- H-7: Clinical decision-support exemption analysis if applicable?
- H-8: Breach history under HIPAA breach notification rule?
- H-9: Change-of-control notification to CMS or state licensure boards?
- H-10: Cybersecurity incident disclosure obligations?
12.3 Consumer
Questions (up to 10):
- C-1: FTC Section 5 unfair-or-deceptive exposure beyond AI (see AI Target Addendum)?
- C-2: State AG consumer-protection enforcement history?
- C-3: CAN-SPAM / TCPA / telemarketing compliance posture?
- C-4: Product liability exposure and insurance coverage?
- C-5: Advertising substantiation practices?
- C-6: Subscription disclosure compliance (ROSCA, state auto-renewal laws)?
- C-7: Returns, refunds, and customer remedies policy continuity?
- C-8: Class action exposure and history?
- C-9: Loyalty program / gift card / stored-value compliance?
- C-10: Consumer data privacy (CCPA / CPRA / state privacy laws)?
12.4 B2B SaaS
Questions (up to 10):
- SAAS-1: SOC 2 Type II attestation current? Gap analysis against any lapsed controls?
- SAAS-2: Customer data processing obligations (GDPR, CCPA); DPA inventory?
- SAAS-3: Sub-processor inventory and flow-down obligations?
- SAAS-4: Security incident history and customer notification records?
- SAAS-5: Uptime / SLA commitments and historical SLA breach record?
- SAAS-6: Customer contract change-of-control clauses; flagging clauses that give termination rights on close?
- SAAS-7: MRR / ARR churn and gross retention trends (see also Q(L2-5))?
- SAAS-8: Source-code escrow obligations to enterprise customers?
- SAAS-9: Open-source license usage audit (GPL, AGPL exposure)?
- SAAS-10: Data export and portability provisions for customers?
12.5 Industrial / Regulated Industry
Questions (up to 8):
- IND-1: Environmental compliance history and liabilities?
- IND-2: Occupational safety (OSHA, EU-OSHA, equivalent) enforcement record?
- IND-3: Permits and licenses requiring change-of-control notification or renewal?
- IND-4: Supplier concentration risk and key-supplier contracts?
- IND-5: Labor relations and union agreements; change-of-control implications?
- IND-6: Product recalls history and warranty exposure?
- IND-7: Export control and trade sanctions compliance?
- IND-8: Real estate environmental liabilities (Phase I, Phase II reports)?
13. Jurisdiction Modules
Jurisdiction modules are short (5-10 questions each) and route via pointer to the compliance-frameworks v1.1.0 jurisdiction matrix. The skill does not re-derive jurisdictional law; it points to the framework pack and adds deal-specific questions.
Default jurisdictions supported (via pointer):
- US — federal + state (employment, tax, FTC, state AG, state privacy)
- EU / EEA — GDPR, EU AI Act, DSA, DMA, national competent authorities
- UK — GDPR-UK, FCA/PRA (if regulated), CMA merger control threshold
- Canada — PIPEDA, provincial privacy, CSA if regulated
- Israel — PPL 2024, ISA if regulated, Bank of Israel if regulated
- Singapore — PDPA, MAS if regulated
- Australia — Privacy Act, APRA if regulated
- Japan — APPI, regulator-specific
- India — DPDPA 2023, sectoral regulators
Per-jurisdiction question pattern (5-10 questions):
- J-1: Merger control / antitrust filing threshold and timeline?
- J-2: Foreign direct investment / national security review (CFIUS, NSI Act, FDI screening)?
- J-3: Employment law continuity (TUPE in EU/UK, ARD equivalents)?
- J-4: Tax structure implications of change of control?
- J-5: Data residency and cross-border transfer obligations?
- J-6: Regulator change-of-control notification (sector-specific, routed via Section 12)?
- J-7: Jurisdiction-specific IP enforcement posture?
- J-8: Local governance / board composition / nominee director requirements?
- J-9: Currency controls or repatriation restrictions?
- J-10: Change-of-control clauses in material local contracts?
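Putting Sections 11 through 13 together, the size of an assembled checklist can be approximated with a toy composition function. The counts follow the text; the function, its parameters, and the 7-question-per-jurisdiction default are illustrative assumptions, not the skill's real interface:

```python
# Toy sketch of checklist assembly: base hooks, optional AI Target
# Addendum, plus sector and jurisdiction modules. All names and the
# per-jurisdiction default are illustrative assumptions.
def checklist_size(ai_first, sector_qs, jurisdictions, qs_per_jurisdiction=7):
    total = 25                      # base hook bank
    if ai_first:
        total += 36                 # AI Target Addendum (Section 11)
    total += sector_qs              # sector module adds 8-15 (Section 12)
    total += len(jurisdictions) * qs_per_jurisdiction  # 5-10 each (Section 13)
    return total

# An AI-first B2B SaaS deal spanning US and EU:
print(checklist_size(True, sector_qs=10, jurisdictions=["US", "EU"]))  # → 85
```

The point of the sketch is the shape, not the exact number: the modules compose additively, so an archetype flip or an extra jurisdiction changes the question set without touching any other module.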
14. Cannot Assess Without
This skill deliberately does NOT opine on the following. Items below require engagement with named specialists before any action is taken on the checklist output.
- Licensed counsel (deal structure) — stock vs asset deal, merger vs reverse merger, earn-out structure, indemnity caps, representations and warranties insurance, escrow mechanics. Engage @general-counsel + @contracts-counsel.
- Licensed counsel (regulatory exposure) — specific applicability of named regulations (EU AI Act, GDPR, HIPAA, FCRA, state privacy laws). Engage @compliance-officer + @privacy-counsel.
- Financial modeling — synergy quantification, valuation, IRR, multiple analysis, sensitivity, scenario modeling. This skill flags where modeling is needed; it does not produce models. Hand off to @fpa-analyst or @revenue-analyst.
- Technical AI control audit — evidence collection against the six AI control categories (Governance, Data, Model, Deployment, Monitoring, Incident Response). Use /ai-control-audit (C1.2a).
- Regulatory posture assessment — jurisdiction-by-jurisdiction posture against AI regulatory frameworks. Use /ai-regulatory-audit (C1.2b).
- Market intelligence and competitive context — positioning of the target in its market, competitive dynamics, alternative deals. Engage @ci or @market-researcher.
- Reference checks on target team — qualitative human capital assessment, founder-reputation checks, prior-venture outcome analysis. Engage @chro + external reference-check specialists.
- Sensitive counterparty data — customer contracts, supplier concentration, unit economics, cost structure, employee compensation. Disclosed only under NDA post-LOI; the checklist identifies the questions but cannot answer them pre-LOI.
- Jurisdiction-specific counsel — local law, local enforcement posture, local regulator relationships. Engage local counsel for every jurisdiction in the --jurisdiction list.
- Tax counsel — deal structure tax implications, transfer pricing, tax attribute preservation, change-of-law risk. Engage @tax-planning.
15. Related Skills and Hand-Off
15.1 Inputs (what this skill reads from)
- ma-value-stack v1.0.0 — 5x5 matrix, 5 archetypes, D1 destructive-interaction cell, layer definitions
- m-and-a-playbooks v1.1.0 — 25 hooks, 5 archetype modules, 11 integration workstreams, D1 destructive-interaction section
15.2 Packs consumed
- compliance-frameworks v1.1.0 — jurisdiction matrix and framework-to-jurisdiction mapping (for Section 13)
15.3 Cross-references (skills that consume this output or are consumed by it)
- /ai-control-audit — technical evidence collection against AI control categories (consumed by AI Target Addendum)
- /ai-regulatory-audit — regulatory posture assessment (consumed by AI Target Addendum + jurisdictional modules)
- /compliance-audit — framework-level compliance gap assessment (alternative workflow for non-deal compliance work)
- /risk-analysis — portfolio-level or cross-deal risk landscape (consumed for portfolio context)
- /contract-review — clause-by-clause review of specific contracts (consumed post-LOI for material contracts)
- /contract-stress-test — Pattern 5 Adversarial Review for near-final contract drafts (consumed pre-signing)
- /privacy-policy-audit — disclosure-sufficiency audit for privacy policies (consumed when target is consumer-facing)
- /nda-triage — NDA decision-tree triage (consumed pre-diligence-disclosure)
15.4 Hand-off to post-close
m-and-a-playbooks — eleven integration workstreams. Every question in this checklist maps to one or more workstreams via Section 7.12. The IMO lead picks up the checklist on Day 1 and carries forward the answers into the integration plan.
16. Pattern Constraints
- Sensitive: true — output scaffolding (disclaimer, Findings, Reviewer Checklist, Cannot Assess Without) is non-negotiable
- Delegation default: Pattern 1 Consultation — consult @general-counsel for deal structure framing, @compliance-officer for regulatory posture, @ai-architect for technical AI sanity, @privacy-counsel for data flow analysis, @ip-counsel for IP structure, @fpa-analyst for financial modeling scope, @tax-planning for tax structure scope. Integrate findings in first person ("I consulted @general-counsel who flagged...").
- Delegation escalation: Pattern 5 Adversarial Review optional on high-stakes deals — spawn a fresh-context adversarial reviewer (typically @general-counsel or @head-corpdev) to stress-test the checklist against the target's specific risk profile. Iteration cap: 2. Tiebreaker: @head-corpdev + @general-counsel.
- First-principles authoring — no content lifted from Bain, McKinsey, BCG, PwC, EY, Deloitte, KPMG, HBR, or any M&A advisory publication. Framework references (EU AI Act, NIST AI RMF, OECD AI Principles, FTC Section 5, EthosData, Bain 5-lever) are named as existence proofs only.
- ROI framing: "saved on drafting and triage" at a $300/hr M&A blended rate.
- No fabricated numbers — no deal size, ARR, synergy dollar amount, multiple, or timeline day count is invented. User-provided numbers are honored; all other numbers are [TBD] placeholders.
17. Birth Test Pointer
A birth test for this skill was run on 2026-04-11 against a synthetic AI-first target ("SmallAgentLab," 12-person agent infrastructure startup). The test exercises the AI Target Addendum (most load-bearing module), the team capability gap check (triggering a P0 gap for missing
@ai-architect), and the sector+jurisdiction modules (AI sector + US-DE + US-NY). Results: AXIA/Product/deal-diligence-checklist-birth-test-smallagentlab-2026-04-11.md.
18. ROI Display
After execution, display:
⏱️ ~[X] hrs saved on drafting and triage in [Y] min, [Z]k tkns ~$[C] cost, Value ~$[V]
Time saved baseline: a manually authored, archetype-tailored diligence checklist with AI addendum takes ~8-14 hours for a senior M&A analyst. Complexity multiplier: 1.5× for AI-first (addendum complexity), 1.0× for other archetypes. Blended rate for M&A work: $300/hr.
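The baseline arithmetic above can be sketched as follows. Only the 1.5×/1.0× multipliers and the $300/hr rate come from the text; the function and variable names are illustrative:

```python
# Sketch of the ROI display arithmetic: baseline hours scaled by the
# archetype complexity multiplier, valued at the $300/hr blended rate.
# Function name and defaults are illustrative, not the skill's API.
def roi_value(baseline_hours, ai_first, rate=300.0):
    multiplier = 1.5 if ai_first else 1.0  # AI-first carries the addendum
    hours_saved = baseline_hours * multiplier
    return hours_saved, hours_saved * rate

hours, value = roi_value(baseline_hours=10, ai_first=True)
print(f"~{hours:.0f} hrs saved, value ~${value:,.0f}")  # → ~15 hrs, ~$4,500
```

The [X]/[V] slots in the display line above map directly onto `hours_saved` and the dollar value; [Y], [Z], and [C] come from execution telemetry, not this formula.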
19. Framework References (Existence Proofs Only)
The frameworks below are named as pointers to real public sources. No content has been lifted. The skill references each by name so users can consult the source directly with counsel.
- EU AI Act — Regulation (EU) 2024/1689 on artificial intelligence
- NIST AI Risk Management Framework — NIST AI 100-1 (January 2023) + AI RMF Playbook
- OECD AI Principles — OECD/LEGAL/0449, adopted May 2019, updated May 2024
- FTC Section 5 — 15 U.S.C. § 45, unfair or deceptive acts or practices
- EthosData 5-layer value stack — referenced via ma-value-stack v1.0.0
- Bain 5-lever value creation — referenced via ma-value-stack v1.0.0 and m-and-a-playbooks v1.1.0
- GDPR / UK-GDPR / LGPD / PDPA / DPDPA / PPL — data protection frameworks (routed via compliance-frameworks)
- CFIUS, NSI Act, FDI screening — national security / FDI review frameworks (routed via jurisdiction modules)
Dated claims in this skill are marked "as of 2026-04-11." The core structure (archetype parameterization, 25-hook bank, addendum triggers, scaffolding gates) is evergreen; specific framework references evolve as regulations are updated.
End of skill.