Product-org-os ai-regulatory-audit

Organizational AI regulatory posture audit across all applicable frameworks simultaneously, producing a gap register and mitigation plan as a drafting and triage aid for counsel review.

Install

Source — clone the upstream repo:

git clone https://github.com/yohayetsion/product-org-os

Claude Code — install into ~/.claude/skills/:

T=$(mktemp -d) && git clone --depth=1 https://github.com/yohayetsion/product-org-os "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills-mirror/ai-regulatory-audit" ~/.claude/skills/yohayetsion-product-org-os-ai-regulatory-audit && rm -rf "$T"

Manifest: skills-mirror/ai-regulatory-audit/SKILL.md

/ai-regulatory-audit

⚠️ Not legal advice. This skill produces a drafting and triage aid generated by a product-organization compliance skill. Its output is NOT a regulatory posture attestation, NOT a certification, NOT a legal opinion, and NOT a substitute for licensed counsel. No attorney-client relationship is created by its production or use. Jurisdiction-specific questions, contested enforcement positions, and any decision with material legal or regulatory consequence require review by a licensed attorney in each relevant jurisdiction. Do not rely on this output as the sole basis for any AI governance, regulatory, or market-access decision.

Jurisdiction Assumed: declared in every audit output (Section: Applicability Determination). If your jurisdiction differs from the assumed list, every finding below must be re-verified with local counsel.


Purpose

/ai-regulatory-audit produces an organizational regulatory posture assessment for an AI system, business unit, or organization across multiple applicable AI governance frameworks simultaneously. For each framework that the input triggers (by jurisdiction or system characteristic), the skill identifies the in-scope obligations, looks up the technical controls that satisfy each obligation via the shared control-to-obligation mapping table in the compliance-frameworks knowledge pack, and builds a gap register, an overlap register, and a prioritized mitigation plan. The output is a drafting and triage aid for human review by the Compliance Officer plus licensed counsel in each jurisdiction.

The skill answers one question and one question only:

"Does this organization / AI system demonstrate responsible AI governance across the multiple frameworks that apply to it, regardless of whether any single framework is fully compliant?"

That is the posture question. It is not "are we compliant with GDPR?" (that is /compliance-audit, A4). It is not "are our technical controls working?" (that is /ai-control-audit, C1.2a). It is not "does our privacy policy disclose enough?" (that is /privacy-policy-audit, A3). It is not "what could hurt this deal?" (that is /risk-analysis, A5). Posture is the cross-framework, organization-level view: can we credibly say our AI governance program addresses what the frameworks that apply to us demand, and where are the material gaps?

What the skill IS: an analytical cross-framework pass that (a) determines which frameworks apply, (b) derives the in-scope obligations inside each, (c) maps those obligations to the organization's technical controls via the shared mapping table, (d) identifies gaps, overlaps, and remediation priority.

What the skill is NOT: a framework-specific compliance audit (A4), a technical control audit of a specific system (C1.2a), a privacy policy disclosure audit (A3), a deal-level risk landscape (A5), a legal opinion on regulator enforcement, a replacement for counsel review in any jurisdiction, a formal attestation, or a pass/fail grade.


Boundary Statement (Critical — Four-Sided)

The AI regulatory audit sits at a specific intersection and MUST NOT drift into adjacent skills' territory. Scope drift on a posture audit is especially dangerous because the output is cross-framework and the temptation to opine on any individual framework's compliance is strong. The boundary converts that temptation from invisible to blocking.

| Skill | Unit of Analysis | Primary Question | Framework Coverage |
|---|---|---|---|
| /compliance-audit (A4) | Org or initiative vs ONE named framework | "Are we compliant with THIS regulation?" | One at a time |
| /ai-control-audit (C1.2a) | One AI system's technical controls | "Do our 6-category technical controls exist and operate?" | Control-taxonomy view |
| /privacy-policy-audit (A3) | A public-facing policy document | "Does the policy disclose what the regs require?" | Disclosure-sufficiency view |
| /risk-analysis (A5) | Initiative or deal | "What could hurt us across six risk domains?" | Upstream risk landscape |
| /ai-regulatory-audit (C1.2b) | An organization or AI system | "Is our AI governance posture defensible across ALL applicable frameworks at once?" | Multi-framework simultaneously |

C1.2b is bounded on four sides:

  1. Against /compliance-audit (A4) — DEPTH vs BREADTH. A4 goes deep on ONE framework's control catalog. C1.2b goes across ALL applicable frameworks at the obligation level. When an organization wants to know "are we GDPR-ready for audit?" — that is A4, run --framework gdpr. When an organization wants to know "we operate in EU, US, and Singapore simultaneously; is our AI governance posture defensible across all of them?" — that is C1.2b. C1.2b does NOT substitute for A4 when certification readiness is the question; it is the cross-framework posture view that precedes and complements single-framework deep dives. A comprehensive program runs C1.2b quarterly for posture and A4 before each certification.

  2. Against /ai-control-audit (C1.2a) — TOP-DOWN vs BOTTOM-UP. C1.2a reads the shared control-to-obligation mapping table bottom-up: "given these technical controls, which obligations do they satisfy?" C1.2b reads the same table top-down: "given these applicable obligations, which controls are required, and are they in place?" The two skills consume the SAME mapping table in opposite traversal directions. They are designed to be run together: C1.2a first to validate that controls exist and operate, then C1.2b to verify that the controls map to the full set of applicable regulatory obligations. Neither substitutes for the other. C1.2b's gap register typically references "per C1.2a output on {date}" as evidence for control status.

  3. Against /privacy-policy-audit (A3) — DISCLOSURE vs POSTURE. A3 audits a public-facing privacy policy document for disclosure sufficiency (does the policy tell users what the regulations require it to tell them?). C1.2b audits the underlying organizational governance posture (does the organization actually have AI governance in place, whether the disclosures reflect it or not?). A3 is a documentation-layer audit; C1.2b is a substance-layer audit. When a C1.2b finding is "disclosure gap under GDPR Art. 13-14," it points at A3 rather than re-doing A3's work.

  4. Against /risk-analysis (A5) — REGULATORY vs MULTI-DOMAIN. A5 produces a six-domain (legal, commercial, operational, regulatory, financial, reputational) risk landscape for an initiative or deal. C1.2b is downstream of A5's regulatory domain specifically, and narrows to the AI-frameworks subset of regulatory risk. When A5 surfaces "AI Act exposure on this product," C1.2b takes that as input and produces the multi-framework posture audit. Where A5 and C1.2b findings overlap, C1.2b frames the finding at the obligation level (which AI Act article, which NIST function, which IMDA risk category), not at the deal level.

When a user asks C1.2b to do something in the adjacent territory, the skill refuses and points at the correct sibling skill. Scope drift is the principal failure mode of a posture audit; the boundary is the gate.


When to Use

Invoke /ai-regulatory-audit when:

  • Quarterly regulatory posture review. Standard cadence for any organization operating an AI system in production across multiple jurisdictions. The skill is designed for quarterly rhythm, not per-release cadence, because regulatory obligations change at regulator tempo.
  • Entering a new jurisdiction. Organization is about to deploy an existing AI system into a new jurisdiction (EU for the first time, Singapore, California, Israel banking). C1.2b produces the multi-framework posture view across the expanded jurisdiction list.
  • New framework announcement. A framework that applies to the organization enters a new phase (EU AI Act high-risk obligations applying 2026-08-02; GPAI Code of Practice landing; California CPRA ADMT rules finalized; Illinois HB 3773 effective date; Colorado AI Act entering force). Re-run C1.2b against the updated framework applicability set.
  • Post-enforcement-action calibration. A regulator takes public enforcement action under a framework in scope (EU AI Office action, DPA fine, CPPA enforcement). Re-run C1.2b to check whether the pattern in the enforcement action intersects the organization's current posture.
  • M&A target AI governance assessment. Diligence on an acquisition target's AI governance posture across the jurisdictions the target operates in. Produces a multi-framework posture exhibit for the deal team.
  • Investor or board AI governance review. Organization needs a credible cross-framework statement of AI governance posture for a board pack, investor update, or annual governance review.
  • Pre-certification readiness complement. When the organization is about to pursue ISO 42001 certification or EU AI Act conformity assessment, C1.2b produces the multi-framework posture view as context; A4 (--framework iso42001 or --framework eu-ai-act) produces the framework-specific deep dive.

When NOT to Use

Do NOT use /ai-regulatory-audit when:

  • You need single-framework depth → use /compliance-audit (A4) with --framework {name}. If the question is "are we GDPR-ready for audit?" or "what's our SOC 2 gap list?" — that is A4's job. C1.2b is cross-framework and stays at the obligation level; it does not reproduce A4's control-level granularity.
  • You need a per-system technical control audit → use /ai-control-audit (C1.2a). C1.2a goes system-by-system through the 6-category technical taxonomy. C1.2b stays at the organizational posture level and treats C1.2a output as an input.
  • You need a privacy policy disclosure audit → use /privacy-policy-audit (A3). A3 audits the policy document; C1.2b audits the underlying governance posture.
  • You need a deal-level six-domain risk landscape → use /risk-analysis (A5). A5 covers legal, commercial, operational, regulatory, financial, and reputational risk; C1.2b is scoped to AI regulatory risk only.
  • You need a contract clause review → use /contract-review (A1).
  • You need a licensed legal opinion on enforcement exposure under any specific framework → engage outside counsel; the skill does not substitute, and its output explicitly disclaims the role.
  • You need a formal attestation or certification → engage the appropriate certification body (ISO 42001 registrar, EU AI Act notified body, QSA, CPA firm) — C1.2b is pre-work, not the attestation.
  • The organization operates in only one jurisdiction with only one applicable framework → A4 is the right tool. C1.2b's value is multi-framework synthesis; a single-framework case has no synthesis to do.

Required Inputs

The skill requires the following inputs before it can produce a defensible posture audit:

  1. --organization NAME — the organizational unit being audited. Can be a company, a subsidiary, a business unit, or a specific AI product line. Must match a real entity (no hypothetical audits).

  2. --jurisdiction LIST — comma-separated list of jurisdictions where the organization deploys AI systems OR where its AI systems touch users/data. Must be specific: "EU" (not "Europe"), "US-California" (not "US"), "Singapore", "Israel", "UK", etc. The jurisdiction list drives framework applicability determination in Step 1.

  3. --frameworks LIST — comma-separated list of frameworks to audit against. Can be "auto" (derive from the jurisdiction list using the applicability logic in Step 1) or an explicit list. Explicit override is allowed (e.g., "include NIST AI RMF as a voluntary benchmark even though not strictly triggered").

  4. --system NAME (optional) — if scoping to a specific AI system rather than the whole organization, name the system. When specified, the framework applicability check filters to frameworks triggered by that system's specific characteristics (agentic, GPAI, processes personal data, makes automated decisions, etc.).

  5. --prior PATH (optional) — path to a prior /ai-regulatory-audit output for the same organization. If supplied, the skill runs in delta mode: it re-derives applicability, re-checks the gap register, and annotates findings as new / unchanged / resolved / newly surfaced rather than producing a fresh audit from scratch.

  6. --mode adversarial (optional) — invokes Pattern 5 Adversarial Review from delegation-protocol.md. Reserved for high-stakes posture attestations (M&A, pre-IPO, enforcement response). Requires a named human tiebreaker BEFORE the review starts. See the "Adversarial Mode" section below.

If any required input is missing or ambiguous, the skill MUST ask the user before producing any output. No default jurisdictions. No default frameworks. No default organization. Posture audits with assumed inputs create the exact liability framing this skill is designed to avoid.


Method (8 Steps)

The method is the same regardless of which frameworks are triggered; only the obligation content and the applicability output change. The control-to-obligation mapping table in compliance-frameworks Section 3 is the primary reference throughout; the current-status.md sidecar is read at Step 2 for all date-sensitive claims.

Step 1 — Load applicable frameworks from input + cross-reference jurisdiction matrix

Read the --jurisdiction LIST and --frameworks LIST arguments. Cross-reference against compliance-frameworks.md Section 4 (Framework Selection Rubric) to validate the framework list:

  • For every jurisdiction in the list, the rules in the Framework Applicability Logic section below determine which frameworks are triggered.
  • For every framework in --frameworks LIST that is not triggered by any jurisdiction, ask the user whether it is a voluntary inclusion (OK, annotate as voluntary) or a mistake (drop it).
  • For every framework triggered by jurisdiction but not in --frameworks LIST, warn the user that a triggered framework is missing and ask for explicit confirmation that it is deliberately out of scope.

Output: the reconciled framework list with, for each entry, the applicability reason ("triggered by {jurisdiction} + {system-characteristic}" or "voluntary inclusion at user request").
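The reconciliation above can be sketched in a few lines. This is a minimal illustration only; `derive_triggered`, `reconcile_frameworks`, and the reason strings are hypothetical names, not part of the skill.

```python
# Sketch of the Step 1 reconciliation: triggered-vs-requested frameworks,
# with anything ambiguous routed back to the user as a question.

def reconcile_frameworks(jurisdictions, requested, derive_triggered):
    """Return (reconciled, questions). Each reconciled entry carries an
    applicability reason; each question must be answered before proceeding."""
    triggered = {}  # framework -> first jurisdiction that triggered it
    for j in jurisdictions:
        for fw in derive_triggered(j):
            triggered.setdefault(fw, j)

    reconciled, questions = [], []
    for fw in requested:
        if fw in triggered:
            reconciled.append((fw, f"triggered by {triggered[fw]}"))
        else:
            # Not triggered by any jurisdiction: voluntary inclusion or mistake?
            questions.append(f"'{fw}' not triggered by any jurisdiction - voluntary or drop?")
    for fw, j in triggered.items():
        if fw not in requested:
            # Triggered but missing from the request: needs explicit confirmation.
            questions.append(f"'{fw}' triggered by {j} but absent from --frameworks - deliberately out of scope?")
    return reconciled, questions
```

Note that the sketch never silently drops or silently adds a framework; every mismatch becomes a question, matching quality gate 1.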

Step 2 — Load current-status.md sidecar; check freshness

Read Extension Teams/reference/knowledge/compliance-frameworks/current-status.md. Confirm:

  • last_verified date is within the 90-day freshness window (per the sidecar section "When downstream skills cite this sidecar").
  • Every framework in the reconciled list has a current-status entry.
  • Entries marked [CHECK ...] or [VERIFY ...] are flagged in the skill output as "sidecar flagged for verification."

If last_verified is older than 90 days, the skill output MUST include a staleness warning at the top of the Applicability section and recommend that the Compliance Officer re-run the quarterly re-verification procedure before relying on the audit.

The skill reads framework version numbers, effective dates, and enforcement-status claims from this sidecar at runtime. These values are NOT hardcoded into SKILL.md. This is the first real consumer of the sidecar pattern.
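The freshness and flag checks can be sketched as follows. The sidecar shape assumed here (a `last_verified` date plus a framework-to-status-string map) is an illustration; the real sidecar format is defined by the knowledge pack, not by this sketch.

```python
# Sketch of the Step 2 sidecar checks: 90-day staleness, missing entries,
# and [CHECK ...] / [VERIFY ...] verification flags.
from datetime import date

FRESHNESS_WINDOW_DAYS = 90

def check_sidecar(last_verified, entries, frameworks, today):
    """Return the list of warnings that must appear in the audit output."""
    warnings = []
    if (today - last_verified).days > FRESHNESS_WINDOW_DAYS:
        warnings.append("STALE: last_verified older than 90 days - re-run quarterly re-verification")
    for fw in frameworks:
        entry = entries.get(fw)
        if entry is None:
            warnings.append(f"{fw}: no current-status entry")
        elif "[CHECK" in entry or "[VERIFY" in entry:
            warnings.append(f"{fw}: sidecar flagged for verification")
    return warnings
```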

Step 3 — Derive in-scope obligations per framework

For each framework in the reconciled list, determine which obligations are in scope given the organization's input profile. Not every obligation in every framework is relevant — filter by:

  • System characteristics: Is the system GPAI? Agentic? Does it make automated decisions about individuals? Does it process biometric data? Does it process personal data at all? Does it deploy in a high-risk domain per EU AI Act Annex III?
  • Deployment context: Is the system deployed in production? In pilot? In a sandbox? Does it touch consumer users, business users, or internal users?
  • Jurisdiction-specific triggers: GDPR applies only if personal data processing touches EU residents. CCPA applies only if California consumer data is in scope. Singapore IMDA applies only if Singapore deployment.

Output: per framework, the filtered list of in-scope obligations with citation to the framework's stable structural elements (article numbers, control categories, function names). Use the pack's citation format — no reproduction of framework text, only pointer references.
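The filter above reduces to a subset test once obligations are tagged with the characteristics that trigger them. The tag names below (`gpai`, `automated_decisions`, etc.) are illustrative; the skill's actual trigger taxonomy lives in the knowledge pack.

```python
# Sketch of the Step 3 filter: an obligation is in scope when the system
# profile contains every characteristic tag the obligation requires.

def in_scope_obligations(obligations, profile):
    """obligations: list of (citation, required_tags) pairs;
    profile: set of system-characteristic tags."""
    return [cite for cite, tags in obligations if tags <= profile]
```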

Step 4 — Map obligations to controls via the shared mapping table

For each in-scope obligation, look up the control-to-obligation mapping table in compliance-frameworks.md Section 3 (25 rows, 6 control categories × 7 frameworks as of v1.1.0).

The lookup direction is top-down: obligation → required controls. This is the opposite direction from /ai-control-audit (C1.2a), which reads the same table bottom-up (control → obligations it satisfies).

For each obligation:

  • Identify every row in the mapping table where the obligation appears.
  • Extract the control category, the specific control, and the evidence type required.
  • If an in-scope obligation has NO corresponding row in the mapping table, flag it as "unmapped obligation" — this is a gap in the mapping table itself and must be surfaced to the Compliance Officer for pack extension (per pack Section 3 "This table does NOT claim exhaustive coverage").

Output: a working list of (obligation, required control, evidence type) tuples per framework.
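The top-down traversal can be sketched as a filter over the table rows. The row shape used here (category, control, obligation, evidence type) is an assumption for illustration; the authoritative table is in compliance-frameworks.md Section 3 and is not reproduced.

```python
# Sketch of the Step 4 lookup: obligation -> required controls, with
# unmapped obligations surfaced as gaps in the mapping table itself.

def controls_for_obligations(obligations, mapping_rows):
    """mapping_rows: (control_category, control, obligation, evidence_type).
    Returns (tuples, unmapped): the working list plus table gaps."""
    tuples, unmapped = [], []
    for ob in obligations:
        rows = [r for r in mapping_rows if r[2] == ob]
        if not rows:
            # Gap in the table itself: route to the Compliance Officer.
            unmapped.append(ob)
        for category, control, _, evidence in rows:
            tuples.append((ob, category, control, evidence))
    return tuples, unmapped
```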

Step 5 — Check control presence

For each required control in the working list, check whether the organization actually has that control in place. Source of evidence, in order of preference:

  1. A fresh /ai-control-audit (C1.2a) output — the authoritative technical control status from the bottom-up audit. If a recent (within 90 days) C1.2a audit exists for the system, use it as the primary control evidence source.
  2. Organization-level documentation — AI governance policy, DPIAs on file, incident response runbooks, model documentation, audit logs. Point to the documents by path.
  3. Direct attestation by the Compliance Officer — when document evidence is unavailable, the Compliance Officer's on-the-record attestation is acceptable for the audit's purposes (the downstream reviewer will verify).

For each control, the status is one of:

  • Present and operating — evidence exists and the control is validated by C1.2a or direct review.
  • Present but unvalidated — evidence exists but has not been validated by C1.2a or direct review. Treat as partial until validated.
  • Partial — the control exists in some form but does not fully satisfy the obligation (e.g., incident response exists but does not address the 15-day AI Act Article 73 SLA).
  • Absent — no evidence of the control exists.

Step 6 — Build gap register

Compile the gap register: every in-scope obligation where the required controls are absent or partial or present but unvalidated. Rank each finding by:

  • Regulatory enforcement risk — how likely is a regulator to act on this, given the jurisdiction, the framework's enforcement history, and the system's public exposure? This is the dominant ranking factor, not gap size.
  • Gap size — is the control entirely absent, partial, or merely unvalidated?
  • Framework concurrency — is this gap one that multiple applicable frameworks flag? A gap surfaced by three frameworks is more material than one surfaced by one.

Output: the gap register as a numbered list, sorted by regulatory enforcement risk × gap size, with framework citations per finding.

Severity thresholds:

  • P0 (Blocker) — gap creates material regulatory exposure; remediation required before the next audit cycle. Examples: EU AI Act high-risk system obligation absent after 2026-08-02; GDPR Article 22 automated decision-making with no human-in-the-loop; Article 73 incident response with no 15-day SLA.
  • P1 (Important) — gap creates meaningful exposure; remediation required in the current remediation cycle. Examples: partial DPIA coverage; incident response exists but has no AI-specific playbook; bias assessment exists but is documentation-only.
  • P2 (Nice-to-have) — gap is scaffolding-level; remediation improves posture but is not materially exposing. Examples: NIST AI RMF MEASURE documentation exists but is not continuously updated; ISO 42001 SoA exists but has not been reviewed in 12 months.
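The ranking rule above (enforcement risk dominant, gap size as tiebreaker) can be sketched as a two-key sort. The numeric weights and field names are illustrative, not values defined by the skill.

```python
# Sketch of the Step 6 sort: enforcement risk first, gap size second,
# both descending. Gap size alone never outranks enforcement risk.
GAP_SIZE = {"absent": 3, "partial": 2, "unvalidated": 1}

def sort_gap_register(findings):
    """findings: dicts with 'enforcement_risk' (0-10) and 'gap' keys."""
    return sorted(
        findings,
        key=lambda f: (f["enforcement_risk"], GAP_SIZE[f["gap"]]),
        reverse=True,
    )
```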

Step 7 — Build overlap register (efficiency view)

For each control in the working list, check how many applicable frameworks have obligations that it satisfies. Controls that satisfy obligations under multiple frameworks simultaneously are overlap controls — high-efficiency investments because one control implementation reduces gap exposure across multiple frameworks.

Examples:

  • Decision logging (Control #14 in the mapping) satisfies EU AI Act Article 12 + GDPR Article 30 + NIST AI RMF MEASURE + (indirectly) Singapore Agentic AI transparency category. One control, 4 obligations.
  • Human-in-the-loop for consequential actions (Control #11) satisfies EU AI Act Article 14 + GDPR Article 22 + Singapore Agentic AI intended-purpose-misalignment category. One control, 3 obligations.

Output: the overlap register as a numbered list, sorted by (count of obligations satisfied, descending). Controls that satisfy ≥3 obligations are flagged as high-leverage remediation priorities.
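The overlap count is one grouping pass over the (obligation, control) pairs produced at Step 4. This is a sketch; the ≥3 high-leverage threshold is the only value taken from the text above.

```python
# Sketch of the Step 7 overlap register: group obligations by control,
# sort by count descending, flag controls satisfying >= 3 obligations.
from collections import defaultdict

def overlap_register(pairs):
    """pairs: (obligation, control). Returns [(control, obligations, high_leverage)]."""
    by_control = defaultdict(set)
    for obligation, control in pairs:
        by_control[control].add(obligation)
    ranked = sorted(by_control.items(), key=lambda kv: len(kv[1]), reverse=True)
    return [(c, sorted(obs), len(obs) >= 3) for c, obs in ranked]
```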

Step 8 — Build mitigation plan ordered by enforcement risk

Synthesize Steps 6 and 7 into a prioritized mitigation plan:

  • P0 remediations come first, ordered by regulatory enforcement risk (not gap size, not framework count).
  • High-leverage controls from the overlap register are next — even if individual findings are P1, the leverage from a single implementation across multiple obligations makes them high-priority.
  • P1 remediations follow.
  • P2 remediations last.

For each mitigation item, frame effort using relative complexity language only ("small: documentation update", "medium: process change", "large: technical remediation requiring engineering investment") — NEVER fabricated timeline or cost estimates, per .claude/rules/no-estimates.md.
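The four-tier ordering above can be sketched as a simple composition; high-leverage items jump the queue between P0 and P1. Field names and the list shapes are illustrative assumptions.

```python
# Sketch of the Step 8 ordering: P0 by enforcement risk, then high-leverage
# overlap controls (even if individually P1/P2), then P1, then P2.

def mitigation_plan(findings, high_leverage_controls):
    p0 = sorted((f for f in findings if f["severity"] == "P0"),
                key=lambda f: f["enforcement_risk"], reverse=True)
    # One implementation closing multiple obligations outranks its severity tag.
    leverage = [f for f in findings if f["severity"] != "P0"
                and f.get("control") in high_leverage_controls]
    p1 = [f for f in findings if f["severity"] == "P1" and f not in leverage]
    p2 = [f for f in findings if f["severity"] == "P2" and f not in leverage]
    return p0 + leverage + p1 + p2
```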


Framework Applicability Logic

The applicability rules below are consumed at Step 1. They mirror compliance-frameworks.md Section 4 (Framework Selection Rubric) and are cited here for skill self-sufficiency.

| Framework | Triggering signal |
|---|---|
| EU AI Act (Regulation (EU) 2024/1689) | EU deployment OR EU users (via Art. 3 territorial scope) OR output used in the EU. Filter within the EU AI Act to the risk tier: prohibited (Art. 5), high-risk (Art. 6 + Annex I/III), limited-risk transparency (Art. 52), or minimal-risk. |
| GDPR (Regulation (EU) 2016/679) | Personal data processing touching EU residents (Art. 3 territorial scope). Within GDPR, filter to the articles in scope — especially Articles 5, 6, 9, 22, 25, 30, 32, 33-34, 35. Article 22 (automated individual decision-making) is the primary AI-specific article. |
| Singapore Agentic AI Governance Framework (IMDA) | Singapore deployment OR Singapore users. Voluntary but strongly expected for Singapore-deployed AI systems. Filter to the 7 risk categories (intended purpose misalignment, autonomy creep, deception, privacy violation, security compromise, discrimination, transparency failure). |
| Singapore Model AI Governance Framework (IMDA) | Singapore deployment for any AI system (not just agentic). Voluntary. |
| NIST AI Risk Management Framework | US federal agency context (directed) OR US federal contractor pathway OR voluntary US-market alignment. Filter to the four core functions (GOVERN, MAP, MEASURE, MANAGE) and, for generative AI, the Generative AI Profile. |
| ISO/IEC 42001 | Voluntary certification pursuit OR customer-required evidence of AI management system maturity. Applicability is business-driven, not jurisdictional. |
| ISO/IEC 27001 | Voluntary certification pursuit for underlying information security controls. Applicability is business-driven. Relevant to AI posture via Annex A.5 and A.8 controls that satisfy AI control obligations. |
| CCPA / CPRA (California) | California consumer data processing. ADMT (automated decision-making technology) rules apply to AI systems making automated decisions about California consumers once finalized — see current-status.md for current ADMT rulemaking status. |
| HIPAA | US healthcare data (PHI) processing under a covered entity or business associate relationship. Healthcare sector only. |
| Illinois HB 3773 / Colorado AI Act / NYC Local Law 144 | Jurisdiction-specific employment context (hiring, promotion, termination decisions). Pointer to sector-specific hr-ai-governance analysis — this skill flags the applicability but does not deep-dive; it refers to HR-specific counsel. |
| Israeli Protection of Privacy Law (PPL) 5741-1981 | Personal data processing touching Israeli residents. The 2025 amendment expanded obligations; check current-status.md for phased effective dates. |
| Bank of Israel Directive 361 | Israeli banking sector. Sector-specific, banking only. Pointer to /compliance-audit --framework israeli-ppl-bank-discount for the deep dive. |
| OECD AI Principles | Intergovernmental alignment signal. Non-binding. Applicability is always "voluntary alignment layer" unless the organization is explicitly pursuing OECD signaling. |
| SOC 2 (AICPA TSC) | Voluntary customer-facing trust signal for B2B SaaS (US market). Applicability is business-driven. Relevant to AI posture via CC6/CC7/CC8 controls that satisfy AI control obligations. |

Special cases:

  • Multi-jurisdictional default baseline: If the organization operates in EU + US + multiple non-EU, non-US jurisdictions and --frameworks LIST is "auto", the default baseline is GDPR + EU AI Act + NIST AI RMF + ISO 42001 (per pack Section 4.1, last row). The skill will flag additional jurisdiction-specific frameworks as they apply but will not default to including them.
  • Voluntary-override: Users can force inclusion of voluntary frameworks (NIST, ISO 42001, OECD) even when not triggered. The skill annotates these as "voluntary — user override."
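The multi-jurisdictional default can be sketched as a predicate over the jurisdiction list. This is an illustration of the special-case rule only; the jurisdiction-string conventions ("EU", "US-*") are assumptions carried over from the Required Inputs section.

```python
# Sketch of the "auto" baseline rule: EU + US + at least one other
# jurisdiction yields the pack Section 4.1 default baseline.

DEFAULT_BASELINE = ["GDPR", "EU AI Act", "NIST AI RMF", "ISO 42001"]

def auto_baseline(jurisdictions):
    js = set(jurisdictions)
    us = {j for j in js if j == "US" or j.startswith("US-")}
    has_other = bool(js - {"EU"} - us)
    if "EU" in js and us and has_other:
        return list(DEFAULT_BASELINE)
    return None  # fall through to the per-jurisdiction trigger rules
```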

Output Structure

The skill produces a single markdown output file. Structure:

# AI Regulatory Posture Audit — {Organization}

## Disclaimer and Jurisdiction Assumed
{Standard block from Section 3.1 of sensitive-skill-guardrails, with Jurisdiction Assumed declared as the list from input}

## Applicability Determination
- Jurisdictions in scope: {list}
- Frameworks in scope: {list with triggering signal per framework}
- Frameworks considered but out of scope: {list with reason}
- Sidecar freshness: {current-status.md last_verified date; staleness flag if >90 days}

## Per-Framework Obligation Scope
### {Framework 1}
- In-scope obligations: {article numbers + bucket names}
- Out-of-scope obligations and why: {short reason}
- Current-status notes: {version number, effective date, any [CHECK] flags from sidecar}
### {Framework 2}
...

## Control-to-Obligation Coverage Map
| # | Obligation | Framework | Required Controls | Status (Present / Partial / Absent / Unvalidated) | Evidence Source |
|---|---|---|---|---|---|
...

## Gap Register
{Numbered findings, sorted by regulatory enforcement risk × gap size}

### Finding 1
**What**: {specific obligation with no satisfying control}
**Framework**: {framework citation with article/section}
**Missing control**: {control category + specific control}
**Severity**: P0 / P1 / P2
**Evidence**: {what the reviewer would need to close this}
**Suggested next step**: {remediation pointer}

## Overlap Register (Efficiency View)
{Numbered controls satisfying ≥2 obligations, sorted by obligation count descending}

## Mitigation Plan
### P0 (Blockers)
1. {Remediation item with relative-complexity framing and leverage note if applicable}
### P1 (Important)
...
### P2 (Nice-to-have)
...

## Posture Assessment
**As of {date}, {Organization} demonstrates [STRONG / PARTIAL / WEAK] regulatory posture across {N} applicable frameworks with {M} material gaps.**

Rationale: {one paragraph — framing of the strongest areas and the principal exposure}

## Next Audit Date
- Quarterly cadence: {date}
- Earlier trigger if: {list of re-verification triggers from pack Section 7.2 that would require re-running before the quarterly date}

## Findings (full list)
{All findings in order, numbered across frameworks}

## Reviewer Checklist
- [ ] Jurisdiction list confirmed against actual operations
- [ ] Framework list confirmed (no triggered framework silently dropped)
- [ ] `current-status.md` sidecar checked for freshness and [CHECK]/[VERIFY] flags
- [ ] Control-to-obligation mapping consumed from compliance-frameworks v1.1.0 Section 3
- [ ] C1.2a `/ai-control-audit` output consulted where available
- [ ] All P0 gaps addressed or explicitly accepted-with-risk by named tiebreaker
- [ ] Overlap register reviewed for high-leverage remediations
- [ ] Posture assessment sentence reviewed by Compliance Officer
- [ ] Licensed counsel engaged for every framework where enforcement risk is material

## Cannot Assess Without
- Licensed counsel review in each jurisdiction (regulatory posture opinions require counsel)
- Current architecture documentation for the system in scope
- System deployment topology (which jurisdictions the system actually touches at the network/data-flow level)
- C1.2a `/ai-control-audit` output for validated technical control evidence (when available)
- Organization-level governance documentation (AI policy, incident response runbooks, prior DPIAs)
- Prior regulator enforcement actions (if any) — they change the risk calculus materially
- Counterparty contractual obligations (vendor AI clauses, customer AI clauses) — these modify the applicability set
- Sector-specific regulatory overlays (healthcare HIPAA, financial services, children's data) not covered by the horizontal frameworks

## Related Skills and Handoff
- **Input**: `compliance-frameworks` knowledge pack v1.1.0 + `current-status.md` sidecar
- **Complement**: `/ai-control-audit` (C1.2a) — bottom-up control view (run first, then C1.2b consumes its output)
- **Adjacent**: `/compliance-audit` (A4, `--framework {name}`) for single-framework deep dive
- **Cross-reference**: `/privacy-policy-audit` (A3) for disclosure layer; `/risk-analysis` (A5) for six-domain upstream
- **Hand-off**: `@general-counsel` for any material gap; `@privacy-counsel` for GDPR Art. 22 / CCPA ADMT nuance; `@ai-architect` for control-evidence interpretation
- **Co-authored artifact**: the control-to-obligation mapping table in `compliance-frameworks.md` Section 3 — co-consumed with C1.2a, changes flow through the Compliance Officer

Quality Gates (10 Checks)

Before the skill declares an audit complete, each of the following MUST pass. Any failure blocks publication until resolved.

  1. Applicability determined for every input framework. Every framework in --frameworks LIST has an applicability reason (triggered / voluntary override / out of scope with justification). No framework is silently dropped or silently added.
  2. current-status.md sidecar read at runtime. No date-sensitive claim in the output is hardcoded; every version number, effective date, and status claim traces to a sidecar entry. Staleness >90 days is flagged at the top of the output.
  3. Control-to-obligation mapping consumed from compliance-frameworks v1.1.0 Section 3. No inline re-creation of the mapping. The skill cites the pack version explicitly.
  4. Gap register sorted by regulatory enforcement risk × gap size. Not by gap size alone, not alphabetical. The enforcement risk factor is the dominant sort key.
  5. Overlap register populated. Every framework pair has been checked for shared controls. High-leverage controls (≥3 obligations satisfied) are flagged.
  6. Mitigation plan has P0/P1/P2 priorities. Every mitigation item has a severity tag. P0 items are ordered by enforcement risk. High-leverage controls are called out even when individual findings are P1.
  7. Posture assessment is a single sentence. The one-sentence summary ("STRONG / PARTIAL / WEAK across {N} frameworks with {M} gaps") is present and supported by a one-paragraph rationale.
  8. Next audit date is named. Either the quarterly date or an earlier trigger date if a known trigger is imminent (e.g., EU AI Act high-risk obligations applying 2026-08-02).
  9. No invented enforcement positions. The skill cites frameworks by article number and obligation bucket name only. It does NOT assert "regulators typically interpret X as Y" without a cited source. It does NOT invent counterparty or regulator behavior.
  10. No vague language. Words like "comprehensive," "robust," "industry-leading," "best-in-class," "adequate," "sufficient" are prohibited in the output unless quoted from a framework's own text. The skill uses specific language: "present and operating per C1.2a output dated {date}" or "partial — incident response exists but no AI Act Article 73 15-day SLA."
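
Gate 4's ordering rule amounts to a composite sort key in which enforcement risk dominates and gap size only breaks ties. A minimal sketch of that ordering, with illustrative field names and numeric scales (the skill's actual gap-register schema is not shown here):

```python
from dataclasses import dataclass

@dataclass
class Gap:
    framework: str
    obligation: str
    enforcement_risk: int  # 3 = high, 2 = medium, 1 = low (illustrative scale)
    gap_size: int          # 3 = large, 2 = medium, 1 = small

def sort_gap_register(gaps: list[Gap]) -> list[Gap]:
    # Gate 4: enforcement risk is the dominant sort key; gap size
    # only breaks ties. Both descend so the riskiest gaps lead.
    return sorted(gaps, key=lambda g: (g.enforcement_risk, g.gap_size),
                  reverse=True)

register = [
    Gap("CCPA", "ADMT opt-out mechanism", enforcement_risk=2, gap_size=3),
    Gap("EU AI Act", "Art. 73 incident reporting", enforcement_risk=3, gap_size=2),
    Gap("EU AI Act", "Art. 13 transparency", enforcement_risk=3, gap_size=3),
]
for g in sort_gap_register(register):
    print(f"{g.framework}: {g.obligation}")
```

Note that a pure gap-size sort would promote the CCPA row above the Art. 73 row; the tuple key keeps every high-enforcement-risk gap ahead of it regardless of size.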

## Adversarial Mode

`--mode adversarial` invokes Pattern 5 Adversarial Review from `delegation-protocol.md`. Reserved for high-stakes posture attestations where the cost of a missed gap is material. Use only when ALL of the following are true:

  1. The audit supports a board pack, investor update, M&A diligence exhibit, pre-IPO filing, or enforcement response.
  2. A named human tiebreaker is designated BEFORE the review starts. Default tiebreakers: `@general-counsel` (baseline) + `@privacy-counsel` (if GDPR/CCPA in scope) + `@ip-counsel` (if GPAI model training data is in scope).
  3. The audit is near-final, not still evolving in scope.
  4. Two iterations are feasible (adversarial review caps at two iterations per `delegation-protocol.md`).

The adversarial agent operates in fresh context per `delegation-protocol.md` Section "Role Separation (CRITICAL)": it does not see the drafter's prior-turn rationale and does not see earlier adversarial iterations. Scope is stress-testing clauses, assumptions, and structural choices. The adversarial agent MAY NOT invent regulator behavior or hallucinate enforcement positions — any such finding is marked "reject-as-hypothetical" per the pattern's scope boundary.

When NOT to use adversarial mode: routine quarterly reviews, pre-deployment checks where a single-pass Pattern 3 Review is sufficient, audits where two iterations are infeasible, audits without a named tiebreaker.
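
The four preconditions above are a strict conjunction: adversarial mode is available only when every one holds, otherwise the default single-pass review applies. A minimal sketch of that decision gate, with hypothetical field names:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AuditContext:
    high_stakes: bool             # board pack, M&A exhibit, pre-IPO filing, ...
    tiebreaker: Optional[str]     # must be named BEFORE the review starts
    near_final: bool              # scope no longer evolving
    two_iterations_feasible: bool # Pattern 5 caps at two iterations

def adversarial_mode_eligible(ctx: AuditContext) -> bool:
    # ALL four conditions must hold; any failure falls back to a
    # single-pass Pattern 3 Review instead.
    return all([
        ctx.high_stakes,
        ctx.tiebreaker is not None,
        ctx.near_final,
        ctx.two_iterations_feasible,
    ])

print(adversarial_mode_eligible(AuditContext(
    high_stakes=True, tiebreaker="@general-counsel",
    near_final=True, two_iterations_feasible=True)))  # → True
```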


## Delegation Pattern

Default: Pattern 1 Consultation per `delegation-protocol.md`.

The Compliance Officer owns the posture audit end-to-end but consults specialists for domain-specific calibration:

- `@ai-architect` — for interpretation of technical control evidence (what does "input validation and prompt injection defense" look like in practice? is the organization's current design sufficient to claim the control is "present and operating"?). Read-only: the Compliance Officer retains authorship.
- `@general-counsel` — for enforcement risk calibration (is the regulator active in this jurisdiction? is a specific article likely to be enforced on this kind of system? what is the likely magnitude of exposure?). Read-only.
- `@privacy-counsel` — for GDPR Article 22 / CCPA ADMT nuance (is the system "solely automated"? does an exception apply? is consent viable as a legal basis?). Read-only.
- `@ip-counsel` — for GPAI training-data obligations (EU AI Act Article 53 training data summary, copyright policy, Art. 55 systemic risk). Read-only.

When consulting a specialist, the Compliance Officer spawns the sub-agent via the Task tool using the standard identity protocol per `agent-spawn-protocol.md` Section 2, integrates the specialist's contribution into the audit, and attributes it: "Consulted @ai-architect who noted..."

Alternative patterns:

- Pattern 3 Review is appropriate when a completed audit is near-final and needs a single quality-validation pass before the Compliance Officer signs off. Reviewer defaults: `@general-counsel` + `@privacy-counsel`.
- Pattern 5 Adversarial Review is reserved for `--mode adversarial` per the Adversarial Mode section above.

## Sensitive Skill Protocol Compliance

This skill is marked `sensitive: true` in frontmatter. Per `.claude/rules/sensitive-skill-guardrails.md`:

- Disclaimer block is mandatory at the top of every output (see top of this SKILL.md).
- Jurisdiction Assumed is declared in every audit output's Applicability section.
- Findings / Reviewer Checklist / Cannot Assess Without are mandatory structural sections per Section 3 of the guardrails rule.
- Two-pass publication gate applies: Pass 1 Scaffolding Check by Director of Legal Affairs; Pass 2 Substantive Check by General Counsel. Both are required before this skill goes to v1.0.0 production use.
- ROI framing uses "drafting and triage" language — NEVER "time saved on legal review" or "time saved on compliance review" per `roi-display.md` Section "Sensitive Skill ROI Framing."

The skill's scaffolding is load-bearing: the review gate is the structure, not a disclaimer at the top of an otherwise confident-looking document.


## ROI Framing

Time saved is framed as drafting and triage of a cross-framework regulatory posture audit. The skill does NOT claim to save legal review time — the whole point of the two-pass publication gate is that a licensed human reviews every finding before action.

Typical manual drafting and triage baseline for a multi-framework posture audit:

- 3-5 applicable frameworks, single AI system, single organization: ~20-30 hours of Compliance Officer drafting and triage work (framework applicability review, obligation scoping, control-to-obligation mapping, gap register assembly, overlap analysis, mitigation plan drafting)
- Post-handoff reviewer time (counsel review, tiebreaker decisions, remediation planning) is NOT included in the ROI framing — that work remains with the human.

Blended professional rate for ROI calculation: $350/hr (legal blended, per `memory/feedback_roi_rates.md`).
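
The baseline above reduces to simple arithmetic; the figures come directly from the text:

```python
# Drafting-and-triage ROI range for one multi-framework posture audit.
hours_low, hours_high = 20, 30  # Compliance Officer drafting/triage hours
blended_rate = 350              # USD/hr, legal blended rate

print(f"${hours_low * blended_rate:,} to ${hours_high * blended_rate:,}")
# → $7,000 to $10,500
```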


## Related Skills Map

| Skill | Relationship |
| --- | --- |
| `/compliance-audit` (A4) | Single-framework depth; sibling. Run A4 before certification; run C1.2b for quarterly posture. |
| `/ai-control-audit` (C1.2a) | Technical control taxonomy; bottom-up sibling. Run C1.2a first, then C1.2b consumes its output. |
| `/privacy-policy-audit` (A3) | Disclosure layer; adjacent. C1.2b points at A3 when a disclosure gap is found. |
| `/risk-analysis` (A5) | Upstream six-domain risk landscape. C1.2b takes A5's regulatory exposure as input and maps it to frameworks. |
| `/contract-review` (A1) | Clause-by-clause; adjacent. C1.2b points at A1 when a contractual AI clause gap is found. |
| `/nda-triage` (A2) | NDA-specific; not in C1.2b scope. |
| `/contract-stress-test` (A7) | Adversarial contract review; not in C1.2b scope. |

## Knowledge Pack Dependency

- Primary pack: `compliance-frameworks` v1.1.0 at `Extension Teams/reference/knowledge/compliance-frameworks.md`
- Sidecar: `compliance-frameworks/current-status.md` (read at runtime, Step 2)
- Sources manifest: `compliance-frameworks/sources.md` (canonical URLs for mechanical re-verification)
- Mapping table consumed: Section 3 of the pack — 25 rows, 6 control categories, 7 frameworks
- Pattern consumed: Shared Skill Orchestrator per `.claude/rules/delegate-first.md` — the mapping table is co-consumed with `/ai-control-audit`, and changes flow through the Compliance Officer owner
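
Because the sidecar is read at runtime, Quality Gate 2's >90-day staleness flag is just a date comparison against the sidecar's last-verified date. A sketch under assumed names (the real `current-status.md` format is defined by the knowledge pack, not shown here):

```python
from datetime import date

STALENESS_DAYS = 90  # threshold per Quality Gate 2

def staleness_flag(last_verified: date, today: date) -> "str | None":
    # Returns a warning string to place at the top of the audit
    # output, or None when the sidecar is fresh enough.
    age = (today - last_verified).days
    if age > STALENESS_DAYS:
        return (f"STALE: current-status.md last verified {age} days ago "
                f"(threshold {STALENESS_DAYS} days)")
    return None

# A sidecar last verified 2026-01-01 is 100 days old on 2026-04-11,
# so the audit run on that date would carry the staleness banner.
print(staleness_flag(date(2026, 1, 1), date(2026, 4, 11)))
```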

## Frontmatter Consumers Rationale

| Consumer | Why listed |
| --- | --- |
| `ext-legal` | Primary consumer. The Legal Team gateway routes regulatory-posture questions here. |
| `ext-architecture` | Secondary consumer. `@ai-architect` is the control-evidence interpretation consultation resource. |
| `compliance-audit` | Peer sibling. A4 invokes C1.2b as the multi-framework posture view when A4's single-framework output needs cross-framework context. |
| `ai-control-audit` | Peer sibling. C1.2a invokes C1.2b when its bottom-up control view needs to be mapped to obligation-level regulatory posture. |

Any additional consumer gateway MUST be added by the Compliance Officer owner per the `delegate-first.md` Shared Skill Orchestrator pattern before invocation is supported.


## Version History

- v1.0.0 (2026-04-11) — Initial authoring. Phase 5A C1.2b. First real consumer of the `current-status.md` sidecar pattern. Consumes the `compliance-frameworks` v1.1.0 Section 3 mapping table top-down. Four-sided boundary against `/compliance-audit`, `/ai-control-audit`, `/privacy-policy-audit`, and `/risk-analysis`. Birth test: Legionis organizational regulatory posture.

## Authoring Discipline

- First-principles authoring: every section drafted from public regulator sources and the `compliance-frameworks` knowledge pack. No content lifted from A4 `/compliance-audit` or any other template. Structural parallels with A4 are intentional (boundary statement, sensitive-skill scaffolding) because both skills live under the same guardrails rule; the content is distinct.
- No law firm citations: per `compliance-frameworks.md` Section 1 authoring discipline, only regulators, standards bodies, and public-interest institutions are cited.
- No fabricated numbers: per `.claude/rules/no-estimates.md`, remediation effort is framed as relative complexity (small / medium / large), never as a timeline or cost.
- No interpretation drift: the skill identifies gaps and maps obligations to controls. It does NOT opine on whether a specific control satisfies a specific regulator's interpretation of an obligation. That opinion is counsel's to render.