Product-org-os resume-summarizer
Descriptive structured extraction for batches of resumes with proxy redaction, AEDT non-classification wall, HITL gate, and annual deployer re-attestation. Drafting and triage aid, not HR or employment-law advice.
git clone https://github.com/yohayetsion/product-org-os
T=$(mktemp -d) && git clone --depth=1 https://github.com/yohayetsion/product-org-os "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills-mirror/resume-summarizer" ~/.claude/skills/yohayetsion-product-org-os-resume-summarizer && rm -rf "$T"
skills-mirror/resume-summarizer/SKILL.md
⚠️ Not HR or employment-law advice. This output is a drafting and triage aid generated by a product-organization skill, not HR counsel or employment counsel. No attorney-client relationship is created by its production or use. Jurisdiction-specific questions on automated-employment-decision-tool classification, protected-class signaling, candidate disclosure, Ban-the-Box application, caregiver status, GINA exposure, or adverse impact require review by a licensed employment attorney in the relevant jurisdiction and by a qualified HR professional. Do not rely on this output as the sole basis for any hiring, screening, ranking, scoring, or employment decision.
Jurisdiction Assumed: {jurisdiction from required input — e.g., US-NYC, US-IL, US-CO, US-CA, EU, UK, IL}. If your jurisdiction differs, treat every finding below as a hypothesis to verify with local counsel.
AEDT Non-Classification Memo (embedded, visible, load-bearing)
This skill is deliberately not classified as an Automated Employment Decision Tool under NYC Local Law 144 (NYC Admin Code §20-870 to §20-874). The full legal argument lives in
hr-ai-governance Section 6. The operative bright lines are reproduced here verbatim because they are load-bearing: if any one of them is violated in any released version, the classification flips and the skill is pulled from NYC-jurisdiction deployment immediately pending a third-party bias audit under Section 20-871.
The eight bright lines (hr-ai-governance Section 6.2, verbatim):
| # | Bright line | What it means |
|---|---|---|
| 1 | No numeric scores | The output contains no integer, float, percentage, star rating, or any value on a numeric scale — whether labeled "score," "fit," "match," "rating," or anything else |
| 2 | No "top skills" ordering | Skills are extracted as an unordered set. No "top 3," "primary," "key," or any implication of ranking within the candidate's own skill list |
| 3 | No "fit" language | The output contains no words or phrases implying fit, suitability, recommendation, or match to a role. Banned words include: "fit," "match," "suitable," "qualified," "recommended," "strong candidate," "weak candidate," "good fit," "cultural fit," "role fit" |
| 4 | No implicit ranking via field order or highlighting | When multiple items are extracted (skills, roles, projects), they are presented in a deterministic, candidate-neutral order — most commonly reverse-chronological for experience and alphabetical for skills. Highlighting, bolding, or ordering that a reasonable reader interprets as "more important" is prohibited |
| 5 | No aggregated summary that reads as a recommendation | The output contains no free-text paragraph that summarizes the candidate's strengths, weaknesses, or suitability. Structured extraction only |
| 6 | No cross-candidate comparison in one response | The skill processes one resume per invocation. It NEVER emits an ordered list of candidates in a single response. If it ever does, it becomes an AEDT on that response |
| 7 | No derived "categories" | The skill does not classify the candidate as "senior," "mid-level," "entry-level," "technical," "managerial," or any other category that could substantially assist a decision. Specifically, experience_years is emitted ONLY as a raw integer (e.g., 12). It is NEVER emitted as a label like "senior," "mid," or "entry," and no other field may be derived from it that reads as a seniority classification |
| 8 | Neutral source-faithful language | Extraction uses neutral, source-faithful language. The skill does NOT add adjectives or qualitative modifiers that were not present in the source resume. Specifically banned as model-generated additions: "significant," "substantial," "extensive," "strong," "impressive," "deep," "broad," "notable," "accomplished," "proven," or any equivalent qualitative modifier. If the resume says "5 years of Python," the extraction says "5 years of Python," not "5 years of significant Python experience." This bright line is required to hold the AEDT non-classification wall under NYC DCWP's broadest semantic-intent reading: a model-inserted qualitative modifier reads as an implicit recommendation even when no numeric score is present |
Review cadence (hr-ai-governance Section 6.5): 12 months from publication → next review 2027-04-11. Out-of-cycle review on NYC DCWP enforcement guidance updates, CA CPRA ADMT final rules landing (expected 2027), or any case law flagged by @employment-counsel. Immediate review if any bright line is violated in any released version.
Deployer warning (runtime, NYC jurisdiction):
This skill outputs structured extraction only and is not classified as an AEDT under NYC Local Law 144. If your workflow uses this output to filter, route, auto-reject, or pre-rank candidates before a human review, the classification may change and you become subject to Local Law 144 obligations including annual bias audit by an independent auditor, public audit results, and 10-business-day candidate notice. Consult your employment counsel.
The deployer's acknowledgment is required before first production use per tenant and re-attested annually (hr-ai-governance Section 6.4). If re-attestation lapses, the skill refuses to run for that tenant until re-attestation is provided.
Positioning
This output is descriptive, not evaluative. Any ranking or comparison must be performed by a human reviewer. This skill is deliberately not classified as an Automated Employment Decision Tool under NYC Local Law 144.
That sentence is not decorative. It is the AEDT wall made visible, and it appears verbatim at the top of every production output. The rest of this skill exists to hold that sentence true.
Purpose
/resume-summarizer produces high-volume structured extraction of resumes: skills (unordered), education, experience with dates, certifications, languages, location (city/country only), and work authorization where the candidate volunteered it. The value proposition is structured extraction at volume for batches of 20+ resumes so human reviewers can focus their attention on the parts of the resume that matter. It is not a faster way to make hiring decisions — the hiring decisions still happen entirely in the human reviewer's workflow.
What it IS: a descriptive, non-evaluative, structured-extraction skill that runs every resume through the
hr-ai-governance proxy register, redacts or blocks protected-class proxies before any model sees them, emits an audit log record per resume, and gates every output behind a blocking HITL review.
What it is NOT: a screener, a ranker, a scorer, a candidate comparator, a fit assessor, a seniority classifier, a recommendation engine, or an Automated Employment Decision Tool. Any prompt asking this skill to rank, score, compare, classify seniority, or recommend is refused with a routing message to human review.
This skill is governed by
hr-ai-governance pack v1.0.1. Every extraction run inherits the proxy register (Section 3.2), mitigation patterns (Section 3.3), audit log schema (Section 4.1), counterfactual fairness fixture set (Section 5), AEDT non-classification memo (Section 6 — reproduced above), HITL enforcement with throughput tripwire (Section 10.2.1), and jurisdiction matrix (Section 8.1) from the pack.
When to Use
- High-volume candidate pools (20+ resumes) where structured extraction helps human reviewers triage efficiently
- Standardized candidate database ingestion from ATS imports, referral uploads, or career-page submissions
- Batch re-extraction after a schema update (e.g., new jurisdiction added to the matrix, new proxy added to the register)
- Producing a consistent structured view of resumes that were collected under different templates or conventions
When NOT to Use
- Ranking, scoring, comparison, or recommendation — ABSOLUTELY NOT. This is AEDT territory. Use a human reviewer. Any prompt asking for these is hard-refused.
- Single-resume review (1-4 resumes) — use a human reviewer directly; the high-volume framing that holds the AEDT wall breaks down at small N
- Fit assessment, "strong candidate" evaluation, culture fit, team fit — not this skill's job and not any skill's job under this governance pack
- Hiring recommendations, interview/reject suggestions, next-step advice — not emitted under any circumstances
- Executive-level searches (VP+, C-suite) where the recruiter owns the whole funnel manually — no throughput benefit and high review cost
- Referral candidates where a human recruiter already knows the context — skip the extraction, go straight to human review
- Any workflow where the deployer intends to auto-reject, auto-route, or pre-filter candidates before a human reads the raw resume — this flips AEDT classification
Required Inputs
| Input | Required | Example |
|---|---|---|
| Resume batch (file list or directory of resumes) | Yes | Directory containing 20+ PDF/DOCX/text resumes |
| Jurisdiction | Yes (no default) | US-NYC, US-IL, US-CO, US-CA, EU, UK, IL |
| Deployer attestation | Yes on first run per tenant; annually thereafter | Signed attestation file with deployer_id + acknowledgment_date per Section 4.1 |
| Role name | Optional, descriptive only | "Senior Backend Engineer" — used for structured schema mapping, NEVER for evaluation |
| HITL reviewer identity | Yes, authenticated | Supplied by deployer auth system per Section 4.2 rule 2 |
Jurisdiction is never defaulted. If the user does not supply it, the skill refuses to run. Jurisdiction determines applicable AEDT/ADMT rules, retention period, candidate-disclosure obligations, and which runtime warnings fire.
Deployer attestation is never bypassable. If the attestation is missing or lapsed, the skill refuses to run for that tenant and emits a re-attestation prompt. This is the AEDT deployer-warning enforcement mechanism (hr-ai-governance Section 6.4).
Batch size is a soft floor. For batches under 5 resumes, the skill warns and suggests human review instead. The high-volume framing (value comes from volume triage efficiency) breaks down at single-resume scale, and the AEDT wall is held in part by the volume framing.
Refusal Conditions (Hard, Non-Overridable)
The skill refuses — fully, structurally, with no override flag — under any of the following conditions:
- Prompt asks for ranking, scoring, comparison, or recommendation. Response: "This skill is descriptive, not evaluative. Use a human reviewer for ranking, scoring, comparison, or recommendations. Per NYC Local Law 144, automated ranking and scoring of candidates is regulated as an AEDT — this skill is deliberately not classified as one. Any such workflow must go through a licensed employment attorney and (in NYC) a third-party bias audit."
- Prompt asks for fit, suitability, seniority classification, strengths/weaknesses, or next-step recommendation. Same refusal, same routing message.
- GINA proxy detected in a resume (genetic information, family medical history, BRCA, Huntington's, predisposition language, genetic test results). Per hr-ai-governance Section 3.2, GINA is strict liability — redaction is insufficient because the model has already seen the data. The skill refuses processing that resume, the input is not passed to any model, and the event is logged as a GINA-adjacent exposure event for @employment-counsel review.
- Union membership or organizing activity detected in a resume ("shop steward," "union representative," "bargaining committee," "participated in organizing drive"). Per hr-ai-governance Section 3.2, NLRA Section 7 is federal strict protection — a single instance of AI-surfaced union language in an adverse employment decision is direct evidence of discrimination. The skill refuses processing that resume and routes to HR-only human handling.
- No jurisdiction provided. Refuses to run with "jurisdiction is required, no default applies" message.
- Deployer attestation missing, lapsed, or unrecognized tenant. Refuses with re-attestation prompt and pointer to hr-ai-governance Section 6.4.
- Batch size < 5 resumes. Warns and routes to human review as the more appropriate path; does not emit extraction.
- HITL reviewer identity cannot be authenticated by deployer auth system. Refuses per hr-ai-governance Section 4.2 rule 2 — no "unknown" or "system" reviewer values allowed, ever.
Refusals are logged. Refusal logging is critical because "the skill refused GINA exposure" is itself valuable audit evidence for @employment-counsel review and future third-party bias audits.
Method
Nine-step pipeline. Order matters because later steps depend on earlier ones. Every step produces audit-log input; the audit log record is the receipt the regulator sees.
Step 1 — Check deployer attestation currency
Read the deployer attestation file. Verify:
- deployer_id matches the current tenant
- acknowledgment_date is present and valid ISO 8601 UTC
- re_attestation_due_date = acknowledgment_date + 365 days
- Current date ≤ re_attestation_due_date
If any check fails → REFUSE with re-attestation prompt (see Refusal Condition 6).
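The four checks above can be sketched as a single guard function. This is a minimal sketch: the field names (deployer_id, acknowledgment_date, re_attestation_due_date) come from this document, but the function itself is hypothetical, not the skill's actual implementation.

```python
from datetime import datetime, timedelta

RE_ATTESTATION_WINDOW = timedelta(days=365)

def check_attestation(attestation: dict, tenant_id: str, now: datetime) -> tuple[bool, str]:
    """Run the four Step 1 checks in order; return (ok, reason)."""
    # Check 1: deployer_id matches the current tenant
    if attestation.get("deployer_id") != tenant_id:
        return False, "deployer_id does not match current tenant"
    # Check 2: acknowledgment_date present and valid ISO 8601 UTC
    try:
        ack = datetime.fromisoformat(attestation["acknowledgment_date"])
    except (KeyError, ValueError):
        return False, "acknowledgment_date missing or not valid ISO 8601"
    if ack.tzinfo is None:
        return False, "acknowledgment_date must be timezone-aware (UTC)"
    # Check 3: re_attestation_due_date = acknowledgment_date + 365 days
    due = ack + RE_ATTESTATION_WINDOW
    if attestation.get("re_attestation_due_date") != due.isoformat():
        return False, "re_attestation_due_date != acknowledgment_date + 365 days"
    # Check 4: current date within the window
    if now > due:
        return False, "re-attestation lapsed"
    return True, "current"
```

Any False result maps to the REFUSE path with the re-attestation prompt; the reason string feeds the refusal log entry.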
Step 2 — Check jurisdiction + load applicable rules
Read --jurisdiction. Look up the row in hr-ai-governance Section 8.1. Extract:
- Retention period for the audit log entry (Section 4.3)
- Candidate disclosure requirement (Section 9.1) — in EU, NYC, CA, and CO this is mandatory pre-use, not on-request
- Runtime warning block to emit (e.g., EU → Section 7.3 EU AI Act high-risk notice; NYC → Section 6.4 AEDT deployer warning; CO → Section 8.1 Colorado AI Act deployer notice)
- Ban-the-Box application (if any): if jurisdiction is in the 37+ US states + NYC Fair Chance Act, criminal-history redaction is pre-conditional-offer
If jurisdiction is not in the matrix → BLOCK and route to @employment-counsel per
hr-ai-governance Section 11.1.
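The lookup-or-block behavior can be sketched with an illustrative two-row matrix. The row values below are placeholders, not the pack's actual Section 8.1 table; only the "unknown jurisdiction blocks, never defaults" rule is taken from this document.

```python
# Hypothetical slice of the Section 8.1 jurisdiction matrix (values illustrative).
JURISDICTION_MATRIX = {
    "US-NYC": {"retention_days": 365 * 3, "disclosure": "pre-use",
               "warning_block": "AEDT deployer warning", "ban_the_box": True},
    "EU":     {"retention_days": 365 * 5, "disclosure": "pre-use",
               "warning_block": "EU AI Act high-risk notice", "ban_the_box": False},
}

def load_jurisdiction_rules(jurisdiction: str) -> dict:
    """Step 2: look up the matrix row. Unknown jurisdictions BLOCK — no default ever."""
    if jurisdiction not in JURISDICTION_MATRIX:
        raise ValueError(
            f"jurisdiction {jurisdiction!r} not in matrix — route to @employment-counsel"
        )
    return JURISDICTION_MATRIX[jurisdiction]
```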
Step 3 — For each resume: run proxy scanner (inherit Section 3.2)
Scan every resume against the full proxy register from
hr-ai-governance Section 3.2:
- 9 standard proxies (given name, family name, zip code, graduation year, photo, gender-coded language, date of birth, marital status, pronouns field)
- 27 non-obvious proxies (religious hobbies, IDF/military mentions, ROTC/VFW, .edu email, year-in-username email, address format, "native English" vs "fluent English," language inventory, military unit/deployment details, disability accommodations in cover letter, employment-gap explanations, visa text in cover letter, LinkedIn URL customizations, tenure-plus-founding-date combinations, headshot URLs, calendar/availability mentions, dietary preferences, salary history, "part-time/flexible/remote" preference, educational-institution ranking, "culture fit" references, criminal history / Ban the Box, caregiver status, credit history, union membership, genetic information)
For each detected proxy, record
proxies_detected entries (per Section 4.1 schema) with proxy_type, confidence, and field_path. This is audit-log input, not output.
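The scanner pass can be sketched as a sweep over a pattern register. This is a toy subset: the real register detection is richer than regex matching, so treat the patterns and the confidence value as placeholders.

```python
import re

# Illustrative subset of the Section 3.2 proxy register: proxy_type -> pattern.
PROXY_PATTERNS = {
    "date_of_birth": re.compile(r"\b(date of birth|DOB)\b", re.I),
    "marital_status": re.compile(r"\b(married|single|divorced|widowed)\b", re.I),
    "union_membership": re.compile(
        r"\b(shop steward|union representative|bargaining committee|organizing drive)\b", re.I
    ),
    "genetic_information": re.compile(
        r"\b(BRCA|Huntington|genetic test|family medical history)\b", re.I
    ),
}

def scan_proxies(resume_text: str) -> list[dict]:
    """Return proxies_detected entries (audit-log input only, never output)."""
    hits = []
    for proxy_type, pattern in PROXY_PATTERNS.items():
        if pattern.search(resume_text):
            hits.append({"proxy_type": proxy_type,
                         "confidence": "pattern-match",
                         "field_path": "raw_text"})
    return hits
```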
Step 4 — Apply proxy redaction / mitigation per pattern
Each detected proxy gets one of four treatments per
hr-ai-governance Section 3.3:
| Pattern | Applied to |
|---|---|
| Redact | Given name (before model sees it), family name (before model), DOB, marital status, pregnancy/childbirth gap explanations, caregiver language, credit history, criminal history (pre-conditional-offer in Ban-the-Box jurisdictions), dietary preferences, disability accommodation language, religious hobby terms, accommodation mentions, LinkedIn URL customizations, salary history, zip-only location, visa status text |
| Normalize | Military service → literal string "military service" without country, unit, deployment, or years in the analysis path; country retained only in the separate secured contact store. Email domain → stripped. Year-in-username → stripped. Language proficiency phrasing → flat list, no levels. Address → city + country only. |
| Flag for human review | Free-text references with "culture fit," "personality," or "attitude" language. Resume contains a photo (flag + strip). Accommodation language in cover letter routed to HR-only path. |
| Block | GINA proxies (genetic information, family medical history, BRCA, Huntington's, predisposition, genetic test results) — the resume is refused, not redacted (hr-ai-governance Section 3.2 GINA strict-liability rule). Union membership / organizing activity — the resume is refused and routed to HR-only human handling. |
Redacted and normalized values are removed from the content the model sees. Original values are retained only in the secured contact store for post-hire contact use, never in the analysis path, never in the audit log, never in the extraction output.
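The four-treatment dispatch can be sketched as a lookup table in which block always wins. The treatment assignments are abbreviated from the table above; the function name and batch-action return values are hypothetical.

```python
# Abbreviated proxy_type -> Section 3.3 treatment mapping (subset, illustrative).
TREATMENT = {
    "given_name": "redact",
    "family_name": "redact",
    "date_of_birth": "redact",
    "military_service": "normalize",
    "email_username_year": "normalize",
    "culture_fit_language": "flag",
    "photo": "flag",
    "genetic_information": "block",   # GINA: refuse, never redact
    "union_membership": "block",      # NLRA Section 7: refuse, HR-only path
}

def resume_action(proxies_detected: list[dict]) -> str:
    """Return the per-resume action: 'block' wins over everything, else 'process'."""
    if any(TREATMENT.get(p["proxy_type"]) == "block" for p in proxies_detected):
        return "block"    # the whole resume is refused before any model sees it
    return "process"      # redact/normalize/flag are applied per field, then continue
```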
Step 5 — Extract structured fields per canonical schema
The model produces extraction output conforming to the canonical schema in the "Canonical Extraction Schema" section below. Only the allowed fields are extracted. Source-faithful language only: if the resume says "5 years of Python," the extraction says "5 years of Python," never "5 years of significant Python experience" or "5+ years of Python expertise." Bright line #8 is enforced at the prompt level and verified in Step 6.
Experience is emitted in reverse-chronological order (candidate's own ordering of their roles, not the skill's imposed ordering — Bright line #4). Skills are emitted as an unordered set (alphabetical, deterministic). Education is a list in whatever order the candidate listed it, most commonly reverse-chronological.
Step 6 — Run semantic-intent validator on every field
The validator is the enforcement mechanism for Bright lines #1, #2, #3, #5, #7, and #8. It runs over every field name + value pair in the extraction output and rejects any that is semantically equivalent to a banned field, even if literally renamed. Examples of what the validator catches:
- Field candidate_quality → REJECT (semantically equivalent to a score)
- Field top_skills_ordered → REJECT (ordered + "top" = Bright line #2)
- Field experience_level: "senior" → REJECT (seniority label, Bright line #7)
- Field years_of_python: "significant" → REJECT (qualitative modifier, Bright line #8)
- Field recommended_next_step: "interview" → REJECT (recommendation, Bright line #5)
- Free-text value containing "strong candidate for" → REJECT (fit language, Bright line #3)
- Field primary_skills → REJECT (implicit ranking, Bright line #2)
The validator's refusal of a bad field does NOT downgrade to a finding — it is a structural failure. The extraction output is rejected in full, a finding is emitted ("semantic intent validator rejected field
X — reason: {bright line violated}"), and the resume is routed back through extraction with stricter prompts. If the second attempt also fails, the extraction is refused entirely and logged as a skill-defect event for @employment-counsel review.
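A minimal sketch of the validator's field-name normalization layer follows. The production validator matches semantic intent, not token lists; this only shows how renamed fields such as candidateScore or fit-score collapse onto banned tokens. The concept set is an illustrative subset.

```python
import re

# Illustrative subset of banned concepts (the full list is in Section 6.3).
BANNED_CONCEPTS = {"score", "rank", "ranking", "fit", "match", "tier", "seniority",
                   "level", "recommendation", "recommended", "top", "primary",
                   "quality", "strength", "weakness"}

def normalize_field_name(name: str) -> list[str]:
    """candidateScore / fit-score / top_skills_ordered -> lowercase word tokens."""
    name = re.sub(r"(?<=[a-z])(?=[A-Z])", "_", name)  # split camelCase at boundaries
    return [t for t in re.split(r"[^a-zA-Z]+", name.lower()) if t]

def field_name_passes(name: str) -> bool:
    """True if no token of the field name hits a banned concept."""
    return not (set(normalize_field_name(name)) & BANNED_CONCEPTS)
```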
Step 7 — Emit candidate disclosure template per resume
Per
hr-ai-governance Section 9.1, the candidate must see what the skill extracted. The output includes, per resume, the exact disclosure block the deployer's ATS is expected to surface to the candidate. The disclosure template is the same for every candidate, with the extraction output inlined. See the "Candidate Disclosure Template" section below.
Step 8 — Emit audit log entry per resume
Per
hr-ai-governance Section 4.1, every extraction run emits a full audit log record. For this skill, the per-resume log entry includes:
- run_id
- timestamp
- skill_name: "resume-summarizer"
- skill_version: "1.0.0"
- governance_pack_version: "1.0.1"
- jurisdiction
- deployer_org (tenant identifier)
- inputs_hash — SHA-256 of the canonicalized resume content after redaction (NEVER raw PII)
- input_type: "resume"
- redactions_applied — full list from Step 4
- proxies_detected — full list from Step 3
- hitl_reviewer — authenticated identity (NEVER free text)
- hitl_decision, hitl_timestamp, hitl_rationale — populated at Step 9 HITL gate
- modifications_diff — structured diff if reviewer modified the extraction
- downstream_action — set post-HITL
- adverse_impact_check — cannot_audit by default for this skill (single-resume extraction does not support adverse impact testing; the counterfactual fixture run is the fairness gate, run on skill ship + every 180 days, not per resume)
- model_provenance — model name + version used for extraction
- candidate_disclosure_confirmed — true after the deployer confirms the candidate saw the disclosure per Section 9.1
- deployer_aiact_acknowledgment — the deployer attestation object (from Step 1) with deployer_id, acknowledgment_date, re_attestation_due_date
- signoff_path — standard unless fallback invoked (hr-ai-governance Section 11.3 — note: /resume-summarizer shipping and its AEDT memo are NON-DELEGABLE under Section 11.3; fallback path does not apply)
- hitl_review_duration_seconds — populated at Step 9; feeds the throughput tripwire
- retention_expiry — computed from jurisdiction per Section 4.3
Refusal events (GINA, union, attestation lapse, semantic-intent validator failure) are also logged, with
downstream_action: "refused-routed-to-human" or "refused-deployer-attestation-lapsed" or "refused-skill-defect" as appropriate.
Step 9 — Block downstream consumption pending HITL review
Per
hr-ai-governance Section 10, HITL is a structured, blocking, logged gate. For /resume-summarizer specifically:
- Extraction output is placed in a "draft" state and is not available to the ATS, the hiring manager, or any downstream system
- The HITL reviewer (recruiter) is prompted with: the raw resume (pre-redaction), the extracted structured output, a checklist of items to verify, and a mandatory rationale field on modify/reject
- The reviewer must explicitly accept / modify / reject. No trailing "ok" button. No auto-accept.
- hitl_review_duration_seconds is logged — the wall clock between prompt display and decision
- Throughput tripwire (hr-ai-governance Section 10.2.1): any decision under the 30-second threshold for /resume-summarizer is flagged in the quarterly review regardless of aggregate rate. Sub-10-second reviews are treated as presumptive AEDT-wall-risk failures and escalated to @chro immediately, because NYC DCWP may characterize sub-10s review as non-discretionary rubber-stamping, which flips the AEDT non-classification. Additional tripwires: same reviewer > 50 accepts/day with median <30s → flagged; reviewer identity with >30% cannot_audit accepts/month → flagged for @people-analyst review of sampling strategy.
- Only after reviewer signs off does the extraction move from "draft" to "acted" state and become available downstream.
There is NO bypass. Deployer attempts to disable HITL → the skill refuses to run. Deployer attempts to programmatically auto-accept → the audit log captures the identical reviewer identity across records and the quarterly review flags the pattern as HITL hygiene failure.
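The tripwire thresholds can be sketched directly from the Section 10.2.1 numbers quoted above. The function names and return labels are hypothetical; only the 30-second, 10-second, and 50-accepts/day thresholds come from this document.

```python
from statistics import median

FLAG_THRESHOLD_S = 30       # decisions under 30s -> flagged in quarterly review
ESCALATE_THRESHOLD_S = 10   # sub-10s -> presumptive AEDT-wall risk, escalate to @chro
DAILY_ACCEPT_CAP = 50       # same reviewer > 50 accepts/day with median < 30s -> flagged

def decision_tripwire(review_seconds: float) -> str:
    """Classify a single HITL decision by its wall-clock duration."""
    if review_seconds < ESCALATE_THRESHOLD_S:
        return "escalate-chro"
    if review_seconds < FLAG_THRESHOLD_S:
        return "flag-quarterly"
    return "ok"

def reviewer_day_flagged(accept_durations: list[float]) -> bool:
    """Per-reviewer daily tripwire: volume cap combined with fast-median check."""
    return (len(accept_durations) > DAILY_ACCEPT_CAP
            and median(accept_durations) < FLAG_THRESHOLD_S)
```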
Canonical Extraction Schema
The ONLY fields this skill emits in extraction output. Anything not in this list is banned by the semantic-intent validator.
```json
{
  "candidate_id": "opaque hash (SHA-256 of canonicalized identifier; NEVER raw name/email)",
  "skills_set": ["unordered alphabetical list of skill strings from the resume"],
  "education": [
    {
      "institution": "string (name only, no ranking data)",
      "degree": "string (e.g., 'B.Sc.')",
      "field": "string (e.g., 'Computer Science')",
      "year": 2020
    }
  ],
  "experience": [
    {
      "title": "string (role title as stated)",
      "organization": "string (employer name)",
      "start_date": "ISO 8601 (YYYY-MM)",
      "end_date": "ISO 8601 (YYYY-MM) or 'present'",
      "raw_description": "string — source-faithful extraction of the role's bullet points, no adjectives added by the model"
    }
  ],
  "experience_years": 12,
  "certifications": ["unordered list of certification strings"],
  "languages": [
    {
      "language": "string",
      "self_reported_level": "string ONLY if candidate explicitly self-reported; never inferred"
    }
  ],
  "location": { "city": "string", "country": "string" },
  "work_authorization": "string ONLY if candidate explicitly stated; never inferred",
  "proxies_redacted": ["count and category only — e.g., ['given_name', 'family_name', 'zip_code', 'military_service_country']"]
}
```
Notes:
- experience_years is a raw integer (Bright line #7). NEVER a label like "senior," "mid," "entry," "junior," "staff," or "principal."
- skills_set is unordered (Bright line #2). Alphabetical sort for deterministic output.
- experience is reverse-chronological — this is the candidate's own ordering of their career history, not an imposed importance ranking, so it does not violate Bright line #4.
- work_authorization is NEVER inferred. If the candidate did not state it, the field is null.
- location is city + country only. Full address, street, neighborhood, zip code — all redacted per proxy register.
- proxies_redacted records categories and counts, NOT content. The content never leaves the secured contact store.
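The opaque-hash convention behind candidate_id and inputs_hash can be sketched as below. The canonicalization steps (NFC, lowercase, whitespace collapse) are an assumption for illustration — this file does not specify the pack's exact canonicalization.

```python
import hashlib
import unicodedata

def canonicalize(text: str) -> str:
    """Assumed canonicalization sketch: NFC normalize, lowercase, collapse whitespace."""
    return " ".join(unicodedata.normalize("NFC", text).lower().split())

def opaque_hash(value: str) -> str:
    """SHA-256 hex digest of canonicalized input — never raw name/email/PII."""
    return hashlib.sha256(canonicalize(value).encode("utf-8")).hexdigest()
```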
Banned Output Fields
The semantic-intent validator rejects any field semantically equivalent to the following (literal list from
hr-ai-governance Section 6.3, plus Phase 4A skill-specific additions):
score fit_score fit_rating match_score match_percentage candidate_score ranking rank tier category level seniority seniority_label experience_level strength strengths weakness weaknesses recommendation recommended_action suggested_action next_step interview_decision suitable_for best_match_role top_skills key_skills primary_skills core_skills summary evaluation assessment verdict notes flags concerns highlights culture_fit team_fit values_fit ranked_candidates candidate_comparison qualitative_evaluation overall_quality candidate_quality
Renaming (candidateScore, fit-score, top_skills_ordered) is also banned. The validator checks by semantic intent, not literal string match. Any field a reasonable reader would interpret as rank, score, recommendation, seniority label, or qualitative evaluation is banned regardless of internal name.
Output Structure
Every batch produces:
1. Disclaimer + UPL guardrail block (top) — see top of file, reproduced on every run
2. AEDT non-classification memo block — see above, reproduced on every run
3. Positioning statement — literal verbatim sentence
4. Batch metadata block
- Batch ID, timestamp, jurisdiction, tenant, resume count, deployer attestation status (current / lapsed / missing), re-attestation due date
5. Per-resume extraction — one canonical JSON block per resume, plus its candidate disclosure template + its audit log entry
6. Batch-level summary
- Summary counts only, no ranking: successfully-extracted count, refused count (with refusal reasons — GINA / union / semantic-intent / deployer-attestation), aggregate proxies_redacted tally across the batch (categories only, no content)
- HITL-review-pending flag: every extraction is in "draft" state until reviewer signs off
7. ## Findings section — numbered, per batch. Examples: "Finding 1: GINA proxy detected in resume #7 — routed to human review, resume not processed." "Finding 2: Semantic-intent validator rejected field candidate_quality attempted in resume #12 — retry with stricter prompt succeeded." "Finding 3: Union membership detected in resume #19 — routed to HR-only human handling per NLRA Section 7 strict protection."
8. ## Reviewer Checklist — nine items
9. ## Cannot Assess Without — what this skill deliberately does NOT opine on
10. ## Candidate Disclosure Template — the verbatim block the deployer's ATS surfaces to candidates
11. ## Quality Gates — 12 self-check items, pass/fail per batch
12. ## Annual Review Cadence — review date, triggers
13. ROI block (per roi-display.md)

Quality Gates (12 checks, per batch)
The skill runs a 12-item self-check BEFORE emitting output. If any check fails, the output does not publish — it produces a structural finding instead. More checks than other HR skills because this is first-of-type and carries the AEDT wall.
- Jurisdiction declared and matches a row in hr-ai-governance Section 8.1
- Deployer attestation current (≤ re_attestation_due_date)
- AEDT memo visible in the output header (not just referenced by link)
- Positioning statement literal — the verbatim sentence appears at the top of every run
- Zero banned fields in any extraction (semantic-intent validator pass, all 45+ banned terms checked)
- Proxy scanner ran on every resume in the batch (no skipped resumes, no partial runs)
- GINA refusal applied where triggered (strict liability — no exceptions)
- Union refusal applied where triggered (NLRA Section 7 strict protection)
- HITL gate applied — every extraction is in "draft" state, no bypass
- Audit log entry emitted per resume — including refused resumes (they get refusal log entries)
- Candidate disclosure template emitted per successfully-extracted resume
- Counterfactual harness passed on skill ship — tolerance per hr-ai-governance Section 5.3 field-type spec (structured enumerable = byte-identical; free-text summary = semantic-equivalent with Levenshtein < 0.15 after lowercase + stopword removal + whitespace collapse + punctuation strip). Harness re-runs on every model change, prompt revision, and every 180 days.
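The free-text tolerance check in gate 12 can be sketched end to end. The stopword list is an illustrative subset, and normalizing the edit distance by the longer string is an assumption — the pack's exact normalization lives in Section 5.3.

```python
import re

STOPWORDS = {"a", "an", "the", "of", "and", "or", "to", "in"}  # illustrative subset

def normalize(text: str) -> str:
    """Lowercase + punctuation strip + stopword removal + whitespace collapse."""
    words = re.sub(r"[^\w\s]", "", text.lower()).split()
    return " ".join(w for w in words if w not in STOPWORDS)

def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance (two-row version)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,            # deletion
                           cur[j - 1] + 1,         # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def within_tolerance(expected: str, actual: str, tol: float = 0.15) -> bool:
    """Free-text fields pass when normalized edit-distance ratio is under tol."""
    e, a = normalize(expected), normalize(actual)
    if not e and not a:
        return True
    return levenshtein(e, a) / max(len(e), len(a)) < tol
```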
No "express mode." The 12 checks run on every invocation.
Reviewer Checklist (9 items — before HITL gate release)
- Jurisdiction confirmed against candidate residency and role location
- Deployer attestation currency confirmed (acknowledgment_date + annual re-attestation)
- For every resume in the batch: raw resume read and compared against the extraction for accuracy
- No banned fields in any extraction (semantic-intent validator pass)
- GINA and union refusals reviewed and routed to HR-only paths
- Candidate disclosure template reviewed and confirmed suitable for ATS surfacing
- Adverse impact implications understood (single-resume runs do not support adverse impact testing; the counterfactual fixture run is the fairness gate)
- Did you spend less than 30 seconds on this review? If so, this is presumptive auto-acceptance and the AEDT wall is at risk. Sub-10-second reviews are escalated to @chro per hr-ai-governance Section 10.2.1. Stop, re-read the raw resume, compare it to the extraction, and exercise actual discretion. HITL is not discretionary review if you can't plausibly have read both in the time you took.
- Employment counsel engaged for any "Cannot Assess Without" item that applies
Cannot Assess Without Licensed Counsel or Specialist
- Human evaluation of any kind — by design. This skill is descriptive, not evaluative. Ranking, scoring, comparison, fit, strengths/weaknesses, recommendation, next-step advice all require human reviewers.
- Actual hiring context — hiring manager input, team composition, budget, timeline, backfill vs new hire context, all out of scope.
- Reference checks, interview performance, assessment results — not in the resume, not in scope.
- Pay equity implications for the candidate pool — use /comp-benchmark and its adverse impact check.
- Individual capability differentiation between two candidates with similar resumes — hiring manager judgment, out of scope.
- Seniority classification ("is this person senior?") — explicitly banned by Bright line #7.
- Any ranking, scoring, or comparison between candidates — ABSOLUTELY NOT. AEDT territory. Routes to human review, third-party bias audit, and employment counsel.
- Ban-the-Box specific jurisdictional interpretations — the skill applies a pre-conditional-offer redaction in the 37+ Ban-the-Box jurisdictions, but the specific conditional-offer timing and exception handling require employment counsel in the target jurisdiction.
- GINA exposure forensics — when the skill refuses a resume on a detected GINA proxy, the downstream forensic review (how did the proxy get into the input, what's the exposure, what's the remediation) is @employment-counsel's job, not this skill's.
Candidate Disclosure Template
Per hr-ai-governance Section 9.1, the deployer's ATS MUST surface the following block to the candidate at the point of application (pre-use notice in EU, NYC, CA, CO; on-request in other jurisdictions). The skill emits this template per successfully-extracted resume; the deployer is responsible for actually surfacing it to the candidate and confirming via candidate_disclosure_confirmed = true in the audit log.
```
NOTICE: AI-Assisted Structured Extraction

Your application may be processed using an AI-assisted structured extraction
tool that helps our hiring team review applications at scale.

WHAT THIS TOOL DOES:
- Extracts structured information from your resume: skills, education,
  experience, certifications, location, languages, and work authorization
  where you explicitly stated it.
- Redacts protected-class proxies (names, photos, date of birth, etc.)
  before a human reviewer sees the extraction.

WHAT THIS TOOL DOES NOT DO:
- It does NOT score your application.
- It does NOT rank you against other candidates.
- It does NOT evaluate your fit for the role.
- It does NOT recommend whether to interview, advance, or reject you.
- It is deliberately not classified as an Automated Employment Decision Tool
  under NYC Local Law 144.

HUMAN REVIEW:
- A human recruiter reviews every application before any decision is made.
- The recruiter sees both your raw resume and the extraction output.
- Any hiring decision is made by human reviewers based on their own
  discretionary judgment, not by this tool.

YOUR RIGHTS:
- You can request a summary of the extraction output produced from your
  resume (contact: [deployer HR address]).
- You can request correction of any extracted field that is inaccurate.
- You can request a human-only review path that bypasses the extraction
  tool (contact: [deployer HR address]).
- In EU / UK / California / Colorado, you have additional rights under
  GDPR / UK GDPR / CCPA / CPRA / Colorado AI Act. Contact the deployer's
  Data Protection Officer.

Date: {application_date}
Tool version: resume-summarizer v1.0.0
Governance pack: hr-ai-governance v1.0.1
```
The deployer is accountable for actually showing this block. The skill emits it; the ATS surfaces it; the candidate sees it; the deployer logs
candidate_disclosure_confirmed = true in the audit entry. Skipping this step → candidate_disclosure_confirmed = false, which in jurisdictions requiring pre-use notice is a blocking condition (the skill refuses downstream extraction release).
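The blocking condition can be sketched as a release gate. This is a hypothetical sketch: the function name and flat-dict audit entry are illustrative; only the `candidate_disclosure_confirmed` key comes from this document.

```python
def release_allowed(audit_entry: dict, pre_use_notice_required: bool) -> bool:
    """Return False (block downstream extraction release) when a
    pre-use-notice jurisdiction lacks confirmed candidate disclosure."""
    confirmed = audit_entry.get("candidate_disclosure_confirmed", False)
    return confirmed or not pre_use_notice_required
```

In on-request jurisdictions the gate passes regardless; in pre-use-notice jurisdictions a missing or false confirmation blocks release.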
Related Skills and Hand-off
- Upstream: a separate "deployer attestation" flow is a future skill (TBD). For v1.0.0, deployer attestation is a file-based input; the flow to collect, sign, and refresh the attestation is out of scope for this skill.
- Downstream: the human reviewer. NOT another skill. This is the whole point — the extraction hands off to a human, not to a scorer or ranker. There is no "feed into the interview decision" pipeline. The interview decision is a separate human process.
- Governance pack: hr-ai-governance v1.0.1 — proxy register, audit schema, AEDT memo, counterfactual fixture, HITL pattern, jurisdiction matrix, all inherited.
- Counterfactual harness: hr-ai-governance Section 5 specifies the 200-fixture counterfactual test. The fixture files live at Extension Teams/hr-team/reference/test-fixtures/ (scheduled engineering task per Section 5.1). This skill references the harness; it does not author or own the harness. Harness runs: skill ship, every 180 days, every model change, every prompt revision.
- Sibling HR skills (same pack): /job-description-generator (v1.0.0, shipped 2026-04-11), /interview-guide (Phase 4A), /comp-benchmark (Phase 4A). None of them score, rank, or compare candidates either — same governance pack, same AEDT wall, same disciplines.
Annual Review Cadence
Per hr-ai-governance Section 6.5 and Section 12.1:
- AEDT non-classification memo (Section 6): 12 months from publication → next review 2027-04-11
- Counterfactual fairness fixture set (Section 5): 180 days → next refresh 2026-10-08
- Jurisdiction matrix (Section 8): quarterly → next review 2026-07-11 (end of Q2 2026)
- Proxy register (Section 3): quarterly → next review 2026-07-11
- Deployer attestation per tenant: 365 days from acknowledgment_date, per each tenant individually
- Full skill review: 12 months → next review 2027-04-11
Out-of-cycle triggers (immediate review, do not wait for calendar):
- NYC DCWP enforcement guidance update on Local Law 144
- CA CPRA ADMT final rules landing (expected 2027)
- Colorado AI Act implementing guidance from the Colorado AG
- EU AI Act harmonized standards or implementing acts
- Any case law flagged by @employment-counsel touching AEDT classification, automated resume processing, or adverse impact from AI-assisted hiring
- Any adverse impact or counterfactual fairness failure in production
- Any bright line violation in any released version (immediate pull)
Owner: @chro + @employment-counsel jointly.
Pack Inheritance
This skill inherits the following from
hr-ai-governance pack v1.0.1. Every inheritance is a contract; if the pack updates, the skill re-validates against the updated pack on next run.
| Section | What the skill inherits |
|---|---|
| 3.2 non-obvious proxy register | The full 9 standard + 27 non-obvious proxy list becomes the scanner input. The skill does NOT author its own proxy register. New proxies added to Section 3.2 automatically become new scanner triggers. |
| 3.3 mitigation patterns | The `redact` / `normalize` / `flag_for_human` / `block` pattern map is inherited literally. GINA + union are `block`. Criminal history + caregiver + credit + salary history + disability accommodation + religious hobbies + address details are `redact`. Military service + email domain + year-in-username + language phrasing + address format are `normalize`. |
| 4.1 audit log schema | Every extraction and every refusal emits a log record conforming to the canonical schema, including its required fields (among them `candidate_disclosure_confirmed`) and the pack version. |
| 5 counterfactual harness | The 200-fixture counterfactual test is run on skill ship, every 180 days, every model change, every prompt revision. Tolerance is per Section 5.3 field-type spec — structured enumerable = byte-identical; free-text summary = semantic-equivalent via normalized Levenshtein distance < 0.15 of the shorter string after lowercase + stopword removal + whitespace collapse + punctuation strip. |
| 6 AEDT non-classification memo | Section 6 is reproduced verbatim in this SKILL.md's header. 8 bright lines are the enforcement contract. 12-month review cadence. Deployer warning is runtime, jurisdiction-conditional. Annual re-attestation per Section 6.4. |
| 8.1 jurisdiction matrix | Retention periods, runtime warnings, candidate disclosure obligations, Ban-the-Box application are all pulled from Section 8.1. |
| 9.1 candidate disclosure | The candidate disclosure template is inherited. The skill emits it per resume; the deployer's ATS surfaces it; `candidate_disclosure_confirmed = true` is required for downstream release in pre-use-notice jurisdictions. |
| 10.2.1 HITL throughput tripwire | <10s sub-AEDT-wall risk escalation, <30s flag, 50+ accepts/day same reviewer flag, 100% accept rate/quarter flag — all inherited as enforcement mechanisms. |
| 11.1 review cadence | First-of-type substantive review by @employment-counsel — 5 business days. This skill carries the first-of-type burden for the HR AI governance pack. The other three HR skills inherit subsequent-similar 72-hour SLA from this review. |
| 11.3 non-delegable fallback carve-out | The AEDT non-classification memo in this skill is NON-DELEGABLE. @compliance-officer fallback path does NOT apply to this skill or to Section 6 changes. If @employment-counsel is saturated, the AEDT memo waits. The skill does not ship with a fallback sign-off on Section 6. |
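The Section 5.3 free-text tolerance inherited above (normalized Levenshtein distance < 0.15 of the shorter string after lowercase + stopword removal + whitespace collapse + punctuation strip) can be sketched as follows. The stopword subset and helper names are assumptions, not the harness's actual API.

```python
import string

# Illustrative stopword subset; the pack's real list is not reproduced here.
STOPWORDS = {"a", "an", "the", "and", "or", "of", "to", "in", "for", "with"}

def normalize(text: str) -> str:
    """lowercase + punctuation strip + stopword removal + whitespace collapse."""
    text = text.lower().translate(str.maketrans("", "", string.punctuation))
    return " ".join(w for w in text.split() if w not in STOPWORDS)

def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def semantically_equivalent(a: str, b: str, tol: float = 0.15) -> bool:
    """Free-text summaries match when normalized edit distance is below
    tol of the shorter normalized string."""
    na, nb = normalize(a), normalize(b)
    shorter = min(len(na), len(nb))
    if shorter == 0:
        return na == nb
    return levenshtein(na, nb) / shorter < tol
```

Structured enumerable fields bypass this entirely: the spec requires them byte-identical.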
Delegation Patterns Available
Default: Pattern 1 Consultation
| Trigger | Spawn |
|---|---|
| Proxy register scanner logic, counterfactual harness validation, audit log emission | 📊 People Analyst (co-owner; default consultation) |
| AEDT classification edge case (e.g., new workflow pattern, new jurisdiction) | 👔 Employment Counsel |
| GINA exposure event forensics | 👔 Employment Counsel + 🔒 Privacy Counsel |
| Union detection false-positive review | 👔 Employment Counsel |
| Ban-the-Box jurisdictional interpretation | 👔 Employment Counsel |
| Counterfactual fairness failure investigation | 📊 People Analyst + 👔 Employment Counsel |
Consultations are attributed inline in the findings. Ownership of the extraction output stays with Recruiter (primary) and People Analyst (co-owner for proxy/harness/audit-log layers).
Pattern 5 Adversarial Review
Not applicable at v1.0.0. Adversarial Review (delegation-protocol.md Pattern 5) is reserved for near-final deliverables with high-stakes, uncapped exposure. An extraction output is not "near-final" in the Pattern 5 sense — it is a single-pass descriptive artifact that goes through HITL before any consumption, and Pattern 5's fresh-context role-separation is already achieved by the HITL gate.
The pack itself (hr-ai-governance) is the near-final deliverable that carries adversarial stress-testing, and it has already been through @employment-counsel review with 12 findings applied. This skill inherits that adversarial posture.
If a specific resume batch triggers novel risk (e.g., a batch from a regulated sector with unusual protected-class dimensions, or a first deployment in a new jurisdiction), the recruiter can manually escalate to
@employment-counsel for a Pattern 1 consultation BEFORE running the skill. This is manual escalation, not automatic.
ROI Framing
ROI for /resume-summarizer is reported as "time saved on drafting and triage of structured resume extraction at volume" — NEVER "time saved on resume review," "time saved on hiring decisions," or "time saved on HR review." The framing matters for AEDT wall reasons: "time saved on review" implies the tool substitutes for human evaluation, which is exactly the claim we cannot make.
HR blended rate: $150/hr per feedback_roi_rates.md.
Time-saved baseline for a batch of 25 resumes:
- Manual structured extraction of 25 resumes at ~6 minutes per resume = 2.5 hours
- Proxy redaction review at ~2 minutes per resume = 0.8 hours
- Structured schema normalization across heterogeneous formats = 0.5 hours
- Audit log entry construction per resume at ~1 minute = 0.4 hours
- Total manual baseline: ~4.2 hours per batch of 25
Complexity multipliers:
- Simple (homogeneous resume format, same jurisdiction, no GINA/union flags): 0.5× → ~2.1 hrs saved
- Standard (mixed format, single jurisdiction, occasional flags): 1.0× → ~4.2 hrs saved
- Complex (multi-jurisdiction batch, heterogeneous format, multiple flags, counterfactual re-verification): 1.5× → ~6.3 hrs saved
Example ROI line for a standard 25-resume batch:
⏱️ ~4.2 hrs saved on drafting and triage in 90s, 45k tkns ~$2.7 cost, Value ~$630
The ROI tracks ONLY the time the skill saves on drafting and triaging the structured artifact, NOT the substantive human review time. The human reviewer still reads every resume, compares it to the extraction, exercises discretion, and owns the outcome.
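The baseline-times-multiplier arithmetic above can be sketched directly; the constants come from this section, and the function name is illustrative.

```python
# Constants from this section; function name is illustrative.
HR_BLENDED_RATE = 150.0                 # $/hr per feedback_roi_rates.md
BASELINE_HOURS = 2.5 + 0.8 + 0.5 + 0.4  # manual baseline, 25-resume batch
MULTIPLIERS = {"simple": 0.5, "standard": 1.0, "complex": 1.5}

def roi_value(complexity: str) -> float:
    """Dollar value of drafting/triage time saved for one batch."""
    return BASELINE_HOURS * MULTIPLIERS[complexity] * HR_BLENDED_RATE
```

A standard batch yields 4.2 hrs x $150 = $630, matching the example ROI line; run cost is reported separately, not netted out.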
Attribution and Maintenance
Owner: 🎯 Recruiter. The skill's extraction logic, canonical schema, banned fields enforcement, AEDT wall, refusal conditions, and HITL gate integration are Recruiter's accountability.
Co-owner: 📊 People Analyst. Joint accountability on the proxy register scanner, the counterfactual harness invocation, and the audit log emission layer. Any change to the proxy scanning logic, the counterfactual tolerance spec, or the audit log schema integration requires sign-off from both owners.
Consumers:
ext-hr (HR Extension Team gateway; primary user). No downstream skill consumes /resume-summarizer — the hand-off is to human reviewers, by design. New consumers would require a frontmatter update AND a substantive re-review by @employment-counsel (not just 72-hour subsequent-similar; any new consumer potentially changes the workflow characterization and thus the AEDT wall).
Authoring: First-principles. This skill was authored from scratch during Phase 4A as the fourth and final HR skill under the
hr-ai-governance pack, and as the first-of-type for the pack's full substantive review. No vendor tool lifts — not from Textio, Gem, HireVue, Paradox, Greenhouse extract libraries, or any ATS extraction vendor. Academic citations used only as pointers: Schmidt & Hunter (1998) "The validity and utility of selection methods in personnel psychology" for the framing of why structured extraction is valuable at volume; Bertrand & Mullainathan (2004) "Are Emily and Greg More Employable Than Lakisha and Jamal?" for the proxy-bias empirical grounding — both are public academic references, not vendor content.
Dependency on the pack: The skill reads from
hr-ai-governance at every invocation. When Section 3.2 proxy register, Section 4.1 audit log schema, Section 5 counterfactual harness, Section 6 AEDT memo, Section 8.1 jurisdiction matrix, Section 9.1 candidate disclosure, or Section 10.2.1 HITL tripwire is updated, the skill picks up the new content on the next run. The pack version is recorded in every audit log entry.
Updates: Via the two-pass publication gate defined in
sensitive-skill-guardrails.md Section 4.
- Pass 1 scaffolding check: 📋 Director of HR, 15 minutes, binary GO / REWORK.
- Pass 2 substantive check: 👔 Employment Counsel, 5 business days first-of-type SLA for v1.0.0 (the skill and the pack carry first-of-type status together for HR AI governance). Subsequent skill revisions drop to 72-hour subsequent-similar SLA, EXCEPT any change to the AEDT non-classification memo (Section 6), the bright lines, the banned field list, the refusal conditions, or the HITL throughput tripwire — those revert to first-of-type 5-business-day SLA regardless of skill version.
- Non-delegable: The AEDT non-classification memo is NON-DELEGABLE per hr-ai-governance Section 11.3. @compliance-officer fallback path does NOT apply.
Minor edits (typos, formatting, documentation links) can bypass Pass 2. Any edit touching: the 8 bright lines, the banned field list, the 6 refusal conditions, the HITL gate, the counterfactual harness tolerance, the candidate disclosure template, the deployer attestation logic, or the AEDT memo language — requires a full first-of-type Pass 2 substantive review by Employment Counsel.
Changelog: Maintained at the bottom of this file.
Example Invocation (standard path)
```
User: /resume-summarizer --batch ./candidate-batch-2026-Q2/ --jurisdiction US-NYC --role "Senior Backend Engineer" --deployer-attestation ./attestations/legionis-2026.json

/resume-summarizer v1.0.0 — loading:
- Batch: 25 resumes from ./candidate-batch-2026-Q2/
- Jurisdiction: US-NYC (AEDT deployer warning active; 1-year retention)
- Deployer attestation: legionis-2026.json
  - acknowledgment_date: 2026-01-15
  - re_attestation_due_date: 2027-01-15
  - STATUS: current (as of 2026-04-11)
- Governance pack: hr-ai-governance v1.0.1
  - Scanner source: Section 3.2 (9 standard + 27 non-obvious proxies)
  - Counterfactual harness: last run 2026-04-01 (PASS; next 2026-10-08)
- Role context: "Senior Backend Engineer" (descriptive only; not used for evaluation)

Running 9-step pipeline:
Step 1: Deployer attestation: CURRENT (valid until 2027-01-15)
Step 2: US-NYC rules loaded (AEDT deployer warning, 1-yr retention, Ban-the-Box pre-conditional-offer redaction active per NYC Fair Chance Act)
Step 3: Proxy scanner ran on 25 resumes; 187 proxies detected total
Step 4: Redaction/mitigation applied:
  - redact: 42 (given_name, family_name, zip, photo, DOB, salary_history, caregiver_lang, criminal_history)
  - normalize: 31 (military_service, email_domain, year_in_username, language_phrasing, address_format)
  - flag_for_human: 6 (culture-fit references in references section)
  - block (GINA): 1 resume → REFUSED, routed to HR-only human review
  - block (union): 1 resume → REFUSED, routed to HR-only human handling per NLRA Sec 7
Step 5: Canonical extraction on 23 remaining resumes
Step 6: Semantic-intent validator pass
  - 22 extractions: clean
  - 1 extraction: field `skills_primary` rejected; retry with stricter prompt → clean
Step 7: Candidate disclosure template emitted for 23 extractions
Step 8: Audit log entries emitted for 25 (23 extractions + 2 refusals)
Step 9: All 23 extractions in DRAFT state, HITL gate blocking downstream

12/12 quality gates passed.
```
HITL review required before downstream consumption. Reviewer will see raw resume + extraction side-by-side with mandatory rationale field. Sub-30s reviews will be flagged; sub-10s reviews will be escalated to @chro.

Producing batch output at: ./candidate-batch-2026-Q2/_resume-summarizer-output-2026-04-11.md

⏱️ ~4.2 hrs saved on drafting and triage in 90s, 45k tkns ~$2.7 cost, Value ~$630
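The control flow of the transcript above can be compressed into a hypothetical sketch. `scan` and `extract` are stand-ins for the pack's real scanner and extractor, and the dict shapes are illustrative; only the step ordering and the block-before-extract rule come from this document.

```python
# Hypothetical sketch of the 9-step pipeline's control flow.
def run_batch(resumes, attestation_current, jurisdiction_rules, scan, extract):
    if not attestation_current:                        # Step 1: attestation gate
        raise RuntimeError("deployer attestation stale: refuse batch")
    drafts, refusals = [], []
    for resume in resumes:
        hits = scan(resume, jurisdiction_rules)        # Steps 2-3: rules + proxy scan
        if any(h["action"] == "block" for h in hits):  # Step 4: GINA/union refusal
            refusals.append(resume)                    # refused before any extraction
            continue
        # Steps 5-9: extract, validate, disclose, log; output stays DRAFT
        drafts.append({"state": "DRAFT", "extraction": extract(resume, hits)})
    return drafts, refusals
```

The key invariant: a `block` hit short-circuits before extraction ever runs, and nothing leaves DRAFT without the HITL gate.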
Changelog
- 1.0.0 (2026-04-11) — Initial authoring. First-of-type HR skill under the hr-ai-governance pack for full substantive review. Fourth and final Phase 4A HR skill. Authored from first principles by 🎯 Recruiter (primary) and 📊 People Analyst (co-owner for proxy register, counterfactual harness, and audit log emission layers). No vendor tool lifts. Academic citations: Schmidt & Hunter (1998) for volume-extraction value framing; Bertrand & Mullainathan (2004) for proxy-bias empirical grounding. AEDT non-classification memo embedded verbatim from hr-ai-governance Section 6.2. Banned field list extends hr-ai-governance Section 6.3 with Phase 4A additions (culture_fit, team_fit, values_fit, qualitative_evaluation, overall_quality, candidate_quality, seniority_label, experience_level). Counterfactual harness tolerance per Section 5.3 F7 field-type spec. HITL throughput tripwire per Section 10.2.1. Birth test at Legionis/Product/resume-summarizer-birth-test-2026-04-11.md exercises GINA refusal, union refusal, IDF proxy normalization (no-country rule), Ban-the-Box criminal-history redaction, and semantic-intent validator catch of a planted top_skills_ordered bad field. Ready for @employment-counsel 5-business-day first-of-type substantive review. Scaffolding pass by 📋 Director of HR required before substantive review.