Claude-Skills legal-red-team

install

source · Clone the upstream repo

```shell
git clone https://github.com/borghei/Claude-Skills
```

Claude Code · Install into ~/.claude/skills/

```shell
T=$(mktemp -d) && git clone --depth=1 https://github.com/borghei/Claude-Skills "$T" && mkdir -p ~/.claude/skills && cp -r "$T/legal/legal-red-team" ~/.claude/skills/borghei-claude-skills-legal-red-team && rm -rf "$T"
```

manifest: legal/legal-red-team/SKILL.md
source content

⚠️ EXPERIMENTAL — This skill is provided for educational and informational purposes only. It does NOT constitute legal advice. All responsibility for usage rests with the user. Consult qualified legal professionals before acting on any output.

Legal Red Team

Production-ready adversarial verification framework for AI-generated legal content. Covers factual accuracy, citation validation, arithmetic checking, speculation detection, and distribution readiness scoring.



Verification Categories

Every AI-generated legal document must be checked across 6 categories.

| # | Category | What to Check | Red Flags |
|---|----------|---------------|-----------|
| 1 | Factual Accuracy | Dates, references, numbers, entity names, timelines | Wrong effective dates, confused entity names, incorrect amounts |
| 2 | Legal Authority Citations | Primary/secondary sources, format, hierarchy, currency | Non-existent articles, wrong section numbers, outdated citations |
| 3 | Arithmetic Validation | Timelines, percentages, financial calculations, deadlines | Date math errors, percentage miscalculations, compounding mistakes |
| 4 | Source Verification | Verifiable claims, official sources, cross-referencing | Unverifiable assertions stated as fact, single-source claims |
| 5 | Speculation Detection | Opinion vs fact, uncertainty language, predictive claims | Predictions stated as certainty, guidance treated as binding law |
| 6 | Disclaimer Adequacy | Legal advice disclaimers, jurisdiction, date, professional consultation | Missing disclaimers, overly broad claims, no jurisdiction limits |

Tools

Legal Fact Checker

Scans legal text for verifiable claims and flags potential hallucination patterns.

```shell
# Check a legal document
python scripts/legal_fact_checker.py --input document.txt

# Check with JSON output
python scripts/legal_fact_checker.py --input memo.txt --json

# Check inline text
python scripts/legal_fact_checker.py --text "Under GDPR Article 83(5), fines can reach EUR 20 million..."

# Save verification report
python scripts/legal_fact_checker.py --input document.txt --output report.json
```

Legal Quality Scorer

Scores legal document quality across all 6 verification categories.

```shell
# Score a document
python scripts/legal_quality_scorer.py --input document.txt

# Score with JSON output
python scripts/legal_quality_scorer.py --input document.txt --json

# Score with detailed breakdown
python scripts/legal_quality_scorer.py --input document.txt --verbose

# Save quality assessment
python scripts/legal_quality_scorer.py --input document.txt --output assessment.json
```

Six-Step Methodology

Step 1: Initial Review

Read the entire document with an adversarial mindset. For each claim, ask:

  • Is this verifiable?
  • Does this sound too specific to be generated without a source?
  • Does this sound too confident for an uncertain area?

Mark every factual assertion, citation, date, number, and predictive statement.

Step 2: Source Verification (ALWAYS Web Search)

For every verifiable claim, attempt to verify against official sources.

| Source Type | Verification Method | Examples |
|-------------|---------------------|----------|
| EU legislation | EUR-Lex official database | eur-lex.europa.eu |
| US federal law | congress.gov, govinfo.gov | Official code and statutes |
| US regulations | eCFR, Federal Register | ecfr.gov |
| UK legislation | legislation.gov.uk | Official statute database |
| Court decisions | Court databases, Westlaw, LexisNexis | Official reporters |
| Agency guidance | Agency official website | Direct download from .gov/.europa.eu |
| International treaties | UN Treaty Collection | treaties.un.org |

Rule: If a claim cannot be verified from an official source, flag it. Do not assume accuracy.
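The routing in the table above can be sketched as a simple lookup, where unknown source types fail loudly rather than silently passing. The `OFFICIAL_SOURCES` keys and the `official_source_for` helper are illustrative names, not part of the skill's scripts:

```python
# Map each source type to its official verification database (from the table above).
# Key names are illustrative; extend per jurisdiction as needed.
OFFICIAL_SOURCES = {
    "eu_legislation": "https://eur-lex.europa.eu",
    "us_federal_law": "https://www.congress.gov",
    "us_regulations": "https://www.ecfr.gov",
    "uk_legislation": "https://www.legislation.gov.uk",
    "international_treaties": "https://treaties.un.org",
}

def official_source_for(source_type: str) -> str:
    """Return the official database to check, or raise so the claim gets flagged."""
    try:
        return OFFICIAL_SOURCES[source_type]
    except KeyError:
        raise ValueError(f"No official source mapped for {source_type!r}; flag the claim")
```

Raising on unmapped source types enforces the rule above: anything without an official verification path is flagged, never assumed accurate.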

Step 3: Arithmetic Verification

Check every calculation, date computation, and numerical claim.

| Check Type | Method |
|------------|--------|
| Timeline calculations | Count days/months/years between stated dates |
| Percentage calculations | Recalculate from base figures |
| Financial computations | Verify arithmetic and compounding |
| Deadline calculations | Confirm against statutory text |
| Penalty ranges | Cross-check against statute |
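Timeline and deadline recounts need nothing beyond the standard library; the point is to recompute independently rather than trust the document's arithmetic. A minimal sketch (the GDPR dates used in the example are the regulation's actual entry-into-force and application dates):

```python
from datetime import date, timedelta

def days_between(start: date, end: date) -> int:
    """Independently recount a claimed timeline; never trust the document's math."""
    return (end - start).days

def deadline(effective: date, days: int) -> date:
    """Compute a statutory deadline as effective date + N calendar days."""
    return effective + timedelta(days=days)

# Example: GDPR entered into force 2016-05-24 and applied from 2018-05-25.
# A document claiming "exactly 24 months" between the two would be flagged:
# the recount gives 731 days, i.e. two years and one day.
actual_days = days_between(date(2016, 5, 24), date(2018, 5, 25))
```

Note that statutory deadlines are often business days or anchored to month boundaries, so confirm the counting rule against the statutory text before recomputing.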

Step 4: Citation Validation

For every legal citation, verify:

| Element | Check |
|---------|-------|
| Source exists | Does the cited statute/article/section actually exist? |
| Content matches | Does the cited provision say what the document claims? |
| Citation format | Is the citation in correct format for the jurisdiction? |
| Currency | Is this the current, in-force version? |
| Hierarchy correct | Is the source characterized at the right authority level? |

Step 5: Speculation Identification

Distinguish fact from opinion, certainty from prediction.

| Language Pattern | Classification | Action |
|------------------|----------------|--------|
| "The law requires..." | Factual claim | Verify against statutory text |
| "Courts will likely..." | Speculation | Flag; add uncertainty qualifier |
| "It is recommended..." | Guidance | Verify source; clarify if binding |
| "Best practice suggests..." | Opinion | Label as opinion; cite source |
| "This means that..." | Interpretation | Flag if stated as fact without authority |
| "Companies must..." | Obligation claim | Verify statutory basis |

Step 6: Disclaimer Review

Every AI-generated legal document must include:

| Required Element | Description |
|------------------|-------------|
| Not legal advice | Clear statement that content is informational only |
| Jurisdiction limitations | Which jurisdictions are and are not covered |
| Date of preparation | When the content was prepared (law changes) |
| Professional consultation | Recommendation to consult qualified legal counsel |
| AI-generated disclosure | Statement that content was generated or assisted by AI |
| Accuracy limitations | Acknowledgment that verification is recommended |
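Presence of the six elements can be screened mechanically before a human judges adequacy. The keyword cues below are illustrative heuristics and would need tuning against real disclaimers; absence of a cue is a flag, not proof the element is missing:

```python
# Keyword heuristics for the six required disclaimer elements (illustrative).
REQUIRED_ELEMENTS = {
    "not_legal_advice": ["not legal advice", "informational purposes"],
    "jurisdiction": ["jurisdiction"],
    "date_of_preparation": ["prepared on", "as of"],
    "professional_consultation": ["consult", "legal counsel", "attorney"],
    "ai_disclosure": ["ai-generated", "assisted by ai"],
    "accuracy_limitations": ["verify", "accuracy"],
}

def missing_disclaimers(text: str) -> list[str]:
    """Return the names of required disclaimer elements not detected in the text."""
    lowered = text.lower()
    return [name for name, cues in REQUIRED_ELEMENTS.items()
            if not any(cue in lowered for cue in cues)]
```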

Severity Taxonomy

| Severity | Definition | Examples | Action |
|----------|------------|----------|--------|
| CRITICAL | Factually wrong in a way that could cause legal harm | Wrong article number creating false obligation, incorrect penalty amount, non-existent legal requirement | Must fix before any distribution |
| HIGH | Materially misleading or unverifiable | Guidance stated as binding law, unverifiable timeline, confident but unsourced claim | Must fix or add prominent caveat |
| MODERATE | Imprecise or potentially confusing | Ambiguous language, minor date discrepancy, incomplete citation | Should fix; acceptable with caveat |
| LOW | Style or formatting issue | Citation format inconsistency, missing cross-reference, minor redundancy | Fix if time permits |

Quality Score

| Score | Rating | Distribution Status | Criteria |
|-------|--------|---------------------|----------|
| 5/5 | Distribution Ready | Safe to distribute | Zero CRITICAL/HIGH issues; all citations verified; disclaimers complete |
| 4/5 | Minor Revisions | Safe after small fixes | Zero CRITICAL; 1-2 HIGH issues with clear fixes; most citations verified |
| 3/5 | Moderate Revisions | Needs work before distribution | Zero CRITICAL; 3+ HIGH issues; some unverified citations |
| 2/5 | Major Revisions | Not safe to distribute | 1+ CRITICAL issues; multiple HIGH issues; significant unverified content |
| 1/5 | Not Distribution Ready | Requires complete rework | Multiple CRITICAL issues; pervasive inaccuracies; unreliable throughout |
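The numeric gate in the rubric reduces to CRITICAL and HIGH counts. A sketch of that mapping, under the assumption that MODERATE/LOW issues shape the narrative assessment rather than the score itself (the real scorer may weight categories differently):

```python
def quality_score(critical: int, high: int) -> int:
    """Map CRITICAL/HIGH issue counts onto the 1-5 rubric above."""
    if critical >= 2:
        return 1  # multiple CRITICAL: complete rework
    if critical == 1:
        return 2  # any CRITICAL: not safe to distribute
    if high >= 3:
        return 3  # moderate revisions needed
    if high >= 1:
        return 4  # 1-2 HIGH issues with clear fixes
    return 5      # zero CRITICAL/HIGH: distribution ready
```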

Known Hallucination Patterns

AI models exhibit 5 recurring patterns when generating legal content.

| # | Pattern | Description | Detection Technique |
|---|---------|-------------|---------------------|
| 1 | Plausible but wrong article numbers | AI generates article/section numbers that sound correct but do not exist (e.g., "Article 42(5)" when only 42(1)-(4) exist) | Cross-reference every article number against official statute text |
| 2 | Confident but incorrect dates | Implementation timelines, effective dates, or deadlines stated with false confidence (off by weeks or months) | Verify every date against official timeline from the statute or implementing body |
| 3 | Mixing guidance and legal requirements | Treating non-binding recommendations as binding obligations (e.g., stating ENISA recommendations as NIS2 requirements) | Check whether cited source is binding legislation vs guidance; verify authority level |
| 4 | Outdated legal references | Citing superseded or repealed provisions without noting they are no longer in force | Verify currency of every cited provision; check for amendments and repeals |
| 5 | Arithmetic errors in timeline calculations | Miscounting days, months, or years between dates; wrong deadline calculations | Independently calculate every timeline; do not trust AI date math |

See references/hallucination_patterns.md for detailed examples and prevention strategies.


Reference Guides

| Guide | Path | Description |
|-------|------|-------------|
| Verification Methodology | references/verification_methodology.md | Complete 6-step methodology with source hierarchy and citation formats |
| Hallucination Patterns | references/hallucination_patterns.md | 5 patterns with examples, detection, and prevention strategies |

Workflows

Workflow 1: Full Adversarial Review

  1. Run scripts/legal_fact_checker.py on the document.
  2. Review flagged items and verify each against official sources.
  3. Run scripts/legal_quality_scorer.py for category scores.
  4. For each CRITICAL/HIGH finding, document: the error, the correct information, and the source.
  5. Produce a verification report with findings by severity.
  6. Assign a quality score and distribution readiness assessment.
  7. Validation: every verifiable claim checked, score assigned, recommendations provided.

Workflow 2: Quick Citation Check

  1. Run scripts/legal_fact_checker.py on the document.
  2. Focus on the citation extraction results.
  3. Verify each extracted citation against its official source.
  4. Flag any citation that cannot be verified.
  5. Validation: all citations verified or flagged.

Workflow 3: Pre-Distribution Gate

  1. Run scripts/legal_quality_scorer.py on the final document.
  2. Review the composite score.
  3. If the score is below 4/5, the document must not be distributed.
  4. If the score is 4/5 or higher, verify that the CRITICAL count is zero.
  5. Confirm all disclaimers are present and adequate.
  6. Validation: quality score >= 4/5, zero CRITICAL issues, disclaimers complete.
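The gate conditions above can be encoded as a single go/no-go check that also reports why a document was blocked. The function name and return shape are illustrative, not part of the skill's scripts:

```python
def distribution_gate(score: int, critical_count: int,
                      disclaimers_complete: bool) -> tuple[bool, list[str]]:
    """Go/no-go decision for Workflow 3: score >= 4/5, zero CRITICAL, disclaimers present.

    Returns (go, reasons); reasons is empty when the document passes.
    """
    reasons = []
    if score < 4:
        reasons.append(f"quality score {score}/5 is below the 4/5 threshold")
    if critical_count > 0:
        reasons.append(f"{critical_count} CRITICAL issue(s) unresolved")
    if not disclaimers_complete:
        reasons.append("required disclaimers incomplete")
    return (not reasons, reasons)
```

Returning the blocking reasons makes the no-go decision auditable, which matters when a reviewer must document risk acceptance.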

Troubleshooting

| Problem | Likely Cause | Resolution |
|---------|--------------|------------|
| Too many false positives | Regex patterns matching non-legal text | Narrow input to legal content only; use context-aware review |
| Cannot verify citation | Source not freely accessible | Note as "unverifiable from public sources"; do not assume correct |
| AI-generated text has no citations | Content is entirely unsourced | Flag entire document as unverified; score as 2/5 or lower |
| Hallucination pattern detected | AI confabulation of legal details | Replace with verified information from official source |
| Document mixes jurisdictions | No clear jurisdiction scope | Flag as HIGH; recommend splitting by jurisdiction |
| Quality score seems too high | Automated scoring has limits | Always supplement automated scoring with manual review |

Success Criteria

| Criterion | Target |
|-----------|--------|
| Citations verified | 100% of legal citations checked against official sources |
| Hallucination patterns scanned | All 5 known patterns checked |
| Arithmetic validated | Every calculation independently verified |
| Severity assigned | Every finding classified CRITICAL/HIGH/MODERATE/LOW |
| Quality score calculated | Composite score with per-category breakdown |
| Disclaimers verified | All 6 required disclaimer elements present |
| Distribution decision | Clear go/no-go recommendation with rationale |

Scope & Limitations

In scope: Verifying factual claims in legal text, validating citations, detecting hallucination patterns, scoring document quality, assessing distribution readiness.

Out of scope: Verifying legal conclusions or interpretations, assessing litigation strategy, replacing professional legal review, accessing paid legal databases (Westlaw, LexisNexis).

Disclaimer: This skill provides a structured adversarial verification methodology. It catches common AI errors but cannot guarantee complete accuracy. Professional legal review remains essential for high-stakes documents.


Anti-Patterns

| Anti-Pattern | Why It Fails | Better Approach |
|--------------|--------------|-----------------|
| Trusting AI-generated citations without verification | AI models routinely generate plausible but non-existent legal citations; unverified citations in distributed documents create serious credibility and legal risk | Verify every citation against official sources; assume wrong until proven right |
| Relying solely on automated checking | Automated tools catch patterns but miss contextual errors, mischaracterizations, and subtle hallucinations | Use automated tools for a first pass, then conduct manual review of all flagged items and a sample of unflagged items |
| Skipping the "adversarial mindset" | Confirmation bias leads reviewers to accept plausible-sounding content; legal text that "sounds right" may still be wrong | Actively seek to disprove every claim; assume error until verified; question every specific number, date, and citation |
| Distributing with score 3/5 or below | MODERATE and HIGH issues in distributed documents undermine credibility and may cause legal harm | Set a firm distribution threshold at 4/5; no exceptions without documented risk acceptance by a qualified reviewer |

Tool Reference

| Tool | Input | Output | Use Case |
|------|-------|--------|----------|
| legal_fact_checker.py | Legal document text | Verification report with flagged claims, citations, dates, hallucination alerts | First-pass automated scanning of legal content |
| legal_quality_scorer.py | Legal document text | Quality score (1-5) with per-category breakdown and severity-classified findings | Pre-distribution quality gate |