claude-skill-registry-data · math-review

Install
source · Clone the upstream repo
git clone https://github.com/majiayu000/claude-skill-registry-data
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/majiayu000/claude-skill-registry-data "$T" && mkdir -p ~/.claude/skills && cp -r "$T/data/math-review" ~/.claude/skills/majiayu000-claude-skill-registry-data-math-review && rm -rf "$T"
manifest: data/math-review/SKILL.md
Mathematical Algorithm Review

Intensive analysis ensuring numerical stability and alignment with standards.

Quick Start

/math-review

Verification: Run the command with the --help flag to verify availability.

When to Use

  • Changes to mathematical models or algorithms
  • Statistical routines or probabilistic logic
  • Numerical integration or optimization
  • Scientific computing code
  • ML/AI model implementations
  • Safety-critical calculations

Required TodoWrite Items

  1. math-review:context-synced
  2. math-review:requirements-mapped
  3. math-review:derivations-verified
  4. math-review:stability-assessed
  5. math-review:evidence-logged

Core Workflow

1. Context Sync

pwd && git status -sb && git diff --stat origin/main..HEAD

Verification: Run git status to confirm the working tree state. Enumerate math-heavy files (source, tests, docs, notebooks) and classify risk: safety-critical, financial, ML fairness.

2. Requirements Mapping

Translate requirements → mathematical invariants. Document pre/post conditions, conservation laws, bounds. Load:

modules/requirements-mapping.md
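Invariants from this step can be encoded directly as runtime pre/post-condition checks. A minimal sketch in Python (the softmax example and its tolerances are illustrative assumptions, not part of the skill):

```python
import math

def softmax(xs):
    """Stable softmax; postconditions: outputs lie in [0, 1] and sum to 1."""
    m = max(xs)                                 # precondition: xs is non-empty
    exps = [math.exp(x - m) for x in xs]        # shift by max to avoid overflow
    total = sum(exps)
    out = [e / total for e in exps]
    # Postconditions: the documented invariants, enforced at runtime
    assert all(0.0 <= p <= 1.0 for p in out), "range invariant violated"
    assert math.isclose(sum(out), 1.0, rel_tol=1e-12), "normalization violated"
    return out
```

Checks like these double as executable documentation of the conservation laws and bounds the review must verify.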

3. Derivation Verification

Re-derive formulas using CAS. Challenge approximations. Cite authoritative standards (NASA-STD-7009, ASME VVUQ). Load:

modules/derivation-verification.md
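Even without a full CAS, spot-checks of a derivation can be automated. A sketch (the closed form and the approximation bound are standard textbook identities, chosen here only for illustration):

```python
import math

# Check a closed form against brute force: sum_{k=1}^{n} k^2 = n(n+1)(2n+1)/6
for n in range(1, 200):
    assert sum(k * k for k in range(1, n + 1)) == n * (n + 1) * (2 * n + 1) // 6

# Challenge an approximation: sin(x) ~= x has truncation error <= |x|^3 / 6
# (alternating Taylor series: the error is bounded by the first omitted term)
for x in (0.5, 0.1, 0.01, 0.001):
    assert abs(math.sin(x) - x) <= x ** 3 / 6
```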

4. Stability Assessment

Evaluate conditioning, precision, scaling, randomness. Compare complexity. Quantify uncertainty. Load:

modules/numerical-stability.md
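Catastrophic cancellation, one of the failure modes this step screens for, can be demonstrated concretely. A sketch (the coefficients are contrived to trigger the effect):

```python
import math

def quadratic_roots_stable(a, b, c):
    """Roots of ax^2 + bx + c = 0, avoiding subtraction of nearly equal terms."""
    d = math.sqrt(b * b - 4.0 * a * c)
    q = -0.5 * (b + math.copysign(d, b))   # sign choice prevents cancellation
    return q / a, c / q                    # second root via Vieta: x1 * x2 = c/a

a, b, c = 1.0, 1e9, 1.0                    # b*b dwarfs 4ac, so sqrt(d) rounds to b
naive_small = (-b + math.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)
stable_big, stable_small = quadratic_roots_stable(a, b, c)
```

The naive formula cancels to 0.0 for the small root here; the reformulated version recovers it at roughly -1e-9.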

5. Evidence Logging

pytest tests/math/ --benchmark
jupyter nbconvert --execute derivation.ipynb

Verification: Run pytest -v tests/math/ to verify results. Log deviations and recommend one of: Approve / Approve with actions / Block. Load:
modules/testing-strategies.md
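A reproducibility test of the kind pytest would collect from tests/math/ can look like this; a sketch (the Monte Carlo example and its tolerance are illustrative assumptions):

```python
import random

def estimate_pi(n, seed):
    """Monte Carlo estimate of pi; seeded so benchmarks are reproducible."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n)
               if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4.0 * hits / n

def test_monte_carlo_pi():
    a = estimate_pi(100_000, seed=42)
    b = estimate_pi(100_000, seed=42)
    assert a == b                       # same seed => bit-identical result
    assert abs(a - 3.14159265) < 0.05   # within Monte Carlo tolerance
```

Logging the seed alongside the result is what makes a benchmark deviation diagnosable later.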

Progressive Loading

Default (200 tokens): Core workflow, checklists
+Requirements (+300 tokens): Invariants, pre/post conditions, coverage analysis
+Derivation (+350 tokens): CAS verification, standards, citations
+Stability (+400 tokens): Numerical properties, precision, complexity
+Testing (+350 tokens): Edge cases, benchmarks, reproducibility

Total with all modules: ~1600 tokens

Essential Checklist

Correctness: Formulas match spec | Edge cases handled | Units consistent | Domain enforced
Stability: Condition number OK | Precision sufficient | No cancellation | Overflow prevented
Verification: Derivations documented | References cited | Tests cover invariants | Benchmarks reproducible
Documentation: Assumptions stated | Limitations documented | Error bounds specified | References linked
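The "Condition number OK" item can be checked cheaply for small systems. A hand-rolled sketch for the 2x2 case (infinity norm; illustrative only, real code would use a linear-algebra library):

```python
def cond_inf_2x2(a, b, c, d):
    """Infinity-norm condition number of the 2x2 matrix [[a, b], [c, d]]."""
    det = a * d - b * c
    if det == 0:
        return float("inf")            # singular: unboundedly ill-conditioned
    norm_a = max(abs(a) + abs(b), abs(c) + abs(d))
    # Inverse of [[a, b], [c, d]] is (1/det) * [[d, -b], [-c, a]]
    norm_inv = max(abs(d) + abs(b), abs(c) + abs(a)) / abs(det)
    return norm_a * norm_inv

well = cond_inf_2x2(2.0, 0.0, 0.0, 1.0)     # diagonal system: cond = 2
ill = cond_inf_2x2(1.0, 1.0, 1.0, 1.0001)   # nearly singular: cond >> 1
```

A large condition number means tiny input perturbations can produce large solution changes, which is exactly the risk this checklist item flags.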

Output Format

## Summary
[Brief findings]

## Context
Files | Risk classification | Standards

## Requirements Analysis
| Invariant | Verified | Evidence |

## Derivation Review
[Status and conflicts]

## Stability Analysis
Condition number | Precision | Risks

## Issues
[M1] [Title]: Location | Issue | Fix

## Recommendation
Approve / Approve with actions / Block


Exit Criteria

  • Context synced, requirements mapped, derivations verified, stability assessed, evidence logged with citations

Troubleshooting

Common Issues

Command not found: Ensure all dependencies are installed and on PATH.

Permission errors: Check file permissions and run with appropriate privileges.

Unexpected behavior: Enable verbose logging with the --verbose flag.