Babysitter floating-point-analysis

Rigorous floating-point error analysis

Install

Source · clone the upstream repo:

    git clone https://github.com/a5c-ai/babysitter

Claude Code · install into ~/.claude/skills/:

    T=$(mktemp -d) && git clone --depth=1 https://github.com/a5c-ai/babysitter "$T" && mkdir -p ~/.claude/skills && cp -r "$T/library/specializations/domains/science/mathematics/skills/floating-point-analysis" ~/.claude/skills/a5c-ai-babysitter-floating-point-analysis && rm -rf "$T"

Manifest: library/specializations/domains/science/mathematics/skills/floating-point-analysis/SKILL.md
Source content

Floating-Point Analysis

Purpose

Provides rigorous floating-point error analysis for verifying numerical algorithms and assessing the accuracy of computed results.
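
As a small illustration of accuracy assessment (not part of the skill manifest itself), the relative condition number of summation predicts when no algorithm can return an accurate double-precision result; the input values below are purely illustrative.

```python
from fractions import Fraction

# Illustrative example: the relative condition number of summation,
# kappa = sum(|x_i|) / |sum(x_i)|, measures how ill-posed the sum is.
xs = [1.0, 1e16, -1e16]

naive = sum(xs)                               # 0.0: the 1.0 is absorbed into 1e16
exact = sum(Fraction(v) for v in xs)          # exact rational sum = 1

kappa = sum(abs(Fraction(v)) for v in xs) / abs(exact)
print(naive, float(kappa))                    # 0.0 2e+16

# kappa ~ 2e16 ~ 1/u for binary64, so losing all 16 significant digits
# in the naive sum is exactly what the condition number predicts.
assert naive == 0.0 and exact == 1
```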

Capabilities

  • IEEE 754 arithmetic modeling
  • Roundoff error accumulation tracking
  • Interval arithmetic computation
  • Arbitrary precision arithmetic
  • Numerical condition number computation
  • Error bound derivation
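
A minimal stdlib-only sketch of roundoff accumulation and an a priori error bound: repeated addition of the unrepresentable constant 0.1 drifts away from the exact sum of the stored values, and the standard first-order bound for recursive summation covers the drift. The numbers are illustrative, not from the skill itself.

```python
import math
from fractions import Fraction

u = Fraction(2) ** -53          # unit roundoff for IEEE 754 binary64

# fl(0.1) is not 1/10, and each of the nine additions below rounds again.
total = 0.0
for _ in range(10):
    total += 0.1
print(total)                    # 0.9999999999999999

# Exact sum of the ten *stored* values, via exact rational arithmetic.
exact = 10 * Fraction(0.1)

# First-order a priori bound for recursive summation of n terms:
# |computed - exact| <= (n - 1) * u * sum(|x_i|).
bound = 9 * u * (10 * abs(Fraction(0.1)))
assert abs(Fraction(total) - exact) <= bound

# math.fsum tracks the rounding errors and returns the correctly rounded sum.
print(math.fsum([0.1] * 10))    # 1.0
```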

Usage Guidelines

  1. Error Modeling: Model each floating-point operation with the standard rounding model fl(a ∘ b) = (a ∘ b)(1 + δ), where |δ| ≤ u (the unit roundoff)
  2. Interval Arithmetic: Use outward-rounded interval bounds when a guaranteed enclosure of the true result is required
  3. High Precision: Validate double-precision results against arbitrary-precision reference computations
  4. Error Bounds: Derive forward bounds (distance from the computed to the exact result) and backward bounds (size of the input perturbation that explains the computed result)
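
The interval-arithmetic guideline can be sketched with a toy outward-rounded interval type built on `math.nextafter` (Python 3.9+). Real work would use a library such as Arb; the `Interval` class here is purely illustrative.

```python
import math
from fractions import Fraction

class Interval:
    """Toy closed interval [lo, hi] of doubles with outward rounding."""

    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        # Round-to-nearest errs by at most half an ulp, so widening each
        # endpoint by one ulp guarantees an enclosure of the true sum.
        return Interval(math.nextafter(self.lo + other.lo, -math.inf),
                        math.nextafter(self.hi + other.hi, math.inf))

    def __mul__(self, other):
        prods = [self.lo * other.lo, self.lo * other.hi,
                 self.hi * other.lo, self.hi * other.hi]
        return Interval(math.nextafter(min(prods), -math.inf),
                        math.nextafter(max(prods), math.inf))

    def __repr__(self):
        return f"[{self.lo!r}, {self.hi!r}]"

# Enclose the real number 1/10 (fl(0.1) lies just above it).
x = Interval(math.nextafter(0.1, 0.0), 0.1)
s = x + x + x
print(s)

# Guaranteed enclosure: the real number 3/10 lies inside the result.
assert Fraction(s.lo) <= Fraction(3, 10) <= Fraction(s.hi)
```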

Tools/Libraries

  • MPFR: multiple-precision binary floating-point arithmetic with correct rounding
  • Arb: arbitrary-precision ball (midpoint-radius interval) arithmetic
  • Herbie: automatically rewrites floating-point expressions to improve accuracy
  • FPBench: a common format and benchmark suite for floating-point analysis tools
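
As a stand-in for MPFR (a C library), the stdlib `decimal` module can serve as the arbitrary-precision validation oracle the guidelines describe. The sketch below builds a 50-digit reference value and uses it to quantify catastrophic cancellation in a naive double-precision formula; the expression and digit count are illustrative choices, not prescribed by the skill.

```python
import math
from decimal import Decimal, getcontext

getcontext().prec = 50                   # 50 significant decimal digits

x = 1e-12
naive = (math.exp(x) - 1.0) / x          # exp(x) - 1 cancels catastrophically
stable = math.expm1(x) / x               # fused routine avoids the cancellation

# High-precision reference: (e^x - 1)/x evaluated with 50-digit arithmetic.
dx = Decimal(x)                          # the float converts to Decimal exactly
reference = (dx.exp() - 1) / dx

err_naive = abs(Decimal(naive) - reference)
err_stable = abs(Decimal(stable) - reference)
print(err_naive, err_stable)

# The high-precision oracle exposes roughly eight lost digits in the
# naive formula, while the stable form stays accurate to the last place.
assert err_stable < err_naive
```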