Oraclaw oraclaw-cmaes

CMA-ES continuous optimization for AI agents. State-of-the-art derivative-free optimizer. 10-100x more sample-efficient than genetic algorithms on continuous problems. Hyperparameter tuning, portfolio optimization, parameter calibration.

install
source · Clone the upstream repo
git clone https://github.com/Whatsonyourmind/oraclaw
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/Whatsonyourmind/oraclaw "$T" && mkdir -p ~/.claude/skills && cp -r "$T/mission-control/packages/clawhub-skills/oraclaw-cmaes" ~/.claude/skills/whatsonyourmind-oraclaw-oraclaw-cmaes && rm -rf "$T"
manifest: mission-control/packages/clawhub-skills/oraclaw-cmaes/SKILL.md
source content

OraClaw CMA-ES — SOTA Continuous Optimizer for Agents

You are an optimization agent that uses CMA-ES (Covariance Matrix Adaptation Evolution Strategy) — the gold standard for derivative-free continuous optimization. Used by Google for hyperparameter tuning.

When to Use This Skill

Use when the user or agent needs to:

  • Optimize continuous parameters (learning rates, weights, thresholds)
  • Tune hyperparameters for ML models
  • Calibrate model parameters to match observed data
  • Find optimal continuous allocations (portfolio weights, pricing)
  • Any black-box optimization where you can evaluate f(x) but don't have gradients
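The last bullet is the general case: all the tool needs is a function it can evaluate at a point. A minimal sketch of what such a black-box objective looks like (the function name and the stand-in formula are illustrative, not part of the tool):

```python
import math

# Hypothetical black-box objective for hyperparameter tuning.
# x = [learning_rate, dropout], both normalized to [0, 1].
# In practice this would train a model and return validation loss;
# the closed-form stand-in below just mimics that shape.
def validation_loss(x):
    lr, dropout = x
    # Loss is lowest near lr ~ 1e-3 and dropout ~ 0.2.
    return (math.log10(lr + 1e-6) + 3) ** 2 + (dropout - 0.2) ** 2

print(validation_loss([0.001, 0.2]))
```

You can evaluate this anywhere, but there is no gradient to follow, which is exactly the setting CMA-ES targets.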

Why CMA-ES vs. Genetic Algorithm?

  • CMA-ES: 10-100x more sample-efficient on smooth continuous problems. Learns the correlation structure of the search space. SOTA for continuous optimization.
  • GA (oraclaw-evolve): better for discrete/combinatorial problems and multi-objective Pareto frontiers.
  • Rule of thumb: use CMA-ES for continuous problems, GA for discrete ones.

Tool: optimize_cmaes

{
  "dimension": 3,
  "initialMean": [0.5, 0.5, 0.5],
  "initialSigma": 0.3,
  "maxIterations": 200,
  "objectiveWeights": [2.0, 1.5, 1.0]
}

Returns: bestSolution, bestFitness, iterations, evaluations, converged, executionTimeMs.
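A hypothetical response for the request above. The field names come from the list just given; the values are illustrative only:

```json
{
  "bestSolution": [0.12, 0.08, 0.31],
  "bestFitness": 0.0042,
  "iterations": 142,
  "evaluations": 1704,
  "converged": true,
  "executionTimeMs": 380
}
```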

Rules

  1. dimension = number of continuous parameters to optimize.
  2. initialMean = starting point (center of the search). If unknown, use 0.5 for normalized parameters.
  3. initialSigma = initial step size (0.1-0.5 is typical). Too small converges slowly; too large is unstable.
  4. CMA-ES MINIMIZES the objective. To maximize, negate the weights.
  5. Convergence typically takes O(dimension^2) iterations; dimension 10 needs roughly 100-300.
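The sample-rank-recombine loop behind these rules can be sketched in a few lines. This is a deliberately simplified evolution strategy (isotropic Gaussian sampling with a crude fixed step-size decay, no covariance-matrix adaptation), not the full CMA-ES the tool implements; all names here are illustrative:

```python
import numpy as np

def simple_es_minimize(f, mean, sigma, max_iters=200, popsize=12, seed=0):
    """Simplified evolution-strategy loop. Minimizes f starting from `mean`
    with initial step size `sigma` (rules 2 and 3 above)."""
    rng = np.random.default_rng(seed)
    mean = np.asarray(mean, dtype=float)
    mu = popsize // 2                      # number of parents to recombine
    weights = np.log(mu + 0.5) - np.log(np.arange(1, mu + 1))
    weights /= weights.sum()               # rank-based recombination weights
    best_x, best_fit = mean.copy(), f(mean)
    for _ in range(max_iters):
        # Sample offspring around the current mean (isotropic Gaussian).
        pop = mean + sigma * rng.standard_normal((popsize, mean.size))
        fits = np.array([f(x) for x in pop])
        order = np.argsort(fits)           # lower is better: we MINIMIZE (rule 4)
        if fits[order[0]] < best_fit:
            best_fit, best_x = fits[order[0]], pop[order[0]].copy()
        # Move the mean toward the best mu samples.
        mean = weights @ pop[order[:mu]]
        sigma *= 0.97                      # crude decay; real CMA-ES adapts this
    return best_x, best_fit

# Sphere function in dimension 3: minimum 0 at the origin.
best_x, best_fit = simple_es_minimize(lambda x: np.sum(x**2), [0.5, 0.5, 0.5], 0.3)
```

To maximize instead of minimize (rule 4), wrap the objective as `lambda x: -f(x)`. The real CMA-ES additionally adapts a full covariance matrix from the ranked samples, which is what makes it so much more sample-efficient on correlated search spaces.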

Pricing

$0.10 per optimization. USDC on Base via x402. Free tier: 1,000 calls/month.