Oraclaw oraclaw-cmaes
CMA-ES continuous optimization for AI agents. State-of-the-art derivative-free optimizer. 10-100x more sample-efficient than genetic algorithms on continuous problems. Hyperparameter tuning, portfolio optimization, parameter calibration.
install
source · Clone the upstream repo
git clone https://github.com/Whatsonyourmind/oraclaw
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/Whatsonyourmind/oraclaw "$T" && mkdir -p ~/.claude/skills && cp -r "$T/mission-control/packages/clawhub-skills/oraclaw-cmaes" ~/.claude/skills/whatsonyourmind-oraclaw-oraclaw-cmaes && rm -rf "$T"
manifest:
mission-control/packages/clawhub-skills/oraclaw-cmaes/SKILL.md
OraClaw CMA-ES — SOTA Continuous Optimizer for Agents
You are an optimization agent that uses CMA-ES (Covariance Matrix Adaptation Evolution Strategy) — the gold standard for derivative-free continuous optimization. Used by Google for hyperparameter tuning.
When to Use This Skill
Use when the user or agent needs to:
- Optimize continuous parameters (learning rates, weights, thresholds)
- Tune hyperparameters for ML models
- Calibrate model parameters to match observed data
- Find optimal continuous allocations (portfolio weights, pricing)
- Any black-box optimization where you can evaluate f(x) but don't have gradients
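"Black-box" here means the optimizer only ever sees function values, never gradients. A minimal sketch of what such an objective looks like; `evaluate_config` and its three parameters are hypothetical stand-ins for an expensive training run or simulation:

```python
# Hypothetical black-box objective: the optimizer can evaluate f(x)
# but has no access to gradients or to the internals below.
def evaluate_config(x):
    lr, momentum, dropout = x
    # Stand-in for an expensive training run or simulation; lower is better.
    return (lr - 0.01) ** 2 + (momentum - 0.9) ** 2 + (dropout - 0.2) ** 2

score = evaluate_config([0.5, 0.5, 0.5])
```

CMA-ES only needs to be able to call something shaped like this repeatedly and compare the returned scores.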
Why CMA-ES vs. Genetic Algorithm?
- CMA-ES: 10-100x more sample-efficient on smooth continuous problems. Learns the correlation structure of the search space. SOTA for continuous optimization.
- GA (oraclaw-evolve): Better for discrete/combinatorial problems and multi-objective Pareto frontiers.
- Use CMA-ES for continuous problems. Use GA for discrete ones.
Tool: optimize_cmaes
optimize_cmaes
{ "dimension": 3, "initialMean": [0.5, 0.5, 0.5], "initialSigma": 0.3, "maxIterations": 200, "objectiveWeights": [2.0, 1.5, 1.0] }
Returns: bestSolution, bestFitness, iterations, evaluations, converged, executionTimeMs.
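A sketch of assembling that request programmatically and sanity-checking it before dispatch. How the call is actually sent (MCP, HTTP, etc.) is not specified by this listing, so only payload construction is shown:

```python
import json

# Build an optimize_cmaes request matching the example above.
request = {
    "dimension": 3,
    "initialMean": [0.5, 0.5, 0.5],   # center of search; 0.5 works for normalized params
    "initialSigma": 0.3,              # step size in the typical 0.1-0.5 range
    "maxIterations": 200,
    "objectiveWeights": [2.0, 1.5, 1.0],
}

# Cheap consistency checks before paying for a call.
assert len(request["initialMean"]) == request["dimension"]
assert 0.1 <= request["initialSigma"] <= 0.5

payload = json.dumps(request)
```

On return, check `converged` before trusting `bestSolution`; `evaluations` tells you how much budget the run actually consumed.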
Rules
- dimension = number of continuous parameters to optimize
- initialMean = starting point (center of search). If unknown, use 0.5 for normalized params.
- initialSigma = initial step size (0.1-0.5 typical). Too small = slow convergence; too large = unstable.
- CMA-ES MINIMIZES the objective. To maximize, negate the weights.
- Typically converges in O(dimension^2) iterations. Dimension 10 needs ~100-300 iterations.
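To make the minimize-by-default convention concrete, here is a toy evolution strategy: a (1+1)-ES with 1/5th-success-rule step-size adaptation. This is a much-simplified cousin of CMA-ES (no covariance matrix adaptation), written only to illustrate the loop shape and the sign convention; it is not the tool's implementation:

```python
import math
import random

# Simplified (1+1)-ES: mutate, keep the candidate if it is no worse,
# widen sigma on success and shrink it on failure (1/5th rule).
def es_minimize(f, x0, sigma=0.3, iterations=500, seed=0):
    rng = random.Random(seed)
    x, fx = list(x0), f(x0)
    for _ in range(iterations):
        cand = [xi + sigma * rng.gauss(0, 1) for xi in x]
        fc = f(cand)
        if fc <= fx:                     # success: accept, widen the step
            x, fx = cand, fc
            sigma *= math.exp(1 / 5)
        else:                            # failure: shrink the step
            sigma *= math.exp(-1 / 20)
    return x, fx

sphere = lambda v: sum(t * t for t in v)
best, best_f = es_minimize(sphere, [0.5, 0.5, 0.5])
```

To maximize some score g(x) with this loop (or with optimize_cmaes), minimize -g(x) instead: `es_minimize(lambda v: -g(v), x0)`.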
Pricing
$0.10 per optimization. USDC on Base via x402. Free tier: 1,000 calls/month.