Skillsbench mpc-horizon-tuning
Selecting MPC prediction horizon and cost matrices for web handling.
install
source · Clone the upstream repo
git clone https://github.com/benchflow-ai/skillsbench
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/benchflow-ai/skillsbench "$T" && mkdir -p ~/.claude/skills && cp -r "$T/tasks/r2r-mpc-control/environment/skills/mpc-horizon-tuning" ~/.claude/skills/benchflow-ai-skillsbench-mpc-horizon-tuning && rm -rf "$T"
manifest:
tasks/r2r-mpc-control/environment/skills/mpc-horizon-tuning/SKILL.md
MPC Tuning for Tension Control
Prediction Horizon Selection
Horizon N affects performance and computation:
- Too short (N < 5): Poor disturbance rejection
- Too long (N > 20): Excessive computation
- Rule of thumb: N ≈ 2-3 × (settling time / dt), so the prediction window covers a few settling times
For R2R systems with dt = 0.01 s, N = 5-15 is typical.
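As a quick sanity check on this rule of thumb, here is a minimal sketch; the function name `horizon_from_settling_time` and the 0.05 s settling time are illustrative assumptions, not part of the skill:

```python
def horizon_from_settling_time(settling_time, dt, factor=2.5, n_min=5, n_max=20):
    """Pick N ~= factor * settling_time / dt, clipped to the recommended 5-20 range."""
    n = int(round(factor * settling_time / dt))
    return max(n_min, min(n, n_max))

# Example: a tension loop that settles in roughly 0.05 s, sampled at dt = 0.01 s
N = horizon_from_settling_time(settling_time=0.05, dt=0.01)  # -> 12, inside the 5-15 band
```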
Cost Matrix Design
State cost Q: Emphasize tension tracking
Q_tension = 100 / T_ref²    # High weight on tensions
Q_velocity = 0.1 / v_ref²   # Lower weight on velocities
Q = diag([Q_tension × 6, Q_velocity × 6])
Control cost R: Penalize actuator effort
R = 0.01-0.1 × eye(n_u) # Smaller = more aggressive
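A minimal numpy sketch of this weighting scheme; the reference values T_ref and v_ref, the 6 + 6 state layout, and n_u = 6 are placeholder assumptions for illustration:

```python
import numpy as np

# Placeholder operating point and dimensions (illustrative only)
T_ref = 50.0   # N, nominal web tension
v_ref = 2.0    # m/s, nominal line speed
n_u = 6        # number of control inputs (assumed)

# State cost: heavy weight on the 6 tension states, light weight on the 6 velocity states
q_tension = 100.0 / T_ref**2
q_velocity = 0.1 / v_ref**2
Q = np.diag([q_tension] * 6 + [q_velocity] * 6)

# Control cost: smaller R gives more aggressive actuation
R = 0.05 * np.eye(n_u)
```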
Trade-offs
| Change | Benefit | Cost |
|---|---|---|
| Higher Q | Faster tracking | More control effort |
| Higher Q | Lower steady-state error | More aggressive transients |
| Higher R | Smoother control | Slower response |
| Higher R | Less actuator wear | Higher tracking error |
Terminal Cost
Use LQR solution for terminal cost to guarantee stability:
P = solve_continuous_are(A, B, Q, R)
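A self-contained sketch of this step with SciPy; the A, B, Q, and R values are placeholders, and for a discrete-time MPC model `scipy.linalg.solve_discrete_are` would be the matching call:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Placeholder continuous-time plant (2 states, 1 input), purely to show the call
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.diag([100.0, 0.1])   # state weights, scaled as in the sections above
R = 0.05 * np.eye(1)        # control weight

# Terminal cost P = LQR cost-to-go from the algebraic Riccati equation
P = solve_continuous_are(A, B, Q, R)
```

With P as the terminal weight, the finite-horizon cost upper-bounds the infinite-horizon LQR cost beyond the horizon, which is what underpins the stability guarantee (under the usual assumption that constraints are inactive near the setpoint).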