# materials-simulation-skills · simulation-orchestrator

## Install

From source, clone the upstream repo:

```bash
git clone https://github.com/HeshamFS/materials-simulation-skills
```

Claude Code, install into `~/.claude/skills/`:

```bash
T=$(mktemp -d) && \
  git clone --depth=1 https://github.com/HeshamFS/materials-simulation-skills "$T" && \
  mkdir -p ~/.claude/skills && \
  cp -r "$T/skills/simulation-workflow/simulation-orchestrator" \
        ~/.claude/skills/heshamfs-materials-simulation-skills-simulation-orchestrator && \
  rm -rf "$T"
```

Manifest: `skills/simulation-workflow/simulation-orchestrator/SKILL.md`
# Simulation Orchestrator

## Goal

Provide tools to manage multi-simulation campaigns: generate parameter sweeps, track job execution status, and aggregate results from completed runs.

## Requirements
- Python 3.10+
- No external dependencies (uses Python standard library only)
- Works on Linux, macOS, and Windows
## Inputs to Gather

Before running orchestration scripts, collect from the user:

| Input | Description | Example |
|---|---|---|
| Base config | Template simulation configuration | `base_config.json` |
| Parameter ranges | Parameters to sweep with bounds | `dt:1e-4:1e-2:5` |
| Sweep method | How to sample parameter space | `grid`, `linspace`, `lhs` |
| Output directory | Where to store campaign files | `./campaign_001` |
| Simulation command | Command to run each simulation | `python sim.py --config {config}` |
## Decision Guidance

### Choosing a Sweep Method

```
Need every combination (full factorial)?
├── YES → Use grid (warning: exponential growth with parameters)
└── NO  → Is space-filling coverage needed?
          ├── YES → Use lhs (Latin Hypercube Sampling)
          └── NO  → Use linspace for uniform sampling per parameter
```
| Method | Best For | Sample Count |
|---|---|---|
| `grid` | Low dimensions (1-3), need exact corners | n^d (exponential) |
| `linspace` | 1D sweeps, uniform spacing | n per parameter |
| `lhs` | High dimensions, space-filling | user-specified budget |
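For intuition, the three methods can be sketched with the standard library alone. This is an illustration of the sampling techniques, not the skill's actual API; function names and the `{name: (lo, hi, count)}` range format are assumptions.

```python
import itertools
import random

def linspace(lo, hi, n):
    """n evenly spaced values from lo to hi inclusive."""
    if n == 1:
        return [lo]
    step = (hi - lo) / (n - 1)
    return [lo + i * step for i in range(n)]

def grid_sweep(ranges):
    """Full factorial: every combination of per-parameter values.
    ranges: {name: (lo, hi, count)} -> list of {name: value} dicts."""
    names = list(ranges)
    axes = [linspace(lo, hi, n) for lo, hi, n in ranges.values()]
    return [dict(zip(names, combo)) for combo in itertools.product(*axes)]

def lhs_sweep(ranges, samples, seed=0):
    """Latin Hypercube: one sample per stratum in each dimension,
    strata shuffled independently per parameter for space-filling coverage."""
    rng = random.Random(seed)
    names = list(ranges)
    columns = []
    for lo, hi, *_ in ranges.values():
        strata = list(range(samples))
        rng.shuffle(strata)
        width = (hi - lo) / samples
        columns.append([lo + (s + rng.random()) * width for s in strata])
    return [dict(zip(names, vals)) for vals in zip(*columns)]

configs = grid_sweep({"dt": (1e-4, 1e-2, 5), "kappa": (0.1, 1.0, 3)})
print(len(configs))  # 5 * 3 = 15 combinations
```

Note how `grid_sweep` costs n^d runs while `lhs_sweep` takes a fixed user-chosen budget, which is why the table above steers high-dimensional sweeps toward LHS.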
### Campaign Size Guidelines
| Parameters | Grid Points Each | Total Runs | Recommendation |
|---|---|---|---|
| 1 | 10 | 10 | Grid is fine |
| 2 | 10 | 100 | Grid acceptable |
| 3 | 10 | 1,000 | Consider LHS |
| 4+ | 10 | 10,000+ | Use LHS or DOE |
### Script Outputs (JSON Fields)

All four scripts emit structured JSON when run with `--json`:

| Script | Output |
|---|---|
| `sweep_generator.py` | JSON summary of the generated sweep |
| `campaign_manager.py` | JSON campaign status/manifest summary |
| `job_tracker.py` | JSON job status summary |
| `result_aggregator.py` | JSON aggregate statistics for the chosen metric |
## Workflow

### Step 1: Generate Parameter Sweep

Create configurations for all parameter combinations:

```bash
python3 scripts/sweep_generator.py \
  --base-config base_config.json \
  --params "dt:1e-4:1e-2:5,kappa:0.1:1.0:3" \
  --method linspace \
  --output-dir ./campaign_001 \
  --json
```
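The `--params` grammar (`name:min:max:count` entries joined by commas, with the count optional for LHS sweeps) can be sketched as a standalone parser. The real parser lives in `scripts/sweep_generator.py`; names and error messages here are illustrative.

```python
import re

# Grammar from the docs: "name:min:max:count", comma-separated;
# the trailing count is optional ("name:min:max") for LHS budgets.
PARAM_RE = re.compile(
    r"^(?P<name>[A-Za-z_][A-Za-z0-9_]*)"
    r":(?P<lo>[^:]+):(?P<hi>[^:]+)"
    r"(?::(?P<n>\d+))?$"
)

def parse_params(spec):
    """Parse e.g. "dt:1e-4:1e-2:5,kappa:0.1:1.0:3" into
    {name: (lo, hi, count_or_None)}; raises ValueError on bad entries."""
    out = {}
    for entry in spec.split(","):
        m = PARAM_RE.match(entry.strip())
        if not m:
            raise ValueError(f"malformed param entry: {entry!r}")
        lo, hi = float(m["lo"]), float(m["hi"])
        if not lo < hi:
            raise ValueError(f"need min < max in {entry!r}")
        n = int(m["n"]) if m["n"] else None
        if n is not None and n < 1:
            raise ValueError(f"count must be positive in {entry!r}")
        out[m["name"]] = (lo, hi, n)
    return out

print(parse_params("dt:1e-4:1e-2:5,kappa:0.1:1.0:3"))
```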
### Step 2: Initialize Campaign

Create campaign tracking structure:

```bash
python3 scripts/campaign_manager.py \
  --action init \
  --config-dir ./campaign_001 \
  --command "python sim.py --config {config}" \
  --json
```
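The exact manifest schema is internal to `campaign_manager.py`; a shape like the following is plausible and shown only for orientation (every field name here is an assumption):

```json
{
  "campaign": "campaign_001",
  "command_template": "python sim.py --config {config}",
  "runs": [
    {"id": 0, "config": "config_0000.json", "status": "pending"},
    {"id": 1, "config": "config_0001.json", "status": "pending"}
  ]
}
```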
### Step 3: Track Job Status

Monitor running jobs:

```bash
python3 scripts/job_tracker.py \
  --campaign-dir ./campaign_001 \
  --update \
  --json
```
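File-based tracking can be sketched as follows; the real logic lives in `job_tracker.py`, and the `config_*`/`result_*` naming convention used here is an assumption.

```python
import json
from pathlib import Path

def update_status(campaign_dir):
    """Mark a run "done" when a result file exists alongside its config,
    otherwise "pending"; persist the summary to status.json."""
    campaign = Path(campaign_dir)
    status = {}
    for cfg in sorted(campaign.glob("config_*.json")):
        result = campaign / cfg.name.replace("config_", "result_")
        status[cfg.stem] = "done" if result.exists() else "pending"
    (campaign / "status.json").write_text(json.dumps(status, indent=2))
    return status
```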
### Step 4: Aggregate Results

Combine results from completed runs:

```bash
python3 scripts/result_aggregator.py \
  --campaign-dir ./campaign_001 \
  --metric objective_value \
  --json
```
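Aggregation amounts to collecting one metric across result files and summarizing it. This sketch mirrors the strict numeric checks documented in the Security section (reject `bool`, `NaN`, `Inf`); the file layout and function name are assumptions, not `result_aggregator.py`'s actual interface.

```python
import json
import math
from pathlib import Path

def aggregate_metric(campaign_dir, metric):
    """Collect `metric` from result*.json files under campaign_dir
    and return summary statistics over the finite numeric values."""
    values = []
    for path in sorted(Path(campaign_dir).glob("**/result*.json")):
        data = json.loads(path.read_text())
        v = data.get(metric)
        if isinstance(v, bool) or not isinstance(v, (int, float)):
            continue  # missing or non-numeric metric (bool rejected explicitly)
        if not math.isfinite(v):
            continue  # reject NaN / Inf
        values.append(float(v))
    if not values:
        raise ValueError(f"no results with metric {metric!r}")
    return {
        "count": len(values),
        "min": min(values),
        "max": max(values),
        "mean": sum(values) / len(values),
    }
```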
## CLI Examples

```bash
# Generate 5x3=15 runs varying dt (5 values) and kappa (3 values)
python3 scripts/sweep_generator.py \
  --base-config sim.json \
  --params "dt:1e-4:1e-2:5,kappa:0.1:1.0:3" \
  --method linspace \
  --output-dir ./sweep_001 \
  --json

# Generate LHS samples for 4 parameters with budget of 20 runs
python3 scripts/sweep_generator.py \
  --base-config sim.json \
  --params "dt:1e-4:1e-2,kappa:0.1:1.0,M:1e-6:1e-4,W:0.5:2.0" \
  --method lhs \
  --samples 20 \
  --output-dir ./lhs_001 \
  --json

# Check campaign status
python3 scripts/campaign_manager.py \
  --action status \
  --config-dir ./sweep_001 \
  --json

# Get summary statistics from completed runs
python3 scripts/result_aggregator.py \
  --campaign-dir ./sweep_001 \
  --metric final_energy \
  --json
```
## Conversational Workflow Example

**User:** I want to run a parameter sweep on dt and kappa for my phase-field simulation. I want to try 5 values of dt between 1e-4 and 1e-2, and 4 values of kappa between 0.1 and 1.0.

**Agent workflow:**

1. Calculate total runs: 5 x 4 = 20 runs
2. Generate sweep configurations:

   ```bash
   python3 scripts/sweep_generator.py \
     --base-config simulation.json \
     --params "dt:1e-4:1e-2:5,kappa:0.1:1.0:4" \
     --method linspace \
     --output-dir ./dt_kappa_sweep \
     --json
   ```

3. Initialize campaign:

   ```bash
   python3 scripts/campaign_manager.py \
     --action init \
     --config-dir ./dt_kappa_sweep \
     --command "python phase_field.py --config {config}" \
     --json
   ```

4. After user runs simulations, aggregate results:

   ```bash
   python3 scripts/result_aggregator.py \
     --campaign-dir ./dt_kappa_sweep \
     --metric interface_width \
     --json
   ```
## Error Handling

| Error | Cause | Resolution |
|---|---|---|
| Base config not found | Invalid file path | Verify base config file exists |
| Malformed `--params` string | Invalid parameter spec | Use format `name:min:max:count` or `name:min:max` |
| Output directory exists | Would overwrite | Choose a new directory or clear the existing one |
| Empty aggregation | No results to aggregate | Wait for jobs to complete or check for failures |
| Metric not found | Result files missing field | Verify metric name in result JSON |
## Integration with Other Skills

The simulation-orchestrator works with other simulation-workflow skills:

```
parameter-optimization            simulation-orchestrator
        │                                 │
        │  DOE samples ──────────────────>│  Generate configs
        │                                 │
        │                                 │  Run simulations
        │                                 │
        │<──────────────────────────────  │  Aggregate results
        │                                 │
        │  Sensitivity analysis           │
        │  Optimizer selection            │
```
### Typical Combined Workflow

1. Use `parameter-optimization/doe_generator.py` to get sample points
2. Use `simulation-orchestrator/sweep_generator.py` to create configs
3. Run simulations (user's responsibility)
4. Use `simulation-orchestrator/result_aggregator.py` to collect results
5. Use `parameter-optimization/sensitivity_summary.py` to analyze
## Security

### Input Validation

- Metric names are validated against `[a-zA-Z_][a-zA-Z0-9_.]*` to prevent traversal or injection via crafted keys
- `campaign_manager.py` validates command templates to reject shell chaining operators (`;`, `|`, `&`, backticks, `$`)
- `--params` format strings are parsed and validated (`name:min:max:count` with finite numeric bounds and positive integer counts)
- `--method` is validated against a fixed allowlist (`grid`, `linspace`, `lhs`)
- `--samples` is validated as a positive integer with an upper bound
- `--action` is validated against a fixed allowlist (`init`, `status`)

### File Access

- `sweep_generator.py` reads a single base config file (JSON) specified by `--base-config` and writes generated configs to `--output-dir`
- `result_aggregator.py` enforces a 10 MB file-size limit per result file, maximum JSON nesting depth, and strict numeric type checking (rejects `bool`, `NaN`, `Inf`)
- All string values from result files are sanitized (truncated, control characters stripped) before surfacing them
- Config paths interpolated into shell commands are validated against a safe-character allowlist and escaped with `shlex.quote()`
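The template validation and quoting described above can be sketched as follows; the actual checks live in `campaign_manager.py`, and the regexes and function name here are illustrative.

```python
import re
import shlex

FORBIDDEN = re.compile(r"[;&|`$]")             # shell chaining / substitution
SAFE_PATH = re.compile(r"^[A-Za-z0-9_./-]+$")  # allowlist for config paths

def render_command(template, config_path):
    """Validate a command template like "python sim.py --config {config}",
    then interpolate a config path safely. Raises ValueError on anything
    that could smuggle extra shell syntax."""
    if FORBIDDEN.search(template):
        raise ValueError("template contains shell chaining operators")
    if "{config}" not in template:
        raise ValueError("template must contain a {config} placeholder")
    if not SAFE_PATH.match(config_path):
        raise ValueError(f"unsafe config path: {config_path!r}")
    return template.replace("{config}", shlex.quote(config_path))

print(render_command("python sim.py --config {config}",
                     "campaign_001/config_0000.json"))
# python sim.py --config campaign_001/config_0000.json
```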
### Tool Restrictions

- Read: Used to inspect script source, references, base configs, and campaign status files
- Write: Used to save generated sweep configs, campaign manifests, and aggregated results; writes are scoped to the user's working directory
- Grep/Glob: Used to locate campaign files, result files, and search references
- The skill's `allowed-tools` excludes `Bash` to prevent the agent from executing arbitrary commands when processing untrusted simulation outputs
### Safety Measures

- No `eval()`, `exec()`, or dynamic code generation
- All subprocess calls use explicit argument lists (no `shell=True`)
- Reduced tool surface (no Bash) limits the agent to read/write operations only
- Command templates are validated but never executed by the skill itself; execution is the user's responsibility
## Limitations

- **Not a job scheduler**: Does not submit jobs to SLURM/PBS; generates configs and tracks status
- **No parallel execution**: User must run simulations externally (can use GNU parallel, SLURM, etc.)
- **File-based tracking**: Status tracked via files; no database or real-time monitoring
- **Local filesystem**: Assumes all files are accessible from the local machine
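Since execution is the user's responsibility, a minimal local driver might look like the sketch below: it runs one subprocess per generated config, a few at a time, using explicit argument lists (no `shell=True`) in line with the skill's safety posture. The `config_*.json` naming and the `{config}` placeholder are assumptions.

```python
import shlex
import subprocess
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

def run_campaign(campaign_dir, command_template, workers=4):
    """Run each generated config through command_template, `workers` at a
    time. Returns {config filename: return code}."""
    configs = sorted(Path(campaign_dir).glob("config_*.json"))

    def run_one(cfg):
        # Build an explicit argv list; paths with shell metacharacters
        # should already have been rejected upstream.
        argv = shlex.split(command_template.replace("{config}", str(cfg)))
        return cfg.name, subprocess.run(argv).returncode

    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(pool.map(run_one, configs))
```

For HPC workloads the same loop would instead emit one scheduler submission per config; this sketch only covers the local case named in the limitations.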
## References

- `references/campaign_patterns.md` – Common campaign structures
- `references/sweep_strategies.md` – Parameter sweep design guidance
- `references/aggregation_methods.md` – Result aggregation techniques
## Version History
- v1.0.0 (2024-12-24): Initial release with sweep, campaign, tracking, and aggregation