# code-refactor-for-reproducibility
Use when refactoring research code for publication, adding documentation to existing analysis scripts, creating reproducible computational workflows, or preparing code for sharing with collaborators. The skill transforms research code into publication-ready, reproducible workflows: it adds documentation, implements error handling, creates environment specifications, and ensures computational reproducibility for scientific publications.
## Installation

Clone the repository, or copy the skill into a skills directory with one of the following commands:

```text
git clone https://github.com/openclaw/skills
```

```text
# Install into ~/.claude/skills
T=$(mktemp -d) && git clone --depth=1 https://github.com/openclaw/skills "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/aipoch-ai/code-refactor-for-reproducibility-1" ~/.claude/skills/openclaw-skills-code-refactor-for-reproducibility && rm -rf "$T"

# Install into ~/.openclaw/skills
T=$(mktemp -d) && git clone --depth=1 https://github.com/openclaw/skills "$T" && mkdir -p ~/.openclaw/skills && cp -r "$T/skills/aipoch-ai/code-refactor-for-reproducibility-1" ~/.openclaw/skills/openclaw-skills-code-refactor-for-reproducibility && rm -rf "$T"
```
The packaged skill lives at `skills/aipoch-ai/code-refactor-for-reproducibility-1/SKILL.md`.

## Research Code Reproducibility Refactoring Tool
## When to Use
- Use this skill when refactoring research code for publication, adding documentation to existing analysis scripts, creating reproducible computational workflows, or preparing code for sharing with collaborators.
- Use this skill for data analysis tasks that require explicit assumptions, bounded scope, and a reproducible output format.
- Use this skill when you need a documented fallback path for missing inputs, execution errors, or partial evidence.
## Key Features
- Scope-focused workflow aligned to the documented purpose: refactoring research code into publication-ready, reproducible workflows with documentation, error handling, and environment specifications.
- Packaged executable path: `scripts/main.py`, a structured execution path designed to keep outputs consistent and reviewable.
## Dependencies

| Dependency | Version | Declared in |
|---|---|---|
| Python | 3.10+ | Repository baseline for current packaged skills |
| numpy | unspecified | `requirements.txt` |
| pandas | unspecified | `requirements.txt` |
| pytest | unspecified | `requirements.txt` |
| scipy | unspecified | `requirements.txt` |
| src | unspecified | `requirements.txt` |
## Example Usage

```text
cd "20260318/scientific-skills/Data Analytics/code-refactor-for-reproducibility"
python -m py_compile scripts/main.py
python scripts/main.py --help
```
Example run plan:
- Confirm the user input, output path, and any required config values.
- Edit the in-file `CONFIG` block or documented parameters if the script uses fixed settings (see the sketch after this list).
- Run `python scripts/main.py` with the validated inputs.
- Review the generated output and return the final artifact with any assumptions called out.
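If the script does use fixed settings, the in-file block usually resembles the following minimal sketch; the `CONFIG` name and keys here are hypothetical and should be matched to whatever the actual script defines:

```python
from pathlib import Path

# Hypothetical in-file settings block: adjust keys to the actual script.
CONFIG = {
    "input_path": Path("data/raw.csv"),  # source dataset
    "output_path": Path("results/"),     # directory for generated artifacts
    "random_seed": 42,                   # pinned for reproducible runs
}
```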
## Implementation Details

See the Workflow section below for related details.
- Execution model: validate the request, choose the packaged workflow, and produce a bounded deliverable.
- Input controls: confirm the source files, scope limits, output format, and acceptance criteria before running any script.
- Primary implementation surface: `scripts/main.py`
- Parameters to clarify first: input path, output path, scope filters, thresholds, and any domain-specific constraints.
- Output discipline: keep results reproducible, identify assumptions explicitly, and avoid undocumented side effects.
## Quick Check
Use this command to verify that the packaged script entry point can be parsed before deeper execution.
```text
python -m py_compile scripts/main.py
```
## Audit-Ready Commands
Use these concrete commands for validation. They are intentionally self-contained and avoid placeholder paths.
```text
python -m py_compile scripts/main.py
python scripts/main.py --help
```
## Workflow
- Confirm the user objective, required inputs, and non-negotiable constraints before doing detailed work.
- Validate that the request matches the documented scope and stop early if the task would require unsupported assumptions.
- Use the packaged script path or the documented reasoning path with only the inputs that are actually available.
- Return a structured result that separates assumptions, deliverables, risks, and unresolved items.
- If execution fails or inputs are incomplete, switch to the fallback path and state exactly what blocked full completion.
## Workflow Overview
Follow this sequence when refactoring a research codebase:
- Analyze — identify reproducibility issues in existing code
- Refactor — apply documentation, parameterization, and error handling
- Specify environment — pin dependencies and create environment files
- Validate — run tests and verify behaviour is unchanged
## Step 1: Analyze Code for Reproducibility Issues
Read each source file and check for the following problems. Document findings before making any changes.
Checklist: missing docstrings · hardcoded absolute paths · missing random seeds · bare `except:` clauses · unpinned imports · unexplained magic numbers
Example — detecting issues manually:
```python
import ast
import pathlib

def find_hardcoded_paths(source: str) -> list[str]:
    """Return string literals that look like absolute paths."""
    tree = ast.parse(source)
    return [
        node.value
        for node in ast.walk(tree)
        if isinstance(node, ast.Constant)
        and isinstance(node.value, str)
        and node.value.startswith("/")
    ]

source = pathlib.Path("analysis.py").read_text()
print(find_hardcoded_paths(source))
```
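The same `ast` approach extends to other checklist items; for example, a minimal sketch flagging bare `except:` clauses (the function name is illustrative):

```python
import ast

def find_bare_excepts(source: str) -> list[int]:
    """Return line numbers of bare ``except:`` handlers."""
    tree = ast.parse(source)
    return [
        node.lineno
        for node in ast.walk(tree)
        # ExceptHandler.type is None exactly when the clause is bare.
        if isinstance(node, ast.ExceptHandler) and node.type is None
    ]
```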
## Step 2: Refactor for Best Practices
Apply improvements in place. Always back up originals first.
### 2a. Add docstrings
```python
# Before
def load_data(path):
    import pandas as pd
    return pd.read_csv(path)

# After
def load_data(path: str) -> "pd.DataFrame":
    """Load a CSV dataset from disk.

    Parameters
    ----------
    path : str
        Path to the CSV file (relative to project root).

    Returns
    -------
    pd.DataFrame
        Raw dataset with original column names preserved.
    """
    import pandas as pd
    return pd.read_csv(path)
```
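To measure how much of a module still needs this treatment, a short `ast` pass can report public functions and classes without docstrings; a minimal sketch, assuming the file under audit is `analysis.py`:

```python
import ast
import pathlib

tree = ast.parse(pathlib.Path("analysis.py").read_text())
for node in ast.walk(tree):
    # Only public defs are expected to carry docstrings.
    if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
        if not node.name.startswith("_") and ast.get_docstring(node) is None:
            print(f"{node.name} (line {node.lineno}): missing docstring")
```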
### 2b. Parameterize hardcoded values
```python
import argparse
from pathlib import Path

import pandas as pd

def parse_args():
    parser = argparse.ArgumentParser()
    parser.add_argument("--data", type=Path, default=Path("data/raw.csv"))
    parser.add_argument("--output", type=Path, default=Path("results/"))
    return parser.parse_args()

args = parse_args()
df = pd.read_csv(args.data)
args.output.mkdir(parents=True, exist_ok=True)
```
### 2c. Set random seeds
```python
import random

import numpy as np

SEED = 42  # document this constant at module level
random.seed(SEED)
np.random.seed(SEED)

# scikit-learn
from sklearn.ensemble import RandomForestClassifier
clf = RandomForestClassifier(random_state=SEED)

# PyTorch
import torch
torch.manual_seed(SEED)
torch.backends.cudnn.deterministic = True
```
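In larger projects it is convenient to consolidate all of this into a single helper that every entry point calls; a minimal sketch (the `set_seed` name is illustrative, and the PyTorch branch is skipped when torch is not installed):

```python
import random

import numpy as np

def set_seed(seed: int = 42) -> None:
    """Seed every random number generator the project uses."""
    random.seed(seed)
    np.random.seed(seed)
    try:
        import torch  # optional dependency
        torch.manual_seed(seed)
        torch.backends.cudnn.deterministic = True
    except ImportError:
        pass
```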
### 2d. Add error handling and logging
```python
import logging
from pathlib import Path

logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s [%(levelname)s] %(message)s",
)
logger = logging.getLogger(__name__)

def load_data(path: Path) -> "pd.DataFrame":
    """Load dataset with validation."""
    import pandas as pd
    if not path.exists():
        raise FileNotFoundError(f"Data file not found: {path}")
    logger.info("Loading data from %s", path)
    df = pd.read_csv(path)
    if df.empty:
        raise ValueError(f"Loaded dataframe is empty: {path}")
    logger.info("Loaded %d rows, %d columns", *df.shape)
    return df
```
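A typical entry point then converts these exceptions into a logged message and a nonzero exit code rather than a raw traceback; a minimal sketch reusing `load_data` and `logger` from the block above:

```python
import sys
from pathlib import Path

def main() -> int:
    """Run the pipeline and return a shell-style exit code."""
    try:
        df = load_data(Path("data/raw.csv"))
    except (FileNotFoundError, ValueError) as exc:
        logger.error("Pipeline aborted: %s", exc)
        return 1
    logger.info("Pipeline finished with %d rows", len(df))
    return 0

if __name__ == "__main__":
    sys.exit(main())
```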
## Step 3: Generate Environment Specifications
See `references/environment-setup.md` for full Dockerfile and Conda environment templates.
### requirements.txt (pip)
```text
pip install pipreqs
pipreqs src/ --savepath requirements.txt --force
```
Verify resolution:
```text
python -m venv .venv_test && source .venv_test/bin/activate
pip install -r requirements.txt
python -c "import pandas, numpy, sklearn"
deactivate && rm -rf .venv_test
```
### environment.yml (Conda)
```yaml
name: my-research-env
channels:
  - conda-forge
  - defaults
dependencies:
  - python=3.9
  - numpy=1.24.3
  - pandas=2.0.1
  - scikit-learn=1.2.2
  - matplotlib=3.7.1
  - pip:
      - some-pip-only-package==0.5.0
```

```text
conda env create -f environment.yml
conda activate my-research-env
```
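To confirm that the pins in `environment.yml` match what the active environment actually resolved, the installed versions can be read from package metadata; a minimal sketch using only the standard library:

```python
from importlib import metadata

# Print installed versions for comparison against environment.yml.
for package in ("numpy", "pandas", "scikit-learn", "matplotlib"):
    try:
        print(f"{package}=={metadata.version(package)}")
    except metadata.PackageNotFoundError:
        print(f"{package}: not installed")
```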
## Step 4: Create Documentation
### README structure
Generate a `README.md` containing at minimum:

```markdown
## Requirements
<!-- List Python version and key packages with versions -->

## Installation
conda env create -f environment.yml
conda activate my-research-env

## Data
<!-- Describe input data format, source, and where to place files -->

## Running the Analysis
python main.py --data data/raw.csv --output results/

## Expected Outputs
<!-- Describe files created and how to interpret them -->

## Reproducing Results
- Random seed: 42 (set in `config.py`)
- Hardware: results validated on CPU; GPU results may differ slightly
```
## Step 5: Validate Reproducibility

After all changes, verify that behaviour is unchanged:

```text
# 1. Run the full pipeline and capture output checksums
python main.py --data data/raw.csv --output results/
md5sum results/*.csv > checksums_refactored.md5
diff checksums_original.md5 checksums_refactored.md5

# 2. Run unit tests
pytest tests/ -v --tb=short

# 3. Confirm determinism across two clean runs
python main.py --output results_run1/
python main.py --output results_run2/
diff -r results_run1/ results_run2/
```
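Where `md5sum` is unavailable (macOS ships `md5` instead), the same checksum comparison can be done portably from Python; a minimal sketch using the standard library, assuming the two runs wrote to `results_run1/` and `results_run2/`:

```python
import hashlib
from pathlib import Path

def checksum(path: Path) -> str:
    """Return the MD5 hex digest of a file's contents."""
    return hashlib.md5(path.read_bytes()).hexdigest()

# Compare every CSV produced by the two pipeline runs.
for first in sorted(Path("results_run1").glob("*.csv")):
    second = Path("results_run2") / first.name
    status = "OK" if checksum(first) == checksum(second) else "MISMATCH"
    print(f"{first.name}: {status}")
```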
Reproducibility verification checklist:
- Output checksums match pre-refactor baseline
- All tests pass
- Pipeline runs twice and produces identical outputs
- `environment.yml` / `requirements.txt` installs cleanly in a fresh environment
- No absolute paths remain in source files
- Random seeds are set and documented
- All public functions have docstrings
- README contains complete reproduction instructions
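The determinism check can also live in the test suite so it runs alongside `pytest tests/`; a minimal sketch, assuming the pipeline exposes a seeded entry point such as a hypothetical `run_analysis(seed=...)` returning an array:

```python
import numpy as np

# Hypothetical import: substitute the project's real entry point.
from src.pipeline import run_analysis

def test_pipeline_is_deterministic():
    """Two runs with the same seed must produce identical output."""
    first = run_analysis(seed=42)
    second = run_analysis(seed=42)
    np.testing.assert_array_equal(first, second)
```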
## Best Practices Summary
- Relative paths only
- Pin dependency versions
- Set random seeds
- Docstrings on all public functions
- Validate outputs against a baseline
- Automate environment setup
## References
- `references/guide.md` — Comprehensive user guide
- `references/environment-setup.md` — Dockerfile and full environment templates
- `references/examples/` — Working code examples
- `references/api-docs/` — Complete API documentation
Skill ID: 455 | Version: 1.0 | License: MIT
## Output Requirements
Every final response should make these items explicit when they are relevant:
- Objective or requested deliverable
- Inputs used and assumptions introduced
- Workflow or decision path
- Core result, recommendation, or artifact
- Constraints, risks, caveats, or validation needs
- Unresolved items and next-step checks
## Error Handling
- If required inputs are missing, state exactly which fields are missing and request only the minimum additional information.
- If the task goes outside the documented scope, stop instead of guessing or silently widening the assignment.
- If `scripts/main.py` fails, report the failure point, summarize what can still be completed safely, and provide a manual fallback.
- Do not fabricate files, citations, data, search results, or execution outcomes.
## Input Validation
This skill accepts requests that match the documented purpose of `code-refactor-for-reproducibility` and include enough context to complete the workflow safely.
Do not continue the workflow when the request is out of scope, missing a critical input, or would require unsupported assumptions. Instead respond:
`code-refactor-for-reproducibility` only handles its documented workflow. Please provide the missing required inputs or switch to a more suitable skill.
## Response Template
Use the following fixed structure for non-trivial requests:
- Objective
- Inputs Received
- Assumptions
- Workflow
- Deliverable
- Risks and Limits
- Next Checks
If the request is simple, you may compress the structure, but still keep assumptions and limits explicit when they affect correctness.