install
source · Clone the upstream repo
git clone https://github.com/MacPhobos/research-mind
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/MacPhobos/research-mind "$T" && mkdir -p ~/.claude/skills && cp -r "$T/.claude/skills/universal-data-reporting-pipelines" ~/.claude/skills/macphobos-research-mind-universal-data-reporting-pipelines && rm -rf "$T"
manifest:
.claude/skills/universal-data-reporting-pipelines/skill.md

source content
Reporting Pipelines
Overview
Your reporting pattern is consistent across repos: run a CLI or script that emits structured data, then export CSV/JSON/markdown reports with timestamped filenames into
reports/ or tests/results/.
GitFlow Analytics Pattern
# Basic run
gitflow-analytics -c config.yaml --weeks 8 --output ./reports

# Explicit analyze + CSV
gitflow-analytics analyze -c config.yaml --weeks 12 --output ./reports --generate-csv
Outputs include CSV + markdown narrative reports with date suffixes.
EDGAR CSV Export Pattern
edgar/scripts/create_csv_reports.py reads a JSON results file and emits:
- executive_compensation_<timestamp>.csv
- top_25_executives_<timestamp>.csv
- company_summary_<timestamp>.csv
This script uses pandas for sorting and percentile calculations.
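The script itself is not reproduced here, but a minimal sketch of the same shape is below. The field names (`name`, `company`, `total_compensation`) and the percentile column are assumptions for illustration, not the real script's schema:

```python
import json
import time

import pandas as pd


def export_compensation_csvs(results_path: str, out_dir: str = ".") -> list[str]:
    """Read a JSON results file and emit three timestamped CSVs (hypothetical schema)."""
    with open(results_path) as f:
        df = pd.DataFrame(json.load(f))

    # pandas handles the sorting and percentile work
    df = df.sort_values("total_compensation", ascending=False)
    df["pay_percentile"] = df["total_compensation"].rank(pct=True) * 100

    # One summary row per company
    summary = df.groupby("company", as_index=False)["total_compensation"].sum()

    ts = time.strftime("%Y%m%d_%H%M%S")
    outputs = {
        f"executive_compensation_{ts}.csv": df,
        f"top_25_executives_{ts}.csv": df.head(25),
        f"company_summary_{ts}.csv": summary,
    }
    paths = []
    for filename, frame in outputs.items():
        path = f"{out_dir}/{filename}"
        frame.to_csv(path, index=False)
        paths.append(path)
    return paths
```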
Standard Pipeline Steps
- Collect base data (CLI or JSON artifacts)
- Normalize into rows/records
- Export CSV/JSON/markdown with timestamp suffixes
- Summarize key metrics in stdout
- Store outputs in reports/ or tests/results/
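The steps above can be sketched as a single function. The directory default and the `export_` prefix are illustrative, not fixed by any repo:

```python
import csv
import time
from pathlib import Path


def run_report(records: list[dict], out_dir: str = "reports") -> Path:
    """Normalize records, export a timestamped CSV, and summarize on stdout."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)  # ensure the output directory exists

    # Normalize into rows with a consistent set of columns
    fields = sorted({key for record in records for key in record})
    rows = [{f: record.get(f, "") for f in fields} for record in records]

    # Export with a timestamp suffix
    path = out / f"export_{time.strftime('%Y%m%d_%H%M%S')}.csv"
    with path.open("w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fields)
        writer.writeheader()
        writer.writerows(rows)

    # Summarize key metrics on stdout
    print(f"Wrote {len(rows)} rows to {path}")
    return path
```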
Naming Conventions
- Use YYYYMMDD or YYYYMMDD_HHMMSS suffixes
- Keep one output directory per repo (reports/ or tests/results/)
- Prefer explicit prefixes (e.g., narrative_report_, comprehensive_export_)
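These conventions could be centralized in a small helper; this is one possible sketch, with prefixes taken from the examples above:

```python
from datetime import datetime


def report_filename(prefix: str, ext: str = "csv", with_time: bool = False) -> str:
    """Build a timestamped report filename, e.g. narrative_report_20240101.csv."""
    fmt = "%Y%m%d_%H%M%S" if with_time else "%Y%m%d"
    return f"{prefix}{datetime.now().strftime(fmt)}.{ext}"
```

Keeping the suffix format in one place avoids repos drifting between date-only and date-time names.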
Troubleshooting
- Missing output: ensure output directory exists and is writable.
- Large CSVs: filter or aggregate before export; keep summary CSVs for quick review.
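For the large-CSV case, aggregating before export might look like the following pandas sketch; the group and value column names are hypothetical:

```python
import pandas as pd


def summarize_for_export(df: pd.DataFrame, group_col: str, value_col: str) -> pd.DataFrame:
    """Collapse a large frame to one summary row per group before writing CSV."""
    return df.groupby(group_col)[value_col].agg(["sum", "mean", "count"]).reset_index()
```

Write the full frame only when needed, and keep the summary frame as the quick-review CSV.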
Related Skills
universal/data/sec-edgar-pipeline
toolchains/universal/infrastructure/github-actions