Skillsbench sampling-and-indexing
Standardize video sampling and frame indexing so interval instructions and mask frames stay aligned with a valid key/index scheme.
install
source · Clone the upstream repo
git clone https://github.com/benchflow-ai/skillsbench
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/benchflow-ai/skillsbench "$T" && mkdir -p ~/.claude/skills && cp -r "$T/tasks/dynamic-object-aware-egomotion/environment/skills/sampling-and-indexing" ~/.claude/skills/benchflow-ai-skillsbench-sampling-and-indexing && rm -rf "$T"
manifest:
tasks/dynamic-object-aware-egomotion/environment/skills/sampling-and-indexing/SKILL.md
When to use
- You need to decide a sampling stride/FPS and ensure all downstream outputs (interval instructions, per-frame artifacts, etc.) cover the same frame range with consistent indices.
Core steps
- Read video metadata: frame count, fps, resolution.
- Choose a sampling strategy (e.g., every 10 frames or a target of ~10–15 fps) to produce `sample_ids`.
- Only produce instructions and masks for `sample_ids`; the max index must be `< total_frames`.
- Use a strict interval key format such as `"{start}->{end}"` (integers only). Decide (and document) whether `end` is inclusive or exclusive, and be consistent.
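The interval-key rule above can be sketched as a small helper. This is a hypothetical illustration (the function name `interval_keys` is not part of the skill), assuming an exclusive-`end` convention where each key spans consecutive sampled frames:

```python
# Hypothetical helper: build "{start}->{end}" keys from sampled frame ids,
# using an exclusive-end convention between consecutive samples.
def interval_keys(sample_ids):
    # Pair each sampled frame with its successor: [a, b) per key.
    return [f"{a}->{b}" for a, b in zip(sample_ids, sample_ids[1:])]

print(interval_keys([0, 10, 20, 25]))  # ['0->10', '10->20', '20->25']
```

If you choose an inclusive `end` instead, document it once and apply it everywhere; mixing conventions is the usual source of off-by-one mismatches between instructions and masks.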
Pseudocode
```python
import cv2

VIDEO_PATH = "<path/to/video>"
cap = cv2.VideoCapture(VIDEO_PATH)
n = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
fps = cap.get(cv2.CAP_PROP_FPS)
cap.release()

step = 10  # example stride
sample_ids = list(range(0, n, step))
if sample_ids[-1] != n - 1:
    sample_ids.append(n - 1)  # always include the final frame

# Generate all downstream outputs only for sample_ids
```
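Once `sample_ids` is fixed, per-frame artifacts should be keyed by those ids and nothing else. A minimal sketch, assuming NumPy NPZ storage and placeholder masks (the filename `masks.npz` and mask shape are illustrative):

```python
import numpy as np

# Hypothetical sketch: store one mask per sampled frame, keyed by its frame
# index, so the NPZ contains exactly the sampled frames and no extras.
sample_ids = [0, 10, 20]
masks = {str(i): np.zeros((4, 4), dtype=np.uint8) for i in sample_ids}  # placeholder masks
np.savez("masks.npz", **masks)

# Round-trip check: the stored keys are exactly the sampled frame indices.
loaded = np.load("masks.npz")
assert sorted(int(k) for k in loaded.files) == sample_ids
```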
Self-check list
- `sample_ids` strictly increasing, all `< total_frames`.
- Output coverage: max index matches `sample_ids[-1]` (or matches your documented sampling policy).
- JSON keys are plain `start->end`, no extra text.
- Any per-frame artifact store (e.g., NPZ) contains exactly the sampled frames and no extras.
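The checks above are mechanical, so they can be run as assertions before emitting outputs. A minimal sketch (the function name `validate` and its signature are assumptions, not part of the skill), using the exclusive-`end` key convention:

```python
import re

def validate(sample_ids, total_frames, interval_keys):
    # sample_ids strictly increasing, all < total_frames.
    assert all(a < b for a, b in zip(sample_ids, sample_ids[1:]))
    assert sample_ids[-1] < total_frames
    # Keys are plain "start->end", integers only, no extra text.
    assert all(re.fullmatch(r"\d+->\d+", k) for k in interval_keys)
    # Coverage: the largest end index matches the last sampled frame.
    assert max(int(k.split("->")[1]) for k in interval_keys) == sample_ids[-1]

validate([0, 10, 20], 25, ["0->10", "10->20"])  # passes silently
```

Running this at the end of the pipeline catches index drift early, instead of at evaluation time.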