Openclaw-config openai-whisper
Local speech-to-text with the Whisper CLI (no API key). Don't use it if you need cloud transcription with no local model; prefer openai-whisper-api instead.
Install
source · Clone the upstream repo
git clone https://github.com/unisone/openclaw-config
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/unisone/openclaw-config "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/openai-whisper" ~/.claude/skills/unisone-openclaw-config-openai-whisper && rm -rf "$T"
OpenClaw · Install into ~/.openclaw/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/unisone/openclaw-config "$T" && mkdir -p ~/.openclaw/skills && cp -r "$T/skills/openai-whisper" ~/.openclaw/skills/unisone-openclaw-config-openai-whisper && rm -rf "$T"
manifest:
skills/openai-whisper/SKILL.md
Whisper (CLI)
Use whisper to transcribe audio locally.
Routing
- ❌ Don't use if you need cloud transcription (no local model / no local compute).
- ✅ Prefer openai-whisper-api for cloud transcription.
Quick start
whisper /path/audio.mp3 --model medium --output_format txt --output_dir .
whisper /path/audio.m4a --task translate --output_format srt
Notes
- Models download to ~/.cache/whisper on first run.
- --model defaults to turbo on this install.
- Use smaller models for speed, larger for accuracy.
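The notes above suggest picking a model size per job. As a minimal sketch of how that might be scripted (the transcribe_all helper and the WHISPER_BIN override are hypothetical, not part of this skill), a loop over a directory of recordings could look like:

```shell
# Sketch: transcribe every .mp3 in a directory with the local whisper CLI.
# transcribe_all and WHISPER_BIN are illustrative only; WHISPER_BIN defaults
# to the real `whisper` binary on PATH.
transcribe_all() {
  in_dir=$1
  out_dir=$2
  mkdir -p "$out_dir"
  for f in "$in_dir"/*.mp3; do
    [ -e "$f" ] || continue  # skip when the glob matches no files
    "${WHISPER_BIN:-whisper}" "$f" --model small \
      --output_format txt --output_dir "$out_dir"
  done
}
```

Invoke as transcribe_all ./audio ./transcripts; swap --model small for medium or turbo when accuracy matters more than speed.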