OpenClaw openai-whisper
Local speech-to-text with the Whisper CLI (no API key).
Install
source · Clone the upstream repo
git clone https://github.com/openclaw/openclaw
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/openclaw/openclaw "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/openai-whisper" ~/.claude/skills/openclaw-openclaw-openai-whisper && rm -rf "$T"
OpenClaw · Install into ~/.openclaw/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/openclaw/openclaw "$T" && mkdir -p ~/.openclaw/skills && cp -r "$T/skills/openai-whisper" ~/.openclaw/skills/openclaw-openclaw-openai-whisper && rm -rf "$T"
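After either install command, the skill should appear as a directory under the target skills folder. A quick sketch to confirm (the `~/.openclaw/skills` path follows the OpenClaw command above):

```shell
# List skills installed for OpenClaw, or note that none are present yet
ls -1 ~/.openclaw/skills 2>/dev/null || echo "no skills installed"
```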
Manifest: skills/openai-whisper/SKILL.md
Whisper (CLI)
Use whisper to transcribe audio locally.
Quick start
whisper /path/audio.mp3 --model medium --output_format txt --output_dir .
whisper /path/audio.m4a --task translate --output_format srt
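Whisper names each output file after the input's basename plus the chosen format's extension, written into `--output_dir`. A small sketch predicting where the first quick-start command's transcript lands (the input path is an example):

```shell
# Predict the transcript path for `whisper /path/audio.mp3 --output_format txt --output_dir .`
in=/path/audio.mp3
base=$(basename "${in%.*}")   # strip the extension, then the directory -> "audio"
echo "./${base}.txt"          # output_dir "." + basename + ".txt"
```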
Notes
- Models download to ~/.cache/whisper on first run.
- --model defaults to turbo on this install.
- Use smaller models for speed, larger for accuracy.
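Since models are cached under ~/.cache/whisper, you can check what has already been downloaded before picking a model (a sketch; the path is the default from the note above):

```shell
# Show cached Whisper model files, or a note if none have been downloaded yet
ls -lh ~/.cache/whisper 2>/dev/null || echo "no models downloaded yet"
```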