OpenClaw openai-whisper-api

Transcribe audio via OpenAI Audio Transcriptions API (Whisper).

Install

Source · Clone the upstream repo:

git clone https://github.com/openclaw/openclaw

Claude Code · Install into ~/.claude/skills/:

T=$(mktemp -d) && git clone --depth=1 https://github.com/openclaw/openclaw "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/openai-whisper-api" ~/.claude/skills/openclaw-openclaw-openai-whisper-api && rm -rf "$T"

OpenClaw · Install into ~/.openclaw/skills/:

T=$(mktemp -d) && git clone --depth=1 https://github.com/openclaw/openclaw "$T" && mkdir -p ~/.openclaw/skills && cp -r "$T/skills/openai-whisper-api" ~/.openclaw/skills/openclaw-openclaw-openai-whisper-api && rm -rf "$T"

Manifest: skills/openai-whisper-api/SKILL.md
Source content

OpenAI Whisper API (curl)

Transcribe an audio file via OpenAI’s /v1/audio/transcriptions endpoint. Set OPENAI_BASE_URL to use an OpenAI-compatible proxy or local gateway.
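Under the hood this corresponds to a standard multipart upload to the transcriptions endpoint. A minimal hand-rolled equivalent with curl might look like the sketch below (the file path is illustrative; the request shape follows OpenAI's published Audio API):

```shell
# Requires OPENAI_API_KEY in the environment. The ${OPENAI_BASE_URL:-...}
# expansion falls back to the official API host when no proxy is configured.
curl -sS "${OPENAI_BASE_URL:-https://api.openai.com/v1}/audio/transcriptions" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -F model=whisper-1 \
  -F file=@/path/to/audio.m4a
```

The script wraps a request of this shape, so anything it does can also be reproduced or debugged with plain curl.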

Quick start

{baseDir}/scripts/transcribe.sh /path/to/audio.m4a

Defaults:

  • Model: whisper-1
  • Output: <input>.txt

Useful flags

{baseDir}/scripts/transcribe.sh /path/to/audio.ogg --model whisper-1 --out /tmp/transcript.txt
{baseDir}/scripts/transcribe.sh /path/to/audio.m4a --language en
{baseDir}/scripts/transcribe.sh /path/to/audio.m4a --prompt "Speaker names: Peter, Daniel"
{baseDir}/scripts/transcribe.sh /path/to/audio.m4a --json --out /tmp/transcript.json
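With --json, the output file presumably holds the API's JSON response. Assuming the default response shape OpenAI documents for this endpoint (a top-level "text" field), the plain transcript can be pulled out with jq; this is a sketch, and the exact shape depends on the response_format the script requests:

```shell
# Extract the transcript text from a saved JSON response.
jq -r '.text' /tmp/transcript.json
```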

API key

Set OPENAI_API_KEY, or configure it in the active OpenClaw config file ($OPENCLAW_CONFIG_PATH, default ~/.openclaw/openclaw.json). Optionally set OPENAI_BASE_URL (for example http://127.0.0.1:51805/v1) to use an OpenAI-compatible proxy or local gateway:

{
  "skills": {
    "openai-whisper-api": {
      "apiKey": "OPENAI_KEY_HERE"
    }
  }
}
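As an alternative to the config file, the same settings can be supplied as environment variables in the shell that runs the script (a sketch; OPENAI_KEY_HERE is a placeholder, not a real key):

```shell
# Placeholder key; replace with a real one before transcribing.
export OPENAI_API_KEY="OPENAI_KEY_HERE"
# Optional: route requests through an OpenAI-compatible proxy or local gateway.
export OPENAI_BASE_URL="http://127.0.0.1:51805/v1"
```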