Skills youtube-transcription

install
source · Clone the upstream repo
git clone https://github.com/TerminalSkills/skills
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/TerminalSkills/skills "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/youtube-transcription" ~/.claude/skills/terminalskills-skills-youtube-transcription && rm -rf "$T"
manifest: skills/youtube-transcription/SKILL.md
safety · automated scan (medium risk)
This is a pattern-based risk scan, not a security review. Our crawler flagged:
  • uses sudo
  • pip install
Always read a skill's source content before installing. Patterns alone don't mean the skill is malicious, but they warrant attention.
source content

YouTube Video Transcription

Transcribe YouTube videos to text using OpenAI Whisper and yt-dlp.

Overview

This skill downloads audio from YouTube videos using yt-dlp and transcribes it using OpenAI's Whisper model. Supports multiple output formats (txt, srt, vtt, json) and various model sizes for different accuracy/speed tradeoffs.

Instructions

1. Install dependencies

# Install whisper and yt-dlp
pip install openai-whisper yt-dlp

# Verify ffmpeg is installed (required for audio processing)
ffmpeg -version

If ffmpeg is missing, install it with your system's package manager:

sudo apt install ffmpeg   # Debian/Ubuntu
brew install ffmpeg       # macOS
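Before running the pipeline, it can help to confirm all three tools are actually on PATH. A minimal sketch using only the standard library; `missing_tools` is a hypothetical helper, not part of this skill:

```python
import shutil

def missing_tools(tools=("ffmpeg", "yt-dlp", "whisper")):
    """Return the subset of required CLI tools not found on PATH."""
    return [t for t in tools if shutil.which(t) is None]

if __name__ == "__main__":
    absent = missing_tools()
    if absent:
        print("Missing:", ", ".join(absent))
    else:
        print("All dependencies found.")
```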

2. Download audio from YouTube

# Download best audio quality as WAV
yt-dlp -x --audio-format wav -o "%(title)s.%(ext)s" "YOUTUBE_URL"

# Download as MP3 (smaller file)
yt-dlp -x --audio-format mp3 -o "%(title)s.%(ext)s" "YOUTUBE_URL"

# Download with video ID as filename (safer for special characters)
yt-dlp -x --audio-format wav -o "%(id)s.%(ext)s" "YOUTUBE_URL"
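If you drive yt-dlp from Python rather than a shell, passing the arguments as a list avoids quoting problems with URLs and templates. A sketch using the video-ID template from above; `build_ytdlp_cmd` is a hypothetical helper, not part of yt-dlp's API:

```python
def build_ytdlp_cmd(url, audio_format="wav", template="%(id)s.%(ext)s"):
    """Build the yt-dlp argument list for an audio-only download."""
    return [
        "yt-dlp",
        "-x",                           # extract audio only
        "--audio-format", audio_format,
        "-o", template,                 # output filename template
        url,
    ]

# Run it with, e.g.:
# import subprocess
# subprocess.run(build_ytdlp_cmd("https://www.youtube.com/watch?v=VIDEO_ID"), check=True)
```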

3. Choose Whisper model

| Model  | Parameters | VRAM   | Relative Speed | Use Case              |
|--------|------------|--------|----------------|-----------------------|
| tiny   | 39M        | ~1 GB  | ~32x           | Quick drafts, testing |
| base   | 74M        | ~1 GB  | ~16x           | Fast transcription    |
| small  | 244M       | ~2 GB  | ~6x            | Good balance          |
| medium | 769M       | ~5 GB  | ~2x            | High accuracy         |
| large  | 1550M      | ~10 GB | 1x             | Best accuracy         |

English-only models (tiny.en, base.en, small.en, medium.en) are faster for English content.
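The table above can be turned into a simple selection rule when you know how much VRAM is available. A sketch under that assumption; `pick_model` and its thresholds are illustrative approximations, not hard limits from Whisper:

```python
def pick_model(vram_gb, english_only=False):
    """Pick a Whisper model size from approximate available VRAM (GB)."""
    if vram_gb >= 10:
        name = "large"
    elif vram_gb >= 5:
        name = "medium"
    elif vram_gb >= 2:
        name = "small"
    else:
        name = "base"
    # "large" has no English-only variant
    if english_only and name != "large":
        name += ".en"
    return name
```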

4. Run transcription

CLI approach:

# Basic transcription (auto-detect language)
whisper audio.wav --model medium

# Specify language for better accuracy
whisper audio.wav --model medium --language en

# Output specific format
whisper audio.wav --model medium --output_format srt

# All formats at once
whisper audio.wav --model medium --output_format all

# Specify output directory
whisper audio.wav --model medium --output_dir ./transcripts

Python approach:

import whisper

# Load model (downloads on first run)
model = whisper.load_model("medium")

# Transcribe
result = model.transcribe("audio.wav", language="en")

# Get plain text
print(result["text"])

# Get segments with timestamps
for segment in result["segments"]:
    print(f"[{segment['start']:.2f} - {segment['end']:.2f}] {segment['text']}")
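If you already have a result dict from the Python API, you can build SRT output by hand instead of re-running the CLI. A sketch; `srt_timestamp` and `segments_to_srt` are hypothetical helpers that follow the standard SubRip layout:

```python
def srt_timestamp(seconds):
    """Format seconds as HH:MM:SS,mmm per the SRT convention."""
    ms = round(seconds * 1000)
    h, ms = divmod(ms, 3_600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def segments_to_srt(segments):
    """Render Whisper segments as numbered SubRip blocks."""
    blocks = []
    for i, seg in enumerate(segments, start=1):
        blocks.append(
            f"{i}\n"
            f"{srt_timestamp(seg['start'])} --> {srt_timestamp(seg['end'])}\n"
            f"{seg['text'].strip()}"
        )
    return "\n\n".join(blocks) + "\n"
```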

5. One-liner pipeline

Combine download and transcription:

# Download and transcribe in one command
yt-dlp -x --audio-format wav -o "audio.wav" "YOUTUBE_URL" && whisper audio.wav --model medium --output_format all

6. Alternative: yt-whisper tool

For a simpler workflow, use the dedicated yt-whisper package:

# Install
pip install git+https://github.com/m1guelpf/yt-whisper.git

# Transcribe directly from URL
yt_whisper "https://www.youtube.com/watch?v=VIDEO_ID"

# With options
yt_whisper "YOUTUBE_URL" --model medium --language en --output_format srt

Output Formats

| Format | Extension | Description                              |
|--------|-----------|------------------------------------------|
| txt    | .txt      | Plain text transcript                    |
| srt    | .srt      | SubRip subtitle format (with timestamps) |
| vtt    | .vtt      | WebVTT subtitle format                   |
| tsv    | .tsv      | Tab-separated values                     |
| json   | .json     | Full data with word-level timestamps     |
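The tab-separated format can also be produced directly from a result dict. A sketch that approximates the CLI's .tsv output (start/end in milliseconds); treat the exact column layout as an assumption, and `segments_to_tsv` as a hypothetical helper:

```python
def segments_to_tsv(segments):
    """Render Whisper segments as start/end/text rows, times in milliseconds."""
    lines = ["start\tend\ttext"]
    for seg in segments:
        lines.append(
            f"{round(seg['start'] * 1000)}\t{round(seg['end'] * 1000)}\t{seg['text'].strip()}"
        )
    return "\n".join(lines) + "\n"
```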

Examples

<example>
User: Transcribe this YouTube video to text
Steps:
1. yt-dlp -x --audio-format wav -o "video.wav" "https://www.youtube.com/watch?v=dQw4w9WgXcQ"
2. whisper video.wav --model medium --language en --output_format txt
Output: video.txt with full transcript
</example>

<example>
User: Generate SRT subtitles for a YouTube lecture
Steps:
1. yt-dlp -x --audio-format wav -o "lecture.wav" "https://www.youtube.com/watch?v=LECTURE_ID"
2. whisper lecture.wav --model medium --output_format srt
Output: lecture.srt with timestamped subtitles
</example>

<example>
User: Transcribe a Spanish YouTube video
Steps:
1. yt-dlp -x --audio-format wav -o "spanish.wav" "https://www.youtube.com/watch?v=VIDEO_ID"
2. whisper spanish.wav --model medium --language es --output_format all
Output: spanish.txt, spanish.srt, spanish.vtt, spanish.json
</example>

<example>
User: Quick transcription of a short video (speed over accuracy)
Command: yt-dlp -x --audio-format mp3 -o "quick.mp3" "URL" && whisper quick.mp3 --model tiny.en
</example>

<example>
User: Get transcript with timestamps in Python
```python
import whisper

model = whisper.load_model("medium")
result = model.transcribe("audio.wav")
for seg in result["segments"]:
    print(f"[{seg['start']:.1f}s] {seg['text']}")
```
</example>

Guidelines

  • Use the --language flag when you know the spoken language for significantly better accuracy
  • For long videos (>1 hour), use the small or medium model to balance speed and accuracy
  • English-only models (.en suffix) are faster and more accurate for English content
  • A GPU with CUDA dramatically speeds up transcription; CPU works but is 5-10x slower
  • If transcription fails, ensure ffmpeg is properly installed and on PATH
  • For videos with background music, larger models (medium/large) handle it better
  • Clean up audio files after transcription to save disk space
  • Use --output_format all to get every format at once, then choose what you need