# generate-podcast-clips
Use this skill when the user wants to turn a long podcast, interview, webinar, or talking-head video into multiple short clips for TikTok, Reels, or YouTube Shorts. It wraps Subscut's podcast clipping API in a narrow CLI interface with explicit env requirements and predictable JSON output.
Install:

```bash
git clone https://github.com/openclaw/skills
```

Or copy just this skill in one step:

```bash
T=$(mktemp -d) && git clone --depth=1 https://github.com/openclaw/skills "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/arpittiwari24/podcast-clipper-subscut" ~/.claude/skills/clawdbot-skills-generate-podcast-clips && rm -rf "$T"
```
`skills/arpittiwari24/podcast-clipper-subscut/SKILL.md`

# Generate Podcast Clips
Use this skill to convert a long-form spoken video into multiple short clips through the Subscut `/podcast-to-clips` API.
## What This Skill Does
The skill is an opinionated wrapper around the API. In outcome terms, it:
- extracts up to 20 strong short clips from a long-form video
- favors viral and high-retention spoken moments
- adds captions with selectable styles
- supports two render formats: `dynamic` (auto-reframing) and `hook_frame` (original frame + title card)
- returns titles, scores, and rendered clip URLs
Think in outcomes, not transport:
- Good framing: "Extract viral short-form content from this podcast"
- Bad framing: "Call some video API"
## When To Use
Use this skill when:
- the input is a long podcast, interview, webinar, or talking-head video
- the user wants growth, repurposing, shorts, reels, or TikTok content
- the user wants minimal manual editing
Avoid this skill when:
- the source is already short-form
- the content is mostly non-speech
- the user wants manual, frame-by-frame editing decisions
Do not use it as a generic video editing tool.
## Input Contract
Use this compact input shape when planning or explaining the tool call:
```json
{
  "video_url": "https://example.com/video.mp4",
  "max_clips": 5,
  "style": "viral",
  "format": "dynamic",
  "captions": true,
  "clip_duration": { "min": 20, "max": 60 }
}
```
### Field Reference
| Field | Type | Default | Notes |
|---|---|---|---|
| `video_url` | string | — | Required. Any HTTP/HTTPS URL. YouTube, direct MP4, Google Drive. |
| `max_clips` | integer | | Range: 1–20. Short videos (≤3 min) are capped at 2 clips automatically. |
| `style` | string | `viral` | Caption style. See styles below. |
| `format` | string | `dynamic` | Render format. See formats below. |
| `captions` | boolean | | Whether to burn in captions. |
| `clip_duration.min` | integer | | Minimum clip length in seconds. Floor: 10s. |
| `clip_duration.max` | integer | | Maximum clip length in seconds. Ceiling: 60s. Must be ≥ min. |
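The constraints in the table above can be checked before calling the API. A minimal Python sketch (the wrapper itself is an npm script, so this is purely illustrative; `validate_clip_request` is a hypothetical helper, not part of the skill):

```python
def validate_clip_request(req: dict) -> dict:
    """Validate a /podcast-to-clips request against the documented field rules."""
    # video_url is required and must be an HTTP/HTTPS URL.
    if not str(req.get("video_url", "")).startswith(("http://", "https://")):
        raise ValueError("video_url must be an HTTP/HTTPS URL")

    # max_clips, when given, must fall in the documented 1-20 range.
    max_clips = req.get("max_clips")
    if max_clips is not None and not 1 <= max_clips <= 20:
        raise ValueError("max_clips must be between 1 and 20")

    # clip_duration has a 10s floor, a 60s ceiling, and max must be >= min.
    dur = req.get("clip_duration", {})
    lo, hi = dur.get("min"), dur.get("max")
    if lo is not None and lo < 10:
        raise ValueError("clip_duration.min has a floor of 10 seconds")
    if hi is not None and hi > 60:
        raise ValueError("clip_duration.max has a ceiling of 60 seconds")
    if lo is not None and hi is not None and hi < lo:
        raise ValueError("clip_duration.max must be >= clip_duration.min")
    return req
```

Validating up front lets the agent surface a clear error instead of a failed API round-trip.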
### Caption Styles (`style`)

| Value | Description |
|---|---|
| `viral` | Bold animated-word captions (MrBeast style). Default. |
| | Alias for `viral`. |
| | Alias for the single-highlighted-word style below. |
| | Single highlighted word, clean font. |
| | Plain white subtitles, no animation. |
| | Alias for the plain-subtitle style above. |
### Render Formats (`format`)

| Value | Description |
|---|---|
| `dynamic` | Auto-detects split-screen vs. solo framing, reframes to 9:16. Default. |
| `hook_frame` | Preserves the original video frame, adds a title card at the top and captions at the bottom. |

Use `hook_frame` when the video is already vertical or the user wants the title displayed prominently.
Use `dynamic` (default) for horizontal/landscape podcasts with one or two speakers.
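That selection rule is simple enough to sketch. A hypothetical Python helper (`choose_format` is not part of the skill; it just encodes the guidance above):

```python
def choose_format(width: int, height: int, wants_title_card: bool = False) -> str:
    """Pick a render format per the guidance above.

    hook_frame: video is already vertical, or the user wants a prominent title.
    dynamic:    horizontal/landscape sources (the default).
    """
    if wants_title_card or height >= width:
        return "hook_frame"
    return "dynamic"
```

For a 1920x1080 landscape podcast this yields `dynamic`; a 1080x1920 vertical source or an explicit title-card request yields `hook_frame`.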
## Output Contract
Expect JSON in this shape:
```json
{
  "clips": [
    {
      "video_url": "https://...",
      "title": "Why Most Founders Get This Wrong",
      "score": 0.92,
      "start": 142.5,
      "end": 198.3
    }
  ]
}
```
`score` is a 0–1 float representing clip virality confidence. Higher is better.
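Consumers will usually want the strongest clips first. A minimal Python sketch that parses the output contract and ranks by `score` (`best_clips` is a hypothetical helper, not part of the skill):

```python
import json


def best_clips(response_text: str, limit: int = 3) -> list[dict]:
    """Parse the /podcast-to-clips response and return the top-scoring clips."""
    clips = json.loads(response_text)["clips"]
    # Higher score = higher virality confidence, so sort descending.
    return sorted(clips, key=lambda c: c["score"], reverse=True)[:limit]
```

The agent can then present titles, URLs, and `start`/`end` timestamps from the ranked list.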
## CLI Entry Point
Once the skill is installed, the agent runtime should invoke the bundled CLI wrapper:
```bash
npm --prefix .agents/skills/generate-podcast-clips run generate-podcast-clips -- \
  --video-url "https://example.com/podcast.mp4" \
  --max-clips 5 \
  --clip-style viral \
  --format dynamic \
  --captions true \
  --min-clip-duration 20 \
  --max-clip-duration 60
```
Required environment variables:

- `SUBSCUT_API_KEY` (or pass `--api-key`)

Default base URL: `https://subscut.com`
## Install Model
This skill is meant to be published once by Subscut and then installed by users through the marketplace.
End-user flow:

- Install the published skill from ClawHub / OpenClaw.
- Set `SUBSCUT_API_KEY`.
- Let the agent call the skill when it needs to turn a long-form spoken video into short clips.
The skill should not ask users to publish or package anything themselves.
## Agent Workflow
- Confirm the source is a long-form spoken video.
- Prefer the CLI wrapper over hand-written `curl`.
- Keep the request simple unless the user asks for custom clip counts, durations, style, or format.
- Default `format` to `dynamic` unless the user explicitly wants a title card (`hook_frame`).
- Return the resulting clip URLs, titles, scores, and timestamps.
- If the API fails, surface the status and response body clearly.
- Keep a human in the loop before final publishing if the downstream workflow is public.
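The CLI invocation itself can be assembled programmatically. A Python sketch that builds the argv for the wrapper shown in the CLI Entry Point section (`build_cli_args` is a hypothetical helper; the defaults mirror the example invocation, not documented API defaults):

```python
def build_cli_args(video_url: str, max_clips: int = 5, clip_style: str = "viral",
                   fmt: str = "dynamic", captions: bool = True,
                   min_dur: int = 20, max_dur: int = 60) -> list[str]:
    """Assemble argv for the bundled npm CLI wrapper."""
    return [
        "npm", "--prefix", ".agents/skills/generate-podcast-clips",
        "run", "generate-podcast-clips", "--",
        "--video-url", video_url,
        "--max-clips", str(max_clips),
        "--clip-style", clip_style,
        "--format", fmt,
        "--captions", "true" if captions else "false",
        "--min-clip-duration", str(min_dur),
        "--max-clip-duration", str(max_dur),
    ]
```

Running this argv via a subprocess (with `SUBSCUT_API_KEY` in the environment) and checking the exit code is one way to satisfy the "surface the status and response body clearly" step: on a non-zero exit, report stdout and stderr verbatim rather than swallowing them.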
## Natural-Language Triggers
Likely user intents that should map to this skill:
- "Turn this podcast into reels"
- "Make shorts from this interview"
- "Repurpose my long video into clips"
- "Find the viral moments in this podcast"
- "Generate YouTube Shorts from this episode"
- "Make clips with a title card at the top"
- "Give me clean subtitle clips"
- "Extract 10 clips from this webinar"
## Notes
- The API route is `POST /podcast-to-clips`.
- `style` maps to the API `style` field; `clip-style` is used as the CLI flag name.
- `format` controls render layout: `dynamic` (default) or `hook_frame`.
- `clip_duration.min` and `clip_duration.max` let callers control the clip length window.
- Short videos (≤3 min) are automatically capped at 2 clips regardless of `max_clips`.
- The script is executed through the skill-local `npm` script.
- The skill is intentionally opinionated and keeps the parameter surface small for better agent usage.
- This package is for installation and runtime usage, not for publisher-only deployment steps.