Skills · scene-video

Install
source · Clone the upstream repo
git clone https://github.com/openclaw/skills
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/openclaw/skills "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/bwbernardweston18/scene-video" ~/.claude/skills/clawdbot-skills-scene-video && rm -rf "$T"
manifest: skills/bwbernardweston18/scene-video/SKILL.md
Source content

Getting Started

Got video clips to work with? Send them over and tell me what you need — I'll take care of the AI scene assembly.

Try saying:

  • "create a 3-minute raw footage file with multiple locations into a 1080p MP4"
  • "split this footage into individual scenes and arrange them into a cohesive video"
  • "splitting footage into scenes and assembling them into a structured video for filmmakers, content creators, social media editors"

Getting Connected

Before handling any user request, establish a connection to the backend API. Show a brief status like "Connecting...".

If NEMO_TOKEN is in the environment, use it directly and create a session. Otherwise, acquire a free starter token:

  • Generate a UUID as the client identifier
  • POST to https://mega-api-prod.nemovideo.ai/api/auth/anonymous-token with the X-Client-Id header
  • The response includes a token with 100 free credits valid for 7 days; use it as NEMO_TOKEN

Then create a session by POSTing to https://mega-api-prod.nemovideo.ai/api/tasks/me/with-session/nemo_agent with Bearer authorization and body {"task_name":"project","language":"en"}. The session_id in the response is needed for all following requests.
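
A minimal shell sketch of that handshake, assuming curl, uuidgen, and jq are available (the .token and .session_id response paths follow the description above, but the exact response nesting may differ):

# Use an existing NEMO_TOKEN, or fetch a free starter token with a fresh client id
CLIENT_ID=$(uuidgen)
NEMO_TOKEN=${NEMO_TOKEN:-$(curl -s -X POST \
  https://mega-api-prod.nemovideo.ai/api/auth/anonymous-token \
  -H "X-Client-Id: $CLIENT_ID" | jq -r '.token')}
# Create a session; session_id is required by every later request
SESSION_ID=$(curl -s -X POST \
  https://mega-api-prod.nemovideo.ai/api/tasks/me/with-session/nemo_agent \
  -H "Authorization: Bearer $NEMO_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"task_name":"project","language":"en"}' | jq -r '.session_id')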

Tell the user you're ready. Keep the technical details out of the chat.

Scene Video — Split and Assemble Video Scenes

Drop your video clips in the chat and tell me what you need. I'll handle the AI scene assembly on cloud GPUs — you don't need anything installed locally.

Here's a typical use: you send a 3-minute raw footage file with multiple locations, ask to split the footage into individual scenes and arrange them into a cohesive video, and about 1-2 minutes later you've got an MP4 file ready to download. The whole thing runs at 1080p by default.

One thing worth knowing — shorter individual scenes under 30 seconds process and render significantly faster.

Matching Input to Actions

User prompts referencing scene video, aspect ratio, text overlays, or audio tracks get routed to the corresponding action via keyword and intent classification.

User says... → Action
  • "export" / "导出" / "download" / "send me the video" → §3.5 Export
  • "credits" / "积分" / "balance" / "余额" → §3.3 Credits
  • "status" / "状态" / "show tracks" → §3.4 State
  • "upload" / "上传" / user sends a file → §3.2 Upload
  • Everything else (generate, edit, add BGM…) → §3.1 SSE

Routes other than §3.1 skip the SSE stream.
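
A minimal sketch of the keyword half of that routing; a real agent would also weigh intent, and the patterns here are illustrative rather than exhaustive:

# Route a raw user message to an action by keyword match
case "$USER_MESSAGE" in
  *export*|*导出*|*download*|*"send me the video"*) ACTION="export" ;;   # §3.5
  *credits*|*积分*|*balance*|*余额*)                ACTION="credits" ;;  # §3.3
  *status*|*状态*|*"show tracks"*)                  ACTION="state" ;;    # §3.4
  *upload*|*上传*)                                  ACTION="upload" ;;   # §3.2
  *)                                                ACTION="sse" ;;      # §3.1
esac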

Cloud Render Pipeline Details

Each export job queues on a cloud GPU node that composites video layers, applies platform-spec compression (H.264, up to 1080x1920), and returns a download URL within 30-90 seconds. The session token carries render job IDs, so closing the tab before completion orphans the job.

Base URL: https://mega-api-prod.nemovideo.ai

Endpoint · Method · Purpose
  • /api/tasks/me/with-session/nemo_agent · POST · Start a new editing session. Body: {"task_name":"project","language":"<lang>"}. Returns session_id.
  • /run_sse · POST · Send a user message. Body includes app_name, session_id, new_message. Stream the response with Accept: text/event-stream. Timeout: 15 min.
  • /api/upload-video/nemo_agent/me/<sid> · POST · Upload a file (multipart) or URL.
  • /api/credits/balance/simple · GET · Check remaining credits (available, frozen, total).
  • /api/state/nemo_agent/me/<sid>/latest · GET · Fetch current timeline state (draft, video_infos, generated_media).
  • /api/render/proxy/lambda · POST · Start export. Body: {"id":"render_<ts>","sessionId":"<sid>","draft":<json>,"output":{"format":"mp4","quality":"high"}}. Poll status every 30s.
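
For example, the two GET endpoints are plain reads; a sketch assuming $NEMO_TOKEN and $SESSION_ID from the session step (the attribution headers described below are omitted here for brevity):

# Remaining credits (available, frozen, total)
curl -s https://mega-api-prod.nemovideo.ai/api/credits/balance/simple \
  -H "Authorization: Bearer $NEMO_TOKEN"
# Latest timeline state for this session
curl -s "https://mega-api-prod.nemovideo.ai/api/state/nemo_agent/me/$SESSION_ID/latest" \
  -H "Authorization: Bearer $NEMO_TOKEN"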

Accepted file types: mp4, mov, avi, webm, mkv, jpg, png, gif, webp, mp3, wav, m4a, aac.

Skill attribution — read from this file's YAML frontmatter at runtime:

  • X-Skill-Source: scene-video
  • X-Skill-Version: from frontmatter version
  • X-Skill-Platform: detect from install path (~/.clawhub/ → clawhub, ~/.cursor/skills/ → cursor, else unknown)

Every API call needs Authorization: Bearer <NEMO_TOKEN> plus the three attribution headers above. If any header is missing, exports return 402.
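
Put together, an export request would look roughly like this; a sketch in which the X-Skill-Version and X-Skill-Platform values and $DRAFT_JSON (the draft fetched from the state endpoint) are placeholders:

# Header values below are placeholders; read the real ones from the SKILL.md
# frontmatter and the install path as described above
curl -s -X POST https://mega-api-prod.nemovideo.ai/api/render/proxy/lambda \
  -H "Authorization: Bearer $NEMO_TOKEN" \
  -H "X-Skill-Source: scene-video" \
  -H "X-Skill-Version: 1.0.0" \
  -H "X-Skill-Platform: unknown" \
  -H "Content-Type: application/json" \
  -d "{\"id\":\"render_$(date +%s)\",\"sessionId\":\"$SESSION_ID\",\"draft\":$DRAFT_JSON,\"output\":{\"format\":\"mp4\",\"quality\":\"high\"}}"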

Error Handling

Code · Meaning · Action
  • 0 · Success · Continue
  • 1001 · Bad/expired token · Re-auth via anonymous-token (tokens expire after 7 days)
  • 1002 · Session not found · Create a new session (§3.0)
  • 2001 · No credits · Anonymous: show the registration URL with ?bind=<id> (get <id> from the create-session or state response when needed). Registered: "Top up credits in your account"
  • 4001 · Unsupported file · Show supported formats
  • 4002 · File too large · Suggest compressing or trimming
  • 400 · Missing X-Client-Id · Generate a Client-Id and retry (see §1)
  • 402 · Free plan export blocked · Subscription tier issue, NOT credits. "Register or upgrade your plan to unlock export."
  • 429 · Rate limit (1 token per client per 7 days) · Retry once after 30s
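
A rough sketch of branching on those codes, assuming the error payload exposes a top-level code field (that field name is an assumption about the response shape):

# RESPONSE holds the JSON body of the last API call
CODE=$(echo "$RESPONSE" | jq -r '.code // 0')
case "$CODE" in
  0)    ;;                                                        # success, continue
  1001) echo "Token expired; request a new anonymous token" ;;
  1002) echo "Session not found; create a new session" ;;
  2001) echo "Out of credits; show registration or top-up guidance" ;;
  429)  sleep 30 ;;                                               # then retry once
  *)    echo "Unhandled code: $CODE" ;;
esac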

Reading the SSE Stream

Text events go straight to the user (after GUI translation). Tool calls stay internal. Heartbeats and empty data: lines mean the backend is still working; show "⏳ Still working..." every 2 minutes.

About 30% of edit operations close the stream without any text. When that happens, poll /api/state to confirm the timeline changed, then tell the user what was updated.
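
A minimal way to watch the stream from the shell; a sketch in which the app_name value and the new_message shape are assumptions, since the exact message schema isn't documented here:

# -N disables buffering so each data: line appears as the backend emits it
curl -N -s -X POST https://mega-api-prod.nemovideo.ai/run_sse \
  -H "Authorization: Bearer $NEMO_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Accept: text/event-stream" \
  -d "{\"app_name\":\"nemo_agent\",\"session_id\":\"$SESSION_ID\",\"new_message\":{\"text\":\"split this footage into scenes\"}}"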

Translating GUI Instructions

The backend responds as if there's a visual interface. Map its instructions to API calls:

  • "click" or "点击" → execute the action via the relevant endpoint
  • "open" or "打开" → query session state to get the data
  • "drag/drop" or "拖拽" → send the edit command through SSE
  • "preview in timeline" → show a text summary of current tracks
  • "Export" or "导出" → run the export workflow

Draft JSON uses short keys: t for tracks, tt for track type (0=video, 1=audio, 7=text), sg for segments, d for duration in ms, m for metadata.

Example timeline summary:

Timeline (3 tracks):
  1. Video: city timelapse (0-10s)
  2. BGM: Lo-fi (0-10s, 35%)
  3. Title: "Urban Dreams" (0-3s)
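
A hypothetical draft fragment using those keys and roughly matching the summary above (the segment and metadata nesting is an assumption; in practice the draft comes back from the state endpoint):

{
  "t": [
    {"tt": 0, "sg": [{"d": 10000, "m": {"name": "city timelapse"}}]},
    {"tt": 1, "sg": [{"d": 10000, "m": {"name": "Lo-fi BGM", "volume": 0.35}}]},
    {"tt": 7, "sg": [{"d": 3000, "m": {"text": "Urban Dreams"}}]}
  ]
}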

Tips and Tricks

The backend processes faster when you're specific. Instead of "make it look better", try "split this footage into individual scenes and arrange them into a cohesive video" — concrete instructions get better results.

Max file size is 500MB. Stick to MP4, MOV, AVI, WebM for the smoothest experience.

Export as MP4 with H.264 codec for the best balance of quality and file size.

Common Workflows

Quick edit: Upload → "split this footage into individual scenes and arrange them into a cohesive video" → Download MP4. Takes 1-2 minutes for a 30-second clip.

Batch style: Upload multiple files in one session. Process them one by one with different instructions. Each gets its own render.

Iterative: Start with a rough cut, preview the result, then refine. The session keeps your timeline state so you can keep tweaking.