Skills · free-video-generation-api

install
source · Clone the upstream repo
git clone https://github.com/openclaw/skills
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/openclaw/skills "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/bwbernardweston18/free-video-generation-api" ~/.claude/skills/clawdbot-skills-free-video-generation-api && rm -rf "$T"
manifest: skills/bwbernardweston18/free-video-generation-api/SKILL.md
source content

Getting Started

Share your text prompts or images and I'll get started on AI video generation. Or just tell me what you're thinking.

Try saying:

  • "generate my text prompts or images"
  • "export 1080p MP4"
  • "generate a short video clip from"

First-Time Connection

When a user first opens this skill, connect to the processing backend automatically. Briefly let them know (e.g. "Setting up...").

Authentication: Check if NEMO_TOKEN is set in the environment. If it is, skip to step 2.

  1. Obtain a free token: Generate a random UUID as the client identifier. POST to https://mega-api-prod.nemovideo.ai/api/auth/anonymous-token with the header X-Client-Id set to that UUID. The response field data.token is your NEMO_TOKEN: 100 free credits, valid for 7 days.
  2. Create a session: POST to https://mega-api-prod.nemovideo.ai/api/tasks/me/with-session/nemo_agent with Authorization: Bearer <token>, Content-Type: application/json, and body {"task_name":"project","language":"<detected>"}. Store the returned session_id for all subsequent requests. A curl sketch of both steps follows this list.
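
A minimal sketch of both steps with curl and jq, assuming uuidgen and jq are available; "en" stands in for the detected language, and whether session_id is returned at the top level or nested under data isn't specified here, so adjust the jq path if needed:

```bash
BASE=https://mega-api-prod.nemovideo.ai
CID=$(uuidgen)                       # random UUID as the client identifier

# Step 1: obtain a free anonymous token (100 credits, valid 7 days)
NEMO_TOKEN=$(curl -s -X POST "$BASE/api/auth/anonymous-token" \
  -H "X-Client-Id: $CID" | jq -r '.data.token')

# Step 2: create a session; keep session_id for all later calls
SID=$(curl -s -X POST "$BASE/api/tasks/me/with-session/nemo_agent" \
  -H "Authorization: Bearer $NEMO_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"task_name":"project","language":"en"}' | jq -r '.session_id')
```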

Keep setup communication brief. Don't display raw API responses or token values to the user.

Free Video Generation API — Generate Videos via API

Drop your text prompts or images in the chat and tell me what you need. I'll handle the AI video generation on cloud GPUs — you don't need anything installed locally.

Here's a typical use: you send a text prompt describing a 15-second product demo scene, ask for a short clip generated from that description on the free API tier, and about 30-90 seconds later you've got an MP4 file ready to download. The whole thing renders at 1080p by default.

One thing worth knowing — shorter prompts with clear scene descriptions produce more consistent results on the free tier.

Matching Input to Actions

User prompts referencing free video generation api, aspect ratio, text overlays, or audio tracks get routed to the corresponding action via keyword and intent classification.

User says... → Action

  • "export" / "导出" / "download" / "send me the video" → §3.5 Export
  • "credits" / "积分" / "balance" / "余额" → §3.3 Credits
  • "status" / "状态" / "show tracks" → §3.4 State
  • "upload" / "上传" / user sends a file → §3.2 Upload
  • Everything else (generate, edit, add BGM…) → §3.1 SSE

Cloud Render Pipeline Details

Each export job queues on a cloud GPU node that composites video layers, applies platform-spec compression (H.264, up to 1080x1920), and returns a download URL within 30-90 seconds. The session token carries render job IDs, so closing the tab before completion orphans the job.

Skill attribution — read from this file's YAML frontmatter at runtime:

  • X-Skill-Source: free-video-generation-api
  • X-Skill-Version: from the frontmatter version field
  • X-Skill-Platform: detect from the install path (~/.clawhub/ → clawhub, ~/.cursor/skills/ → cursor, else unknown)

Every API call needs Authorization: Bearer <NEMO_TOKEN> plus the three attribution headers above. If any header is missing, exports return 402.
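
One way to keep the calls below readable is to collect the required headers once. A sketch; HDRS, SKILL_VERSION, and SKILL_PLATFORM are illustrative variable names, not part of the API:

```bash
HDRS=(
  -H "Authorization: Bearer $NEMO_TOKEN"
  -H "X-Skill-Source: free-video-generation-api"
  -H "X-Skill-Version: $SKILL_VERSION"      # read from SKILL.md frontmatter
  -H "X-Skill-Platform: $SKILL_PLATFORM"    # clawhub | cursor | unknown
)
```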

API base: https://mega-api-prod.nemovideo.ai

Create session: POST /api/tasks/me/with-session/nemo_agent with body {"task_name":"project","language":"<lang>"}; returns task_id and session_id.

Send message (SSE): POST /run_sse with body {"app_name":"nemo_agent","user_id":"me","session_id":"<sid>","new_message":{"parts":[{"text":"<msg>"}]}} and Accept: text/event-stream. Max timeout: 15 minutes.
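
A curl sketch, continuing from the setup above ($BASE, $SID, HDRS); -N (--no-buffer) keeps events flowing as they arrive, and --max-time enforces the 15-minute cap. The message text is just an example:

```bash
BODY='{"app_name":"nemo_agent","user_id":"me","session_id":"'"$SID"'","new_message":{"parts":[{"text":"generate a short video clip from a text description"}]}}'
curl -sN --max-time 900 -X POST "${HDRS[@]}" "$BASE/run_sse" \
  -H "Content-Type: application/json" \
  -H "Accept: text/event-stream" \
  -d "$BODY"
```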

Upload: POST /api/upload-video/nemo_agent/me/<sid>. For a file, use multipart -F "files=@/path"; for a URL, send {"urls":["<url>"],"source_type":"url"}.

Credits: GET /api/credits/balance/simple; returns available, frozen, total.
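
For example (the response nesting isn't specified here, so the jq path may need a data. prefix):

```bash
curl -s "${HDRS[@]}" "$BASE/api/credits/balance/simple" \
  | jq '{available, frozen, total}'
```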

Session state: GET /api/state/nemo_agent/me/<sid>/latest; key fields: data.state.draft, data.state.video_infos, data.state.generated_media.
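
A sketch that pulls out just the key fields, null-guarded in case a field is absent:

```bash
curl -s "${HDRS[@]}" "$BASE/api/state/nemo_agent/me/$SID/latest" \
  | jq '.data.state | {has_draft: (.draft != null),
                       clips: (.video_infos // [] | length),
                       media: (.generated_media // [] | length)}'
```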

Export (free, no credits): POST /api/render/proxy/lambda with body {"id":"render_<ts>","sessionId":"<sid>","draft":<json>,"output":{"format":"mp4","quality":"high"}}. Poll GET /api/render/proxy/lambda/<id> every 30s until status = completed. Download URL at output.url.
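
A sketch of the submit-and-poll loop, assuming $DRAFT holds the draft JSON taken from session state; the status and output.url paths follow the description above:

```bash
RID="render_$(date +%s)"
REQ=$(jq -n --arg id "$RID" --arg sid "$SID" --argjson draft "$DRAFT" \
  '{id: $id, sessionId: $sid, draft: $draft,
    output: {format: "mp4", quality: "high"}}')

curl -s -X POST "${HDRS[@]}" "$BASE/api/render/proxy/lambda" \
  -H "Content-Type: application/json" -d "$REQ"

while sleep 30; do                      # poll every 30s
  RESP=$(curl -s "${HDRS[@]}" "$BASE/api/render/proxy/lambda/$RID")
  [ "$(jq -r '.status' <<<"$RESP")" = "completed" ] && break
done
jq -r '.output.url' <<<"$RESP"          # the download URL
```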

Supported formats: mp4, mov, avi, webm, mkv, jpg, png, gif, webp, mp3, wav, m4a, aac.

Reading the SSE Stream

Text events go straight to the user (after GUI translation). Tool calls stay internal. Heartbeats and empty data: lines mean the backend is still working; show "⏳ Still working..." every 2 minutes.

About 30% of edit operations close the stream without any text. When that happens, poll /api/state to confirm the timeline changed, then tell the user what was updated.
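
A reader-loop sketch. The event payload schema isn't documented here; the .content.parts[].text path below mirrors the request shape and is an assumption to check against real responses:

```bash
curl -sN --max-time 900 -X POST "${HDRS[@]}" "$BASE/run_sse" \
  -H "Content-Type: application/json" -H "Accept: text/event-stream" \
  -d "$BODY" |
while IFS= read -r line; do
  case "$line" in
    "data:"*)
      payload=${line#data:}; payload=${payload# }
      [ -z "$payload" ] && continue             # heartbeat: still working
      # ASSUMPTION: text parts live at .content.parts[].text
      jq -r '.content.parts[]?.text // empty' <<<"$payload"
      ;;
  esac
done
```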

Translating GUI Instructions

The backend responds as if there's a visual interface. Map its instructions to API calls:

  • "click" or "点击" → execute the action via the relevant endpoint
  • "open" or "打开" → query session state to get the data
  • "drag/drop" or "拖拽" → send the edit command through SSE
  • "preview in timeline" → show a text summary of current tracks
  • "Export" or "导出" → run the export workflow

Draft field mapping: t = tracks, tt = track type (0=video, 1=audio, 7=text), sg = segments, d = duration (ms), m = metadata.
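
A quick way to summarize a draft using those short names, via jq over the session state (a sketch; whether d sits on the track or the segment isn't specified, so it's omitted here):

```bash
curl -s "${HDRS[@]}" "$BASE/api/state/nemo_agent/me/$SID/latest" \
  | jq '.data.state.draft.t[]?                      # t = tracks
        | {tt, segments: (.sg // [] | length)}'     # tt = type, sg = segments
```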

Example timeline summary (3 tracks):

  1. Video: city timelapse (0-10s)
  2. BGM: Lo-fi (0-10s, 35%)
  3. Title: "Urban Dreams" (0-3s)

Error Codes

  • 0: success, continue normally
  • 1001: token expired or invalid; re-acquire via /api/auth/anonymous-token
  • 1002: session not found; create a new one
  • 2001: out of credits; anonymous users get a registration link with ?bind=<id>, registered users top up
  • 4001: unsupported file type; show the accepted formats
  • 4002: file too large; suggest compressing or trimming
  • 400: missing X-Client-Id; generate one and retry
  • 402: free-plan export blocked; a subscription-tier issue, not a credit issue
  • 429: rate limited; wait 30s and retry once
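
A dispatcher sketch keyed on these codes; refresh_token, create_session, and retry_once are hypothetical helpers, not part of the API:

```bash
handle_code() {
  case "$1" in
    0)    ;;                                    # success, continue
    1001) refresh_token ;;                      # re-acquire anonymous token
    1002) create_session ;;                     # stale session, make a new one
    2001) echo "Out of credits" ;;              # offer registration / top-up
    4001) echo "Unsupported file type" ;;       # show accepted formats
    4002) echo "File too large" ;;              # suggest compressing/trimming
    400)  CID=$(uuidgen) ;;                     # regenerate X-Client-Id, retry
    402)  echo "Export blocked on free plan" ;; # check headers / plan tier
    429)  sleep 30 && retry_once ;;             # back off, retry once
  esac
}
```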

Tips and Tricks

The backend processes faster when you're specific. Instead of "make it look better", try "generate a short video clip from a text description using the free API tier" — concrete instructions get better results.

Max file size is 200MB. Stick to MP4, MOV, WebM, GIF for the smoothest experience.

Export as MP4 for widest compatibility across web and mobile platforms.

Common Workflows

Quick edit: Upload → "generate a short video clip from a text description using the free API tier" → Download MP4. Takes 30-90 seconds for a 30-second clip.

Batch style: Upload multiple files in one session. Process them one by one with different instructions. Each gets its own render.

Iterative: Start with a rough cut, preview the result, then refine. The session keeps your timeline state so you can keep tweaking.