free-text-to-image-video

Install

Source · Clone the upstream repo:

git clone https://github.com/openclaw/skills

Claude Code · Install into ~/.claude/skills/:

T=$(mktemp -d) && git clone --depth=1 https://github.com/openclaw/skills "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/bwbernardweston18/free-text-to-image-video" ~/.claude/skills/clawdbot-skills-free-text-to-image-video && rm -rf "$T"

Manifest: skills/bwbernardweston18/free-text-to-image-video/SKILL.md

Source content

Getting Started

Got text prompts to work with? Send them over and tell me what you need; I'll take care of the AI video generation.

Try saying:

  • "generate a short descriptive paragraph about a sunset over the ocean into a 1080p MP4"
  • "turn my text description into a video with matching images and smooth transitions"
  • "generating videos from written descriptions without any source footage for marketers, content creators, educators"

First-Time Connection

When a user first opens this skill, connect to the processing backend automatically. Briefly let them know (e.g. "Setting up...").

Authentication: Check whether NEMO_TOKEN is set in the environment. If it is, skip straight to step 2.

  1. Obtain a free token: Generate a random UUID as the client identifier. POST to https://mega-api-prod.nemovideo.ai/api/auth/anonymous-token with the header X-Client-Id set to that UUID. The response field data.token is your NEMO_TOKEN (100 free credits, valid for 7 days).
  2. Create a session: POST to https://mega-api-prod.nemovideo.ai/api/tasks/me/with-session/nemo_agent with Authorization: Bearer <token>, Content-Type: application/json, and the body {"task_name":"project","language":"<detected>"}. Store the returned session_id for all subsequent requests.
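
A minimal shell sketch of this flow, assuming curl, uuidgen, and jq are available; the exact JSON nesting of session_id in the response is an assumption:

```bash
# Step 1: mint an anonymous token (data.token, per the steps above).
CLIENT_ID=$(uuidgen)
NEMO_TOKEN=$(curl -s -X POST \
  -H "X-Client-Id: $CLIENT_ID" \
  https://mega-api-prod.nemovideo.ai/api/auth/anonymous-token | jq -r '.data.token')

# Step 2: create a session; the .session_id path is an assumption.
SESSION_ID=$(curl -s -X POST \
  -H "Authorization: Bearer $NEMO_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"task_name":"project","language":"en"}' \
  https://mega-api-prod.nemovideo.ai/api/tasks/me/with-session/nemo_agent | jq -r '.session_id')
```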

Keep setup communication brief. Don't display raw API responses or token values to the user.

Free Text to Image Video — Generate Videos from Text Descriptions

Send me your text prompts and describe the result you want. The AI video generation runs on remote GPU nodes — nothing to install on your machine.

A quick example: upload a short descriptive paragraph about a sunset over the ocean, type "turn my text description into a video with matching images and smooth transitions", and you'll get a 1080p MP4 back in roughly 1-2 minutes. All rendering happens server-side.

Worth noting: shorter, specific prompts produce more accurate visuals than vague long descriptions.

Matching Input to Actions

User prompts referencing free text-to-image-video generation, aspect ratio, text overlays, or audio tracks are routed to the corresponding action via keyword and intent classification; a shell sketch of the routing follows the table.

| User says... | Action | Skip SSE? |
| --- | --- | --- |
| "export" / "导出" / "download" / "send me the video" | §3.5 Export | Yes |
| "credits" / "积分" / "balance" / "余额" | §3.3 Credits | Yes |
| "status" / "状态" / "show tracks" | §3.4 State | Yes |
| "upload" / "上传" / user sends file | §3.2 Upload | Yes |
| Everything else (generate, edit, add BGM…) | §3.1 SSE | No |
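
A hedged sketch of that routing as a shell case statement; the patterns come from the table above, while a real classifier would also weigh intent, not just keywords:

```bash
# Map a user message to an action label (patterns from the routing table).
route() {
  case "$1" in
    *export*|*导出*|*download*|*"send me the video"*) echo "export"  ;;  # §3.5
    *credits*|*积分*|*balance*|*余额*)                 echo "credits" ;;  # §3.3
    *status*|*状态*|*"show tracks"*)                   echo "state"   ;;  # §3.4
    *upload*|*上传*)                                   echo "upload"  ;;  # §3.2
    *)                                                 echo "sse"     ;;  # §3.1 default
  esac
}

route "please export the video"   # -> export
```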

Cloud Render Pipeline Details

Each export job queues on a cloud GPU node that composites the video layers, applies platform-specific compression (H.264, up to 1080x1920), and returns a download URL within 30-90 seconds. The session token carries the render job IDs, so closing the tab before completion orphans the job.

Base URL: https://mega-api-prod.nemovideo.ai

| Endpoint | Method | Purpose |
| --- | --- | --- |
| /api/tasks/me/with-session/nemo_agent | POST | Start a new editing session. Body: {"task_name":"project","language":"<lang>"}. Returns session_id. |
| /run_sse | POST | Send a user message. Body includes app_name, session_id, new_message. Stream the response with Accept: text/event-stream. Timeout: 15 min. |
| /api/upload-video/nemo_agent/me/<sid> | POST | Upload a file (multipart) or a URL. |
| /api/credits/balance/simple | GET | Check remaining credits (available, frozen, total). |
| /api/state/nemo_agent/me/<sid>/latest | GET | Fetch the current timeline state (draft, video_infos, generated_media). |
| /api/render/proxy/lambda | POST | Start an export. Body: {"id":"render_<ts>","sessionId":"<sid>","draft":<json>,"output":{"format":"mp4","quality":"high"}}. Poll status every 30s. |
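
As one example, a hedged sketch of the upload endpoint; only the path and method come from the table, and the multipart field name "file" is an assumption:

```bash
# Upload a local file into the current session (field name "file" is assumed).
curl -s -X POST \
  -H "Authorization: Bearer $NEMO_TOKEN" \
  -F "file=@description.txt" \
  "https://mega-api-prod.nemovideo.ai/api/upload-video/nemo_agent/me/$SESSION_ID"
```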

Accepted file types: mp4, mov, avi, webm, mkv, jpg, png, gif, webp, mp3, wav, m4a, aac.

Headers are derived from this file's YAML frontmatter: X-Skill-Source is free-text-to-image-video, X-Skill-Version comes from the version field, and X-Skill-Platform is detected from the install path (~/.clawhub/ = clawhub, ~/.cursor/skills/ = cursor, otherwise unknown).

All requests must include Authorization: Bearer <NEMO_TOKEN>, X-Skill-Source, X-Skill-Version, and X-Skill-Platform. Missing attribution headers cause export to fail with 402.
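
Putting the full header set together with the export endpoint, a hedged sketch (the earlier sketches omit these headers for brevity; the X-Skill-Version value is illustrative, and DRAFT_JSON is assumed to hold the draft from the latest /api/state response):

```bash
# Required headers on every request (X-Skill-Version value is illustrative).
HDRS=(
  -H "Authorization: Bearer $NEMO_TOKEN"
  -H "X-Skill-Source: free-text-to-image-video"
  -H "X-Skill-Version: 1.0.0"
  -H "X-Skill-Platform: unknown"
)

# Start an export; DRAFT_JSON is assumed to come from /api/state/nemo_agent/me/<sid>/latest.
curl -s -X POST "${HDRS[@]}" \
  -H "Content-Type: application/json" \
  -d '{"id":"render_'"$(date +%s)"'","sessionId":"'"$SESSION_ID"'","draft":'"$DRAFT_JSON"',"output":{"format":"mp4","quality":"high"}}' \
  https://mega-api-prod.nemovideo.ai/api/render/proxy/lambda
```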

Error Handling

| Code | Meaning | Action |
| --- | --- | --- |
| 0 | Success | Continue |
| 1001 | Bad/expired token | Re-auth via anonymous-token (tokens expire after 7 days) |
| 1002 | Session not found | Create a new session (§3.0) |
| 2001 | No credits | Anonymous: show the registration URL with ?bind=<id> (get <id> from the create-session or state response when needed). Registered: "Top up credits in your account" |
| 4001 | Unsupported file | Show the supported formats |
| 4002 | File too large | Suggest compressing or trimming |
| 400 | Missing X-Client-Id | Generate a Client-Id and retry (see §1) |
| 402 | Free-plan export blocked | Subscription-tier issue, NOT credits. "Register or upgrade your plan to unlock export." |
| 429 | Rate limit (1 token/client/7 days) | Retry once after 30s |
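
For the 429 case, a hedged sketch of the single retry (reusing the HDRS array from the export sketch above):

```bash
# Issue a request and retry once on HTTP 429, per the error table.
STATUS=$(curl -s -o /tmp/nemo_resp.json -w '%{http_code}' "${HDRS[@]}" \
  https://mega-api-prod.nemovideo.ai/api/credits/balance/simple)
if [ "$STATUS" = "429" ]; then
  sleep 30
  curl -s "${HDRS[@]}" https://mega-api-prod.nemovideo.ai/api/credits/balance/simple
fi
```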

Reading the SSE Stream

Text events go straight to the user (after GUI translation). Tool calls stay internal. Heartbeats and empty data: lines mean the backend is still working; show "⏳ Still working..." every 2 minutes.

About 30% of edit operations close the stream without emitting any text. When that happens, poll /api/state to confirm the timeline changed, then tell the user what was updated.
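
A hedged sketch of consuming the stream in the shell; the new_message shape and the app_name value are assumptions, and a fuller client would also parse event: fields:

```bash
# Stream /run_sse and forward non-empty data: payloads; blank lines are heartbeats.
curl -sN -X POST "${HDRS[@]}" \
  -H "Content-Type: application/json" \
  -H "Accept: text/event-stream" \
  -d '{"app_name":"nemo_agent","session_id":"'"$SESSION_ID"'","new_message":{"text":"add background music"}}' \
  https://mega-api-prod.nemovideo.ai/run_sse |
while IFS= read -r line; do
  case "$line" in
    "data: "*) printf '%s\n' "${line#data: }" ;;  # text/tool event payload
    "")        : ;;                               # heartbeat / keep-alive
  esac
done
```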

Translating GUI Instructions

The backend responds as if there's a visual interface. Map its instructions to API calls:

  • "click" or "点击" → execute the action via the relevant endpoint
  • "open" or "打开" → query session state to get the data
  • "drag/drop" or "拖拽" → send the edit command through SSE
  • "preview in timeline" → show a text summary of current tracks
  • "Export" or "导出" → run the export workflow

Draft JSON uses short keys: t for tracks, tt for track type (0=video, 1=audio, 7=text), sg for segments, d for duration in ms, and m for metadata.

Example timeline summary:

Timeline (3 tracks):
  1. Video: city timelapse (0-10s)
  2. BGM: Lo-fi (0-10s, 35%)
  3. Title: "Urban Dreams" (0-3s)
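
Under those keys, the same timeline might serialize roughly like this; a hedged sketch where the nesting and the metadata field names inside m are assumptions beyond the documented short keys:

```json
{
  "t": [
    {"tt": 0, "sg": [{"d": 10000, "m": {"name": "city timelapse"}}]},
    {"tt": 1, "sg": [{"d": 10000, "m": {"name": "Lo-fi", "volume": 0.35}}]},
    {"tt": 7, "sg": [{"d": 3000,  "m": {"text": "Urban Dreams"}}]}
  ]
}
```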

Common Workflows

Quick edit: Upload → "turn my text description into a video with matching images and smooth transitions" → Download MP4. Takes 1-2 minutes for a 30-second clip.

Batch style: Upload multiple files in one session. Process them one by one with different instructions. Each gets its own render.

Iterative: Start with a rough cut, preview the result, then refine. The session keeps your timeline state so you can keep tweaking.

Tips and Tricks

The backend processes faster when you're specific. Instead of "make it look better", try "turn my text description into a video with matching images and smooth transitions" — concrete instructions get better results.

Max file size is 500MB. Stick to TXT, DOCX, PDF, or copied text for the smoothest experience.

Export as MP4 for widest compatibility across social platforms and devices.