GB-Power-Market-JJ IMA AI Video Generator

Install

Clone the upstream repo:

```shell
git clone https://github.com/GeorgeDoors888/GB-Power-Market-JJ
```

Claude Code · install into ~/.claude/skills/:

```shell
T=$(mktemp -d) && git clone --depth=1 https://github.com/GeorgeDoors888/GB-Power-Market-JJ "$T" && mkdir -p ~/.claude/skills && cp -r "$T/openclaw-skills/skills/allenfancy-gan/ima-video-ai" ~/.claude/skills/georgedoors888-gb-power-market-jj-ima-ai-video-generator && rm -rf "$T"
```

OpenClaw · install into ~/.openclaw/skills/:

```shell
T=$(mktemp -d) && git clone --depth=1 https://github.com/GeorgeDoors888/GB-Power-Market-JJ "$T" && mkdir -p ~/.openclaw/skills && cp -r "$T/openclaw-skills/skills/allenfancy-gan/ima-video-ai" ~/.openclaw/skills/georgedoors888-gb-power-market-jj-ima-ai-video-generator && rm -rf "$T"
```

Manifest: openclaw-skills/skills/allenfancy-gan/ima-video-ai/SKILL.md
IMA Video AI — Video Generator
For complete API documentation, security details, all parameters, error tables, and Python examples, read SKILL-DETAIL.md.
Model ID Reference (CRITICAL)
Use the exact model_id for the active task_type (the t2v and i2v IDs differ for some models). Do NOT infer it from friendly names.
| Friendly Name | model_id (t2v) | model_id (i2v) | Notes |
|---|---|---|---|
| Wan 2.6 | | | ⚠️ -t2v / -i2v suffix |
| IMA Video Pro (Sevio 1.0) | | | IMA native quality |
| IMA Video Pro Fast | | | Faster iteration |
| Kling O1 | | | ⚠️ video- prefix |
| Kling 2.6 | | | ⚠️ v prefix |
| Hailuo 2.3 | | | ⚠️ MiniMax- prefix |
| Hailuo 2.0 | | | ⚠️ 02 not 2.0 |
| Vidu Q2 | | | ⚠️ i2v often -pro |
| Google Veo 3.1 | | | ⚠️ -generate-preview |
| Sora 2 Pro | | | Content policy strict |
| Pixverse | | | Version via product list |
| SeeDance 1.5 Pro | | | ⚠️ doubao- prefix |
Aliases: 万/Wan → Wan 2.6 · 可灵O1 → kling-video-o1 · 海螺2.3 → MiniMax-Hailuo-2.3 · Veo → veo-3.1-generate-preview · Ima Sevio 1.0 → ima-pro · Ima Sevio 1.0-Fast → ima-pro-fast
Use --list-models --task-type <text_to_video|image_to_video|...> when unsure.
Video Modes (task_type)
| User intent | task_type |
|---|---|
| Text only | text_to_video |
| Image becomes frame 1 | image_to_video |
| Image is visual reference (not frame 1) | reference_image_to_video |
| Two images: first + last frame | first_last_frame_to_video |
If ima-knowledge-ai is installed, read references/video-modes.md and visual-consistency.md when the user needs continuity across shots or references a previous image.
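The mode table above amounts to a small decision rule on the image inputs. A minimal sketch (function name and parameters are hypothetical, not part of the skill's scripts):

```python
def choose_task_type(n_images: int, image_is_reference: bool = False) -> str:
    """Map user intent to task_type per the video-modes table:
    image count plus whether a single image is a visual reference
    rather than frame 1."""
    if n_images == 0:
        return "text_to_video"
    if n_images == 1:
        return "reference_image_to_video" if image_is_reference else "image_to_video"
    if n_images == 2:
        return "first_last_frame_to_video"
    raise ValueError("expected 0-2 input images")
```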
Visual Consistency (IMPORTANT)
- Text-only generation cannot reliably keep the same character or scene across runs.
- For "same character / sequel / storyboard" requests (同一个角色 / 续集 / 分镜): use image modes with the prior result (or a reference image), not text_to_video alone.
Model Selection Priority
1. User explicit preference (saved in ima_prefs.json only when the user clearly picks a model)
2. ima-knowledge-ai (if installed)
3. Fallback defaults (see SKILL-DETAIL.md for the full table)
| Task | Default (fallback) | model_id |
|---|---|---|
| text_to_video | Wan 2.6 | wan2.6-t2v |
| image_to_video | Wan 2.6 | wan2.6-i2v |
| first_last_frame_to_video | Kling O1 | kling-video-o1 |
| reference_image_to_video | Kling O1 | kling-video-o1 |
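The fallback selection above can be sketched as a plain lookup. The model_id values are the ones this skill's own examples and aliases state; the reference_image_to_video id is an assumption, since the alias 可灵O1 → kling-video-o1 is not task-type-specific:

```python
# Fallback defaults per the table above. The reference_image_to_video id
# is assumed to match Kling O1's documented alias (kling-video-o1).
FALLBACK_MODELS = {
    "text_to_video": "wan2.6-t2v",
    "image_to_video": "wan2.6-i2v",
    "first_last_frame_to_video": "kling-video-o1",
    "reference_image_to_video": "kling-video-o1",
}

def default_model(task_type: str) -> str:
    """Return the fallback model_id for a task_type, or raise on unknown types."""
    try:
        return FALLBACK_MODELS[task_type]
    except KeyError:
        raise ValueError(f"unknown task_type: {task_type}") from None
```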
Script Usage

```shell
# Text to video
python3 {baseDir}/scripts/ima_video_create.py \
  --api-key $IMA_API_KEY \
  --task-type text_to_video \
  --model-id wan2.6-t2v \
  --prompt "a puppy runs across a sunny meadow, cinematic" \
  --user-id {user_id} \
  --output-json

# Image to video (URLs or local paths; the script uploads local files)
python3 {baseDir}/scripts/ima_video_create.py \
  --api-key $IMA_API_KEY \
  --task-type image_to_video \
  --model-id wan2.6-i2v \
  --prompt "camera slowly zooms in" \
  --input-images https://example.com/photo.jpg \
  --user-id {user_id} \
  --output-json

# First–last frame
python3 {baseDir}/scripts/ima_video_create.py \
  --api-key $IMA_API_KEY \
  --task-type first_last_frame_to_video \
  --model-id kling-video-o1 \
  --prompt "smooth transition" \
  --input-images https://example.com/first.jpg https://example.com/last.jpg \
  --user-id {user_id} \
  --output-json
```
Sending Results to User

```python
video_url = json_output["url"]
message(action="send", media=video_url, caption="✅ Video generated!\n• Model: [Name]\n• Time: [X]s\n• Credits: [N pts]\n\n🔗 Original link: [url]")
```

Never download to a local path for media — use the HTTPS URL from the API.
UX Protocol (Brief)
- Pre-generation: model name · estimated time range · credits
- Progress: poll ~8s; update user every 30–60s; cap % at 95 until done
- Success: send media=video_url, then optional text with the link for copy/share
- Failure: plain-language reason + 1–2 alternate models — never raw API errors. Full error table in SKILL-DETAIL.md.

Never say to users: script names, endpoints, attribute_id, or internal field names.
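The progress rule above (cap displayed progress at 95% until the task actually finishes) can be sketched as a tiny helper; the function name is hypothetical:

```python
def display_percent(raw_percent: float, done: bool) -> int:
    """Cap displayed progress at 95% until the task is done, so users
    never see a premature or stuck 100% during polling."""
    if done:
        return 100
    return min(int(raw_percent), 95)
```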
Environment
- Base URL: https://api.imastudio.com
- Headers: Authorization: Bearer $IMA_API_KEY · x-app-source: ima_skills · x_app_language: en
- Image upload (when needed): imapi.liveme.com (same provider; see SKILL-DETAIL.md)
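A minimal sketch of building these headers, assuming the API key comes from the $IMA_API_KEY environment variable as the examples show (the helper function itself is hypothetical):

```python
import os

def ima_headers(language: str = "en") -> dict[str, str]:
    """Build the request headers listed in the Environment section.
    Note the mixed separators: x-app-source uses hyphens while
    x_app_language uses underscores, exactly as documented."""
    return {
        "Authorization": f"Bearer {os.environ['IMA_API_KEY']}",
        "x-app-source": "ima_skills",
        "x_app_language": language,
    }
```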
Core Flow
1. GET /open/v1/product/list?app=ima&platform=web&category=<task_type> → attribute_id, credit, model_version, form_config
2. Image tasks: ensure public HTTPS URLs (the script handles local upload)
3. POST /open/v1/tasks/create → task_id
4. POST /open/v1/tasks/detail → poll every 8s; timeout up to ~40 min, as documented in the detail file

MANDATORY: Always query the product list first; a wrong or stale attribute_id causes create failures.
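The polling step above can be sketched as a loop with the documented 8-second interval and ~40-minute ceiling. The fetch_detail callable and the "status" values here are assumptions for illustration; the real response schema for POST /open/v1/tasks/detail is in SKILL-DETAIL.md:

```python
import time

def poll_task(fetch_detail, interval_s: float = 8.0, timeout_s: float = 40 * 60,
              sleep=time.sleep, now=time.monotonic):
    """Poll the tasks/detail endpoint every ~8s until a terminal state
    or the ~40-minute ceiling. fetch_detail() is caller-supplied and
    returns the parsed task-detail dict; the 'status' key and its
    'success'/'failed' values are assumed, not documented here."""
    deadline = now() + timeout_s
    while True:
        detail = fetch_detail()
        if detail.get("status") in ("success", "failed"):
            return detail
        if now() >= deadline:
            raise TimeoutError("task did not finish within the polling window")
        sleep(interval_s)
```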
User Preference Memory
Path: ~/.openclaw/memory/ima_prefs.json
Save when the user explicitly chooses a default model; clear when they ask for "recommend / auto / the best" (推荐 / 自动 / 最好的). Do not save auto-picked models as a preference.
Polling & Timing (summary)
| Kind | Poll interval | Typical wait |
|---|---|---|
| Most models | 8s | ~1–6 min |
| Heavy models (e.g. Kling O1, Sora Pro, Veo) | 8s | longer; see SKILL-DETAIL.md table |
Sora 2 Pro (brief)
Strict safety: avoid people, celebrities, and IP in prompts; prefer landscapes/abstract/safe subjects — details in SKILL-DETAIL.md.