```shell
git clone https://github.com/GeorgeDoors888/GB-Power-Market-JJ
T=$(mktemp -d) && git clone --depth=1 https://github.com/GeorgeDoors888/GB-Power-Market-JJ "$T" && mkdir -p ~/.claude/skills && cp -r "$T/openclaw-skills/skills/albertsalgueda/krea" ~/.claude/skills/georgedoors888-gb-power-market-jj-krea && rm -rf "$T"
T=$(mktemp -d) && git clone --depth=1 https://github.com/GeorgeDoors888/GB-Power-Market-JJ "$T" && mkdir -p ~/.openclaw/skills && cp -r "$T/openclaw-skills/skills/albertsalgueda/krea" ~/.openclaw/skills/georgedoors888-gb-power-market-jj-krea && rm -rf "$T"
```
# Krea AI — Image, Video & Enhancement Generation
Generate images, videos, upscale/enhance images, and train LoRA styles using the Krea.ai API. Supports 20+ image models (Flux, Imagen, GPT Image, Ideogram, Seedream...), 7 video models (Kling, Veo, Hailuo, Wan), and 3 upscalers (Topaz up to 22K).
IMPORTANT: Do NOT invent model names. Before generating, run `list_models.py` to get the live list of available models from the Krea API. The tables below are a reference, but the API may have newer models. Scripts also accept full endpoint paths from `list_models.py --json` output (e.g. `--model /generate/image/google/imagen-4-ultra`).
## Usage
Run scripts using absolute paths (do NOT cd to the skill directory first):
Generate image:

```shell
uv run ~/.codex/skills/krea/scripts/generate_image.py --prompt "your description" --filename "output.png" [--model flux] [--width 1024] [--height 1024] [--api-key KEY]
```

Generate video:

```shell
uv run ~/.codex/skills/krea/scripts/generate_video.py --prompt "your description" --filename "output.mp4" [--model kling-2.5] [--duration 5] [--aspect-ratio 16:9] [--api-key KEY]
```

Enhance/upscale image:

```shell
uv run ~/.codex/skills/krea/scripts/enhance_image.py --image-url "https://..." --filename "upscaled.png" --width 4096 --height 4096 [--enhancer topaz] [--api-key KEY]
```

List available models:

```shell
uv run ~/.codex/skills/krea/scripts/list_models.py [--type image|video|enhance]
```

Run a multi-step pipeline:

```shell
uv run ~/.codex/skills/krea/scripts/pipeline.py --pipeline pipeline.json [--api-key KEY]
```

Check job status:

```shell
uv run ~/.codex/skills/krea/scripts/get_job.py --job-id "uuid" [--api-key KEY]
```
Important: Always run from the user's current working directory so files are saved where the user is working.
## Default Workflow (draft → iterate → final)
Goal: fast iteration without burning CU on expensive models until the prompt is right.
- Draft (cheap/fast): use `--model flux` or `--model z-image` (3-5 CU, ~5s)

  ```shell
  uv run ~/.codex/skills/krea/scripts/generate_image.py --prompt "<draft prompt>" --filename "yyyy-mm-dd-hh-mm-ss-draft.png" --model flux
  ```

- Iterate: adjust the prompt and keep trying with cheap models
- Final (high quality): switch to `--model nano-banana-pro` or `--model gpt-image`

  ```shell
  uv run ~/.codex/skills/krea/scripts/generate_image.py --prompt "<final prompt>" --filename "yyyy-mm-dd-hh-mm-ss-final.png" --model nano-banana-pro
  ```
## Image Models (sorted by cost)
| Model | CU | Time | Best for |
|---|---|---|---|
| `z-image` | 3 | ~5s | Fastest, cheapest |
| `flux` | 5 | ~5s | Fast, LoRA support (default) |
|  | 9 | ~5s | Context-aware editing |
|  | 9 | ~15s | Good quality, cheap |
|  | 16 | ~17s | Fast Google model |
|  | 20 | ~8s | Good typography |
|  | 24 | ~20s | Photorealistic |
|  | 28 | ~20s | Latest Seedream |
|  | 31 | ~11s | High quality Flux |
|  | 32 | ~10s | Balanced |
| `imagen-3` | 32 | ~32s | Google Imagen 3 |
| `imagen-4` | 32 | ~32s | Google Imagen 4 |
|  | 40 | ~60s | Needs reference images |
|  | 47 | ~18s | Best Flux |
| `imagen-4-ultra` | 47 | ~30s | Best Imagen |
|  | 48 | ~15s | Fast photorealism |
|  | 54 | ~18s | Best text rendering |
|  | 119 | ~30s | Superior photorealism |
|  | 184 | ~60s | Highest quality overall |
Map user requests:
- "fast", "quick", "cheap" → `flux` or `z-image`
- "high quality", "best" → `nano-banana-pro` or `gpt-image`
- "text in image", "typography" → `ideogram-3`
- "photorealistic" → `seedream-4` or `nano-banana-pro`
- No preference → `flux`
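The keyword→model routing above can be sketched as a small lookup helper. This is illustrative only: `pick_image_model` is a hypothetical name, and the assignment of each keyword within an "X or Y" pair (e.g. "cheap" → `z-image`) is an assumption, not part of the skill's scripts.

```python
# Sketch of the request→model routing described above (illustrative;
# the keyword-to-model split within each "or" pair is an assumption).
KEYWORD_TO_MODEL = {
    "fast": "flux", "quick": "flux", "cheap": "z-image",
    "high quality": "nano-banana-pro", "best": "gpt-image",
    "text in image": "ideogram-3", "typography": "ideogram-3",
    "photorealistic": "seedream-4",
}

def pick_image_model(request: str) -> str:
    """Return the first matching model for a user request, else the default."""
    lowered = request.lower()
    for keyword, model in KEYWORD_TO_MODEL.items():
        if keyword in lowered:
            return model
    return "flux"  # no preference → default model
```

For example, a request mentioning "photorealistic" routes to `seedream-4`, while a request with no recognized keyword falls through to the `flux` default.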
## Video Models
| Model | CU | Best for |
|---|---|---|
|  | 282 | Camera control, 5-10s |
|  | varies | Complex scenes |
| `kling-2.5` | varies | Realistic physics (default) |
| `veo-3` | 608-1281 | Can generate audio |
| `veo-3.1` | varies | Cinematic quality |
| `hailuo-2.3` | varies | Fast, smooth motion |
|  | 569 | Style control |
Map user requests:
- "fast" → `hailuo-2.3`
- "cinematic", "high quality" → `veo-3.1`
- "with sound", "with audio" → `veo-3` with `--generate-audio`
- No preference → `kling-2.5`
## Enhancers
| Enhancer | CU | Max Resolution | Best for |
|---|---|---|---|
| `topaz` | 51 | 22K | Faithful upscaling (default) |
| `topaz-generative` | 137 | 16K | Creative enhancement |
| `topaz-bloom` | 256 | 10K | Adding creative details |
## Image Generation Parameters
| Param | Description | Default |
|---|---|---|
| `--model` | Model ID from table above | `flux` |
| `--prompt` | Text description (required) | — |
| `--filename` | Output filename (required) | — |
| `--width` | Width in pixels (512-4096) | 1024 |
| `--height` | Height in pixels (512-4096) | 1024 |
| `--aspect-ratio` | Aspect ratio (1:1, 16:9, 9:16, 4:3, 3:2, etc.) | — |
| `--resolution` | 1K, 2K, 4K (nano-banana models) | — |
| `--seed` | Seed for reproducibility | — |
| `--image-url` | Input image URL for image-to-image | — |
|  | LoRA style ID to apply | — |
|  | LoRA strength (-2 to 2) | 1.0 |
| `--batch-size` | Number of images (1-4) | 1 |
|  | Inference steps, 1-100 (flux models) | 25 |
|  | Guidance scale, 0-24 (flux models) | 3 |
| `--quality` | low/medium/high/auto (gpt-image) | auto |
| `--api-key` | Krea API token | — |
## Video Generation Parameters
| Param | Description | Default |
|---|---|---|
| `--model` | Model ID from table above | `kling-2.5` |
| `--prompt` | Text description (required) | — |
| `--filename` | Output filename (required) | — |
| `--duration` | Duration in seconds | 5 |
| `--aspect-ratio` | 16:9, 9:16, 1:1 | 16:9 |
| `--start-image` | URL for image-to-video | — |
|  | End frame URL (kling only) | — |
|  | 720p, 1080p (veo only) | 720p |
|  | std, pro (kling only) | std |
| `--generate-audio` | Generate audio (veo-3 only) | false |
| `--api-key` | Krea API token | — |
## Enhancement Parameters
| Param | Description | Default |
|---|---|---|
| `--enhancer` | topaz, topaz-generative, topaz-bloom | topaz |
| `--image-url` | Source image URL (required) | — |
| `--filename` | Output filename (required) | — |
| `--width` | Target width (required) | — |
| `--height` | Target height (required) | — |
|  | Sub-model variant | Standard V2 |
| `--creativity` | 1-6 (generative) or 1-9 (bloom) | — |
|  | Enable face enhancement | false |
|  | Sharpening 0-1 | — |
|  | Denoising 0-1 | — |
| `--api-key` | Krea API token | — |
## API Key

Scripts check for the API key in this order:
1. `--api-key` argument (use if the user provided a key in chat)
2. `KREA_API_TOKEN` environment variable

If neither is available, the script exits with an error message.
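The lookup order can be sketched as follows. This is a minimal illustration of the precedence described above; `resolve_api_key` is a hypothetical helper, not the scripts' actual function.

```python
# Sketch of the key-resolution order: --api-key argument first,
# then the KREA_API_TOKEN environment variable, else exit with an error.
import os
import sys

def resolve_api_key(cli_key, env=os.environ):
    if cli_key:                        # --api-key wins when provided
        return cli_key
    token = env.get("KREA_API_TOKEN")  # fall back to the environment
    if token:
        return token
    sys.exit("Error: No API key. Set KREA_API_TOKEN or pass --api-key.")
```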
## Preflight + Common Failures

Preflight:
- `command -v uv` (must exist)
- `test -n "$KREA_API_TOKEN"` (or pass `--api-key`)

Common failures:
- `Error: No API key` → set `KREA_API_TOKEN` or pass `--api-key`
- `402 Insufficient credits` → top up compute units at https://krea.ai/settings/billing
- `402 This model requires a higher plan` → the model needs a paid plan upgrade at https://krea.ai/settings/billing
- `429 Too many requests` → concurrent job limit reached; scripts auto-retry up to 3 times with backoff
- `Job failed` → check the prompt for content moderation issues and try different wording
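The auto-retry behaviour on `429` responses can be sketched as exponential backoff. This is illustrative only: `retry_request`, its `send` callback, and the delay schedule are assumptions, not the scripts' actual implementation.

```python
# Illustrative retry-with-backoff for 429 responses, assuming up to
# 3 retries as described above (retry_request is hypothetical).
import time

def retry_request(send, max_retries=3, base_delay=1.0):
    """Call send() until a non-429 response or retries are exhausted."""
    for attempt in range(max_retries + 1):
        status, body = send()
        if status != 429:
            return status, body
        if attempt < max_retries:
            time.sleep(base_delay * (2 ** attempt))  # e.g. 1s, 2s, 4s
    return status, body
```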
## Filename Generation

Generate filenames with the pattern `yyyy-mm-dd-hh-mm-ss-name.ext`:
- Timestamp: current date/time in `yyyy-mm-dd-hh-mm-ss` (24h format)
- Name: descriptive lowercase text with hyphens (1-5 words)
- Extension: `.png` for images, `.mp4` for videos

Examples:
- Prompt "A cyberpunk cat" → `2025-11-23-14-23-05-cyberpunk-cat.png`
- Prompt "waves on a beach" → `2025-11-23-15-30-12-beach-waves.mp4`
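The convention above can be sketched mechanically. Note that `make_filename` is a hypothetical helper (the agent normally composes the name by hand, dropping filler words like articles, which this mechanical slug does not do).

```python
# Sketch of the filename convention: timestamp + short hyphenated slug.
import re
from datetime import datetime

def make_filename(name, ext, now=None):
    ts = (now or datetime.now()).strftime("%Y-%m-%d-%H-%M-%S")
    # lowercase, replace runs of non-alphanumerics with hyphens
    slug = re.sub(r"[^a-z0-9]+", "-", name.lower()).strip("-")
    slug = "-".join(slug.split("-")[:5])  # keep at most 5 words
    return f"{ts}-{slug}{ext}"
```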
## Prompt Handling

For generation: pass the user's description as-is to `--prompt`. Only rework it if it is clearly insufficient.

For image-to-image: use `--image-url` with the source image and describe the desired transformation in `--prompt`.

For video from image: use `--start-image` with the source image and describe the desired motion/action in `--prompt`.
Preserve user's creative intent in all cases.
## Output

- Scripts download the result and save it to the current directory
- Each script prints the full path to the generated file
- Do not read the image/video back — just inform the user of the saved path
- If `--batch-size` > 1, files are saved as `name-1.png`, `name-2.png`, etc.
## Examples

Quick draft image:

```shell
uv run ~/.codex/skills/krea/scripts/generate_image.py --prompt "A serene Japanese garden with cherry blossoms" --filename "2025-11-23-14-23-05-japanese-garden.png" --model flux
```

High quality final:

```shell
uv run ~/.codex/skills/krea/scripts/generate_image.py --prompt "A serene Japanese garden with cherry blossoms, golden hour lighting" --filename "2025-11-23-14-25-30-japanese-garden-final.png" --model nano-banana-pro --resolution 4K
```

Image-to-image edit:

```shell
uv run ~/.codex/skills/krea/scripts/generate_image.py --prompt "transform to watercolor painting style" --filename "2025-11-23-14-30-00-watercolor.png" --image-url "https://example.com/photo.jpg" --model nano-banana-pro
```

Generate video:

```shell
uv run ~/.codex/skills/krea/scripts/generate_video.py --prompt "A majestic eagle soaring over snow-capped mountains at sunrise" --filename "2025-11-23-15-00-00-eagle-mountains.mp4" --model veo-3 --duration 8 --generate-audio
```

Upscale image to 4K:

```shell
uv run ~/.codex/skills/krea/scripts/enhance_image.py --image-url "https://example.com/photo.jpg" --filename "2025-11-23-15-10-00-upscaled.png" --width 4096 --height 4096 --enhancer topaz
```

List models:

```shell
uv run ~/.codex/skills/krea/scripts/list_models.py --type image
```
## Pipelines (Multi-Step Workflows)

Use `pipeline.py` to chain multiple steps automatically. Each step's output feeds into the next. Write a JSON pipeline and run it in one command.

### Pipeline JSON Format

```json
{
  "steps": [
    {
      "action": "generate_image | generate_video | enhance | fan_out",
      "model": "model-id",
      "prompt": "...",
      "filename": "base-name",
      "use_previous": true,
      "...": "any other model-specific params"
    }
  ]
}
```
Actions:
- `generate_image` — generate an image
- `generate_video` — generate a video
- `enhance` — upscale/enhance an image
- `fan_out` — run a sub-step for EACH result from the previous step (branching)

Special fields:
- `use_previous: true` — use the output URL(s) from the previous step as input
- `fan_out` has a `step` field containing the template to run per source URL
- In `fan_out` prompts, `{i}` is replaced with the iteration number (1, 2, 3...)
### Example: Generate → 4 Angles → 4 Videos

Generate a concept image, create 4 angle variations, then animate each one:

```json
{
  "steps": [
    {
      "action": "generate_image",
      "model": "flux",
      "prompt": "a red sports car on an empty highway, golden hour, cinematic",
      "filename": "car-concept"
    },
    {
      "action": "fan_out",
      "use_previous": true,
      "step": {
        "action": "generate_image",
        "model": "gpt-image",
        "prompt": "same red sports car, angle {i} of 4: front three-quarter view at angle {i}, professional automotive photography, studio lighting, white background",
        "filename": "car-angle-{i}"
      }
    },
    {
      "action": "fan_out",
      "use_previous": true,
      "step": {
        "action": "generate_video",
        "model": "kling-2.5",
        "prompt": "the red sports car slowly rotates on a turntable, smooth motion, studio lighting",
        "duration": 5,
        "filename": "car-spin-{i}"
      }
    }
  ]
}
```

Run it:

```shell
uv run ~/.codex/skills/krea/scripts/pipeline.py --pipeline car-pipeline.json
```
### Example: Generate → Upscale → Animate with Audio

```json
{
  "steps": [
    {
      "action": "generate_image",
      "model": "nano-banana-pro",
      "prompt": "a majestic dragon perched on a cliff overlooking a stormy ocean",
      "filename": "dragon"
    },
    {
      "action": "enhance",
      "use_previous": true,
      "enhancer": "topaz-generative",
      "width": 4096,
      "height": 4096,
      "creativity": 3,
      "filename": "dragon-4k"
    },
    {
      "action": "generate_video",
      "use_previous": true,
      "model": "veo-3",
      "prompt": "the dragon spreads its wings and roars, lightning strikes, waves crash below, epic cinematic",
      "duration": 8,
      "generateAudio": true,
      "filename": "dragon-epic"
    }
  ]
}
```
### Example: Product Photography Pipeline

Generate hero shot → 4 style variations → upscale all:

```json
{
  "steps": [
    {
      "action": "generate_image",
      "model": "gpt-image",
      "prompt": "minimalist perfume bottle, frosted glass, on marble surface, soft studio lighting, product photography",
      "quality": "high",
      "filename": "perfume-hero"
    },
    {
      "action": "fan_out",
      "use_previous": true,
      "step": {
        "action": "generate_image",
        "model": "gpt-image",
        "prompt": "same perfume bottle, variation {i}: 1=morning light with flowers, 2=dark moody with smoke, 3=underwater with bubbles, 4=floating in clouds",
        "filename": "perfume-mood-{i}"
      }
    },
    {
      "action": "fan_out",
      "use_previous": true,
      "step": {
        "action": "enhance",
        "enhancer": "topaz",
        "width": 4096,
        "height": 4096,
        "filename": "perfume-final-{i}"
      }
    }
  ]
}
```
### Example: Inline Pipeline (no JSON file needed)

For quick pipelines, pass JSON directly:

```shell
uv run ~/.codex/skills/krea/scripts/pipeline.py --pipeline '{"steps":[{"action":"generate_image","model":"flux","prompt":"a cat astronaut","filename":"cat"},{"action":"enhance","use_previous":true,"enhancer":"topaz","width":4096,"height":4096,"filename":"cat-4k"}]}'
```
### Building Pipelines for Users

When a user asks for something complex like "generate a product shot from 4 angles and make videos of each":
- Write a pipeline JSON with the right steps
- Save it to a `.json` file in the current directory
- Run it with `pipeline.py --pipeline file.json`
- Show the user the saved file paths when done

Tips:
- Use `fan_out` to branch — it runs the sub-step once per result from the previous step
- `use_previous: true` chains steps automatically
- Start cheap (`flux`) for concepts, then switch to quality models (`gpt-image`, `nano-banana-pro`) for finals
- Upscale as the last step before video — cheaper and better quality
- Use `{i}` in prompts inside `fan_out` to vary the prompt per iteration
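The branching mechanism can be sketched as follows: a `fan_out` step runs its `step` template once per result URL from the previous step, substituting `{i}`. This is an illustrative model only; `expand_fan_out` and the `image_url` field name are assumptions, not `pipeline.py`'s actual internals.

```python
# Sketch of fan_out expansion: one job per previous-step result URL,
# with {i} replaced by the 1-based iteration number (illustrative only).
def expand_fan_out(template, previous_urls):
    jobs = []
    for i, url in enumerate(previous_urls, start=1):
        job = {k: v.replace("{i}", str(i)) if isinstance(v, str) else v
               for k, v in template.items()}
        job["image_url"] = url  # each branch consumes one source URL (assumed field)
        jobs.append(job)
    return jobs
```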
### Pipeline Advanced Features

Template variables — use `{{variable}}` in pipeline JSON and pass values at runtime:

```shell
uv run ~/.codex/skills/krea/scripts/pipeline.py --pipeline template.json --var subject="red sports car" --var style="cinematic"
```

The pipeline JSON can then use `{{subject}}` and `{{style}}` anywhere in prompts, filenames, etc. All variables must be provided or the pipeline exits with an error.
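The substitution with fail-fast behaviour can be sketched like this. `render_template` is a hypothetical helper, not `pipeline.py`'s actual code.

```python
# Sketch of {{variable}} substitution: every placeholder must have a
# value, otherwise the run aborts (illustrative only).
import re

def render_template(text, variables):
    def sub(match):
        name = match.group(1)
        if name not in variables:
            raise SystemExit(f"Missing pipeline variable: {name}")
        return variables[name]
    return re.sub(r"\{\{(\w+)\}\}", sub, text)
```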
Parallel fan_out — add `"parallel": true` to a `fan_out` step to run all sub-jobs concurrently:

```json
{
  "action": "fan_out",
  "use_previous": true,
  "parallel": true,
  "step": {
    "action": "enhance",
    "enhancer": "topaz",
    "width": 4096,
    "height": 4096,
    "filename": "upscaled-{i}"
  }
}
```

Control concurrency with `--max-parallel N` (default: 3).
Resume interrupted pipelines — use `--resume` to skip already-completed steps. The pipeline saves a `.pipeline-state.json` manifest after each step, recording result URLs. On resume, `use_previous` chains are restored from the manifest.
Dry-run — use `--dry-run` to estimate CU cost without executing.
Notifications — use `--notify` to get a desktop notification when a pipeline finishes (Linux/macOS).