# generate-image
Generate article companion images for the content factory pipeline. Use when Codex needs article images, infographic cards, inline visuals, or a PNG exported from an article markdown draft before preview or publishing.
Install by cloning the skills repository and copying the skill directory:

```shell
git clone https://github.com/openclaw/skills

# Install into ~/.claude/skills:
T=$(mktemp -d) && git clone --depth=1 https://github.com/openclaw/skills "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/abigale-cyber/content-system-generate-image" ~/.claude/skills/openclaw-skills-generate-image && rm -rf "$T"

# Install into ~/.openclaw/skills:
T=$(mktemp -d) && git clone --depth=1 https://github.com/openclaw/skills "$T" && mkdir -p ~/.openclaw/skills && cp -r "$T/skills/abigale-cyber/content-system-generate-image" ~/.openclaw/skills/openclaw-skills-generate-image && rm -rf "$T"
```
Source: `skills/abigale-cyber/content-system-generate-image/SKILL.md`

## Generate Image

Generate the image asset for an article draft as an independent executor. On ClawHub, this skill is published as `content-system-generate-image`.
## Quick Start

Run the default command:

```shell
.venv/bin/python -m skill_runtime.cli run-skill generate-image --input content-production/drafts/ai-content-system-article.md
```
## Prepare Source Article
Start from an article markdown draft that already has a stable title, structure, and core message. Use the article content to infer the primary visual theme and the most useful image role, such as header art, infographic card, or inline supporting visual.
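As an illustration, the title-and-theme extraction could be sketched as below. The function name and heuristics are hypothetical, not the skill's actual API; they only show the kind of signal a draft provides:

```python
import re

def extract_theme(markdown_text: str) -> dict:
    """Pull the H1 title and section headings from an article draft.

    Hypothetical helper: the real skill may infer the visual theme
    differently; this only demonstrates the inputs it can draw on.
    """
    lines = markdown_text.splitlines()
    # First "# " line is taken as the stable article title.
    title = next((l[2:].strip() for l in lines if l.startswith("# ")), "")
    # Second-level and deeper headings outline the article's structure.
    headings = [l.lstrip("#").strip() for l in lines if re.match(r"^#{2,}\s", l)]
    return {"title": title, "headings": headings}
```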
The default remote image backend is:

- provider: `openai`
- API base: `https://new.suxi.ai/v1`
- model: `nano-nx`

This targets the 香蕉画图 ("banana image generation") endpoint, which is treated as an OpenAI-compatible image API.
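Since the backend is described as OpenAI-compatible, a request against it might be built as follows. The `/images/generations` path and payload shape follow the OpenAI convention; whether the 香蕉画图 endpoint accepts exactly this shape is an assumption:

```python
import json
import urllib.request

API_BASE = "https://new.suxi.ai/v1"  # default backend from the skill config
MODEL = "nano-nx"

def build_image_request(prompt: str, token: str) -> urllib.request.Request:
    """Build a POST request for an OpenAI-compatible image API.

    The endpoint path and payload fields are assumptions based on the
    OpenAI images convention, not a documented contract of this backend.
    """
    payload = json.dumps({"model": MODEL, "prompt": prompt, "n": 1}).encode()
    return urllib.request.Request(
        f"{API_BASE}/images/generations",
        data=payload,
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
```

Sending the request (e.g. via `urllib.request.urlopen`) is left out here, since the skill's shared runtime handles the actual call.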
## Follow Generation Workflow
- Read the article draft and extract the topic, tone, and the strongest visualizable idea.
- Generate the preferred image through the shared runtime in `skills/generate-image/runtime.py`.
- Fall back to the local infographic renderer when external generation fails or is unavailable.
- Write the exported PNG to the pipeline output path. If `wechat-studio` is involved, let the workbench decide whether and how to ingest the result.
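The fallback step above can be sketched as a try/except around the remote call. The function names are illustrative placeholders, not the shared runtime's actual API:

```python
from pathlib import Path
from typing import Callable

def generate_with_fallback(
    prompt: str,
    out_path: Path,
    remote: Callable[[str], bytes],
    local_card: Callable[[str], bytes],
) -> bool:
    """Try the remote backend first; render a local infographic card on failure.

    Returns True when the remote backend produced the PNG bytes, False
    when the local fallback was used. Both callables stand in for the
    runtime's real generators.
    """
    try:
        png = remote(prompt)
        used_remote = True
    except Exception:  # network or API failure triggers the fallback
        png = local_card(prompt)
        used_remote = False
    out_path.parent.mkdir(parents=True, exist_ok=True)
    out_path.write_bytes(png)
    return used_remote
```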
Per-article overrides are supported through Markdown frontmatter fields: `image_provider`, `image_api_base`, and `image_model`.
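For example, a draft might override the backend in its frontmatter like this (the values shown are illustrative, not required defaults):

```markdown
---
title: AI Content System
image_provider: openai
image_api_base: https://new.suxi.ai/v1
image_model: nano-nx
---

Article body starts here.
```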
## Write Output

Write the primary exported file to:

```
content-production/ready/<slug>-img-1.png
```
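One way that path might be derived from the draft filename is sketched below. Using the draft's file stem as the slug is a simplifying assumption; the real pipeline may slugify differently:

```python
from pathlib import Path

READY_DIR = Path("content-production/ready")

def output_path(draft: Path, index: int = 1) -> Path:
    """Map a draft file to its exported image path.

    Assumes the slug is the draft's stem, e.g.
    drafts/ai-content-system-article.md ->
    content-production/ready/ai-content-system-article-img-1.png
    """
    return READY_DIR / f"{draft.stem}-img-{index}.png"
```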
## Respect Constraints

- External image generation may fail because of network or API issues.
- The skill injects its own provider, base URL, and default model at runtime instead of changing the global `md2wechat` config.
- Users with an existing 香蕉制作平台 ("banana creation platform") account can use it directly.
- Users without one can open job.suxi.ai, generate an `SK`, place it into the token field, and log in.
- When fallback is used, the PNG is still valid but is a local placeholder-style information card.
- Treat `content-production/ready/*.png` as the executor's exported artifact; any workbench copy should be managed by `wechat-studio`, not by this skill.
## Read Related Files

- Shared runtime: `skills/generate-image/runtime.py`
- Pipeline entry: `skill_runtime/engine.py`
- Visual workbench: `skills/wechat-studio/frontend/server.py`
- Execution guide: `docs/generate-image-execution-spec.md`