higgsfield-ai-prompt-skill

Install the skill by cloning the repository (the second command makes a shallow clone directly into your Claude skills directory):

```
git clone https://github.com/OSideMedia/higgsfield-ai-prompt-skill
git clone --depth=1 https://github.com/OSideMedia/higgsfield-ai-prompt-skill ~/.claude/skills/osidemedia-higgsfield-ai-prompt-skill-higgsfield
```

SKILL.md — Higgsfield AI Prompt Skill
Language rule: Reply in whatever language the user writes in.
MANDATORY WORKFLOW — do these in order, every time
For ANY Higgsfield-related request, follow this contract:
- Route the request. Match the user's ask to the routing table in the "Route to the Right Skill" section below, and open the matching sub-skill file(s) with the read tool BEFORE writing any prompt or advice. Do not rely on prior knowledge — platform vocabulary, preset names, and model parameters must be read fresh from the skill files, because this platform's lineup changes between releases.
- Apply MCSLA (Model · Camera · Subject · Look · Action) to every video prompt unless the user explicitly opts out. Full definition lives in `skills/higgsfield-prompt/SKILL.md`.
- Use named platform vocabulary only. Camera names, motion presets, and style tokens must come from `vocab.md` or the relevant sub-skill. Do not invent terms the platform doesn't recognize — invented names silently degrade generations without warning.
- Append shared negative constraints from `skills/shared/negative-constraints.md` before delivering any prompt.
- On the first Higgsfield response in a conversation, state ONE short line naming which sub-skill you're routing to. Example: "Routing to higgsfield-prompt + higgsfield-camera for a cinematic chase." One line only, then proceed with the work.
HARD RULES (do not skip under any circumstances)
- NEVER write a Higgsfield prompt without reading at least `skills/higgsfield-prompt/SKILL.md` first in the current conversation.
- NEVER substitute generic video-prompt vocabulary for named Higgsfield presets.
- NEVER skip MCSLA structure on video prompts.
- NEVER invent model versions, camera presets, or motion preset names. If the user names one you don't see in the skill files, say so and ask for clarification.
- If you find yourself thinking "I already know how to do this from training" — stop. Read the file. Your training data for this platform is stale.
What Is Higgsfield?
Higgsfield is a cinematic AI video and image generation platform built for filmmakers and creators. Unlike single-model tools, Higgsfield hosts multiple generation engines on one platform:

- **Video**: Kling 3.0 / 3.0 Omni / 3.0 Motion Control, Sora 2, Google Veo 3.1 / 3.1 Lite, Wan 2.7 / 2.6 / 2.5, Seedance 2.0 / Pro, Minimax Hailuo 2.3 / 02, Higgsfield DoP (Lite / Standard / Turbo)
- **Image**: Soul 2.0, Soul Cinema Preview, Soul Cast, Nano Banana Pro / 2, Kling Image 3.0 / Omni, Seedream 4.0, GPT Image 1.5, Flux 2 / Kontext

On top of the models, the platform offers a library of 100+ named Motion Presets, the Soul ID character consistency system, Cinema Studio 2.5 and Cinema Studio 3.0 (Business/Team plan) with Soul Cast AI actors, native dual-channel stereo audio, and 80+ one-click Apps.
Workflow
Fast Path — Simple Creative Requests
If the user provides a clear creative intent ("write me a prompt for a car chase at night") with no specific constraints, generate immediately using these sensible defaults:
Fast Path still requires reading `skills/higgsfield-prompt/SKILL.md` first — Fast Path means skip clarifying questions, NOT skip the file read.
| Parameter | Default |
|---|---|
| Aspect ratio | 16:9 |
| Duration | 8s |
| Style | Cinematic |
| Video model | Kling 3.0 (character-focused) or Sora 2 (action/scale) |
| Image model | Soul 2.0 (portrait) or Nano Banana 2 (everything else) |
Do not ask clarifying questions. Deliver a ready-to-paste prompt. Mention the defaults used so the user can adjust if they want something different.
If you did not read `skills/higgsfield-prompt/SKILL.md` earlier in this conversation, read it now before writing the prompt.
Full Path — Production Requests
When the user signals production-grade intent (Cinema Studio, multi-shot, specific model, budget constraints, client work), confirm before generating:
Required:
- Generation type: Image / Video / App (one-click)
- Video duration: 5s / 10s (image-to-video clips are 3–5s; text-to-video up to 10s+)
- Aspect ratio: 16:9 / 9:16 / 1:1 / 4:5 / 4:3 / 2.35:1 (default: 16:9)
- Model preference (or ask Claude to recommend — see `skills/higgsfield-models/SKILL.md`)
Optional (skip if user already provided):
- Visual style: Cinematic / VHS / Super 8MM / Anamorphic / Abstract
- Soul ID character reference (if character consistency needed)
- Reference image for image-to-video
- Motion preset preference
Ask everything in one message — do not split across multiple rounds.
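The required-fields checklist can be expressed as a small preflight check. The field names below mirror the list above but are otherwise hypothetical:

```python
# Hypothetical preflight for a Full Path request: collect every missing
# required field so they can be asked for in ONE message, not across rounds.
REQUIRED = ("generation_type", "duration", "aspect_ratio", "model")

def missing_fields(request: dict) -> list:
    """Return required fields that are still unanswered."""
    return [f for f in REQUIRED if not request.get(f)]

missing_fields({"generation_type": "Video", "aspect_ratio": "16:9"})
# → ["duration", "model"]
```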
Route to the Right Skill
| User wants | Route to |
|---|---|
| User unsure which workspace/tool fits, or asks "what should I use for X" | |
| Write or improve a prompt | `skills/higgsfield-prompt/SKILL.md` + relevant sub-skills |
| Cinematic still image prompt (shot framing, angles) | |
| Choose the right model | `skills/higgsfield-models/SKILL.md` |
| Camera movement guidance (video) | `skills/higgsfield-camera/SKILL.md` |
| Named motion preset (Explosion, Werewolf, etc.) | `skills/higgsfield-motion/SKILL.md` |
| Visual style selection | |
| Character consistency across shots | |
| VFX presets (Air Bending, Plasma, etc.) | |
| One-click App workflow | |
| Genre recipe (action, horror, ad, etc.) | |
| Fix a failing generation | |
| Moodboard, style direction, Soul Hex color | |
| Visual consistency across a project | |
| Mixed Media presets (Noir, Sketch, Particles, etc.) | |
| Artistic style transformation, preset stacking | |
| Higgsfield Assist (GPT-5 copilot) | |
| Credit optimization, plan selection, budget strategy | |
| Cinema Studio 2.5 / Cinema Studio 3.0 (Business/Team) / multi-shot sequence workflow / Soul Cast | |
| Optical physics, camera bodies, lenses, Hero Frame | |
| Elements system (@Characters/@Locations/@Props) | |
| Director Panel, Speed Ramp, shot modes, Popcorn | |
| Cinema Studio 3.0 Smart mode, @ references, native audio | |
| Multi-shot workflow, chaining tools, full production pipeline | |
| Short film, branded content, Popcorn → video → assembly | |
| Vibe Motion, motion graphics, kinetic typography, brand animation | |
| Animated text, logo animation, Remotion-based output | |
| Pre-generation memory check, apply past failure fixes | |
| Audio design, dialogue cues, SFX, ambient sound | |
| Seedance 2.0 / Pro prompt, flagged prompt, credit waste on Seedance | |
Check Templates for Genre Match
Before writing a prompt from scratch, check if the user's request matches a common genre pattern. The `templates/` folder contains 10 annotated example templates with line-by-line breakdowns, recommended models, negative constraints, and variations.
| User request matches | Check template |
|---|---|
| Chase, pursuit, action, parkour | |
| Product, commercial, ad, UGC | |
| Horror, scary, creepy, dread | |
| Fashion, editorial, lookbook | |
| Sci-fi, cyberpunk, VFX, space | |
| Portrait, character intro, close-up | |
| Landscape, nature, establishing shot | |
| Comedy, social media, TikTok, skit | |
| Romance, intimate, couple, wedding | |
| Dance, music, performance, concert | |
Use the template as a starting point — adapt the example prompt to the user's specific request. The annotations explain WHY each element works, helping you make informed substitutions.
Build the Prompt Using the MCSLA Formula
Full MCSLA definition and prompt structure → `skills/higgsfield-prompt/SKILL.md`
Quick summary — five layers, every prompt:
| M | C | S | L | A |
|---|---|---|---|---|
| Model | Camera | Subject | Look | Action |
Core rules:
- Be specific — name camera presets, describe VFX concretely
- Keep prompts under 200 words
- Subject → Action → Camera → Style is the most reliable order
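The five-layer assembly and the two core rules (reliable ordering, under-200-words) can be sketched as a helper. The function name and string layout are assumptions for illustration; the real structure lives in `skills/higgsfield-prompt/SKILL.md`:

```python
# Minimal MCSLA sketch: Subject → Action → Camera → Style body ordering,
# model named up front, word budget enforced. Names are hypothetical.
def build_mcsla_prompt(model, camera, subject, look, action, max_words=200):
    body = f"{subject} {action}. Camera: {camera}. Style: {look}."
    prompt = f"Model: {model}\n{body}"
    if len(prompt.split()) > max_words:
        raise ValueError("Keep prompts under 200 words")
    return prompt

build_mcsla_prompt(
    model="Kling 3.0",
    camera="slow dolly-in",
    subject="A lone driver",
    look="neon noir, rain-slick streets",
    action="grips the wheel as headlights sweep past",
)
```

Note the camera value should be a named preset from `vocab.md`, never an invented term.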
Output Format
Single prompt:
**Model**: [model name]
**Aspect ratio**: [ratio]
**Duration**: [Xs]
**Style**: [style]

[Prompt]

**Camera**: [camera control name]
**Motion preset** (if used): [preset name]
Two versions (when style varies):
### Version 1 — [Style Name]

[Prompt]

---

### Version 2 — [Style Name]

[Prompt]
Output rules:
- Output a clean, ready-to-paste prompt — no meta-commentary after
- Do not explain what every line does unless the user asks
- Always name the camera control and motion preset explicitly
@ Reference Rules
- User uploads image: use `[reference image]` or describe it as "the provided reference"
- For Soul ID character: note "using Soul ID character reference" in the prompt
- For video extension: note "extend from [reference video], continue with..."
- For style transfer: note "match the visual style of [reference image]"
Shared Resources
| Resource | What it contains | When to use |
|---|---|---|
| `skills/shared/negative-constraints.md` | All generation artifacts + prevention phrases, by category | Check before every prompt — append relevant constraints |
| `templates/` | 10 annotated genre templates with examples, models, annotations, variations | When user request matches a common genre — use as starting point |
Sub-Skills (auto-loaded as needed)
| Skill | Trigger |
|---|---|
| User is choosing a workspace / asking "what should I use for X" / hasn't picked a tool yet |
| Any prompt writing or refinement request |
| Cinematic image prompts — shot framing, angles, composition |
| "Which model should I use?" / model comparison |
| Camera movement questions (video) |
| Named preset requests (Explosion, Werewolf, VFX, etc.) |
| Visual style / aesthetic questions |
| Character consistency / Soul ID |
| One-click app recommendations |
| Genre scene templates |
| Failed generations / quality issues |
| Moodboard / Soul Hex / project-level style consistency |
| Artistic preset overlays (Noir, Sketch, Particles, etc.) |
| Higgsfield Assist copilot / credit optimization / plan selection |
| Cinema Studio 2.5 + 3.0 (Business/Team) / Soul Cast / color grading / optical physics / multi-shot / Elements / Smart mode / @ references |
| Multi-shot workflow / tool chaining / full production pipeline |
| Vibe Motion / motion graphics / kinetic typography / brand animation |
| Pre-generation memory check / apply past failure fixes |
| Audio design, dialogue, SFX, ambient sound for audio-capable models |
| Seedance 2.0 / Pro prompt director + content-filter preflight linter |
- Full vocabulary in `vocab.md`
- Full motion preset library in `skills/higgsfield-motion/SKILL.md`
- Model comparison in `model-guide.md`
- Example prompts in `prompt-examples.md`
- Shared negative constraints in `skills/shared/negative-constraints.md`
- Genre-specific annotated templates in `templates/`