one-of-a-kind-design
git clone https://github.com/srinitude/one-of-a-kind-design-skill
T=$(mktemp -d) && git clone --depth=1 https://github.com/srinitude/one-of-a-kind-design-skill "$T" && mkdir -p ~/.claude/skills && cp -r "$T/.claude/skills/one-of-a-kind-design" ~/.claude/skills/srinitude-one-of-a-kind-design-skill-one-of-a-kind-design && rm -rf "$T"
.claude/skills/one-of-a-kind-design/SKILL.md

One-of-a-Kind Design
Instructions
Step 0: Setup (first run only)
CRITICAL: On first invocation, install dependencies before anything else:
bun run .claude/skills/one-of-a-kind-design/scripts/setup.ts
This installs all runtime dependencies (Mastra, fal.ai, E2B, Effect, pixelmatch, etc.) and creates a .env template. The user must fill in API keys before generation will work.
Skip this step if node_modules/@mastra/core already exists.
Step 1: Parse the Request
The user invokes the skill with /one-of-a-kind-design followed by their request.
Interactive mode (default):
/one-of-a-kind-design Design a site for my omakase restaurant
Headless mode (no human input, for CI/CD or batch processing):
/one-of-a-kind-design --print Design a site for my omakase restaurant
The Mastra workflow automatically:
- Extracts output type, industry, mood, audience from the intent
- Computes specificity (0-7); if below 3, asks the user 1-3 clarifying questions
- Resolves style from taxonomy with convention-breaking detection
Expected output: streaming progress events for each pipeline step.
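A minimal sketch of the specificity check, assuming one point per extracted facet; the field names are illustrative, not the workflow's actual schema:

```typescript
// Hypothetical 0-7 specificity score: one point per facet the
// intent parser manages to extract. Field names are assumptions.
interface ParsedIntent {
  outputType?: string;
  industry?: string;
  mood?: string;
  audience?: string;
  palette?: string;
  constraints?: string[]; // e.g. "no food photos"
  references?: string[];  // e.g. named styles or brands
}

function specificity(intent: ParsedIntent): number {
  const facets = [
    intent.outputType,
    intent.industry,
    intent.mood,
    intent.audience,
    intent.palette,
    intent.constraints?.length ? "present" : undefined,
    intent.references?.length ? "present" : undefined,
  ];
  return facets.filter(Boolean).length; // 0-7
}

// Two facets extracted: below 3, so clarifying questions would fire.
const score = specificity({ outputType: "website", industry: "restaurant" });
```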
Step 2: Review Pipeline Output
The workflow streams progress for each step:
- resolve-style -- Maps intent to style config from 65+ styles in the taxonomy
- select-models -- Picks optimal fal.ai endpoints based on style affinity
- craft-prompt -- Agent-powered prompt crafting with style tokens, palette hex codes, composition directives
- generate-artifact -- fal.ai generation (Flux Pro, Seedance 2.0, Kling, Recraft, etc.)
- post-process -- E2B sandbox processing (sharp, potrace, SVGO optimization)
- verify -- 4-layer verification (pixelmatch, SSIM, pHash, uniqueness)
- score-quality -- LLaVA 13B vision scoring, 10 weighted sub-scores, composite 0-10
Hero asset archetypes are auto-selected based on style motion_signature:
| Archetype | When | Pipeline |
|---|---|---|
| Panning Scene | cinematic, editorial | Video camera choreography |
| Parallax Depth Stack | layered, atmospheric | Image generation plus depth estimation |
| Generative Canvas | algorithmic styles | Image generation |
| 3D Object Showcase | product, isometric | 3D mesh generation |
| Typographic Statement | editorial, swiss | Image generation |
| Photographic Drama | double-exposure, infrared | Image generation |
| SVG Vector Graphic | logos, icons, decorative | QuiverAI Arrow vectorization |
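The archetype routing above can be approximated as a lookup table. The mapping below is inferred from the table and is only a sketch; real selection presumably uses richer context (note that "editorial" appears under two archetypes in the table):

```typescript
// Inferred signature-to-archetype lookup; names and fallback are
// assumptions, not the skill's actual selection logic.
const ARCHETYPE_BY_SIGNATURE: Record<string, string> = {
  cinematic: "Panning Scene",
  layered: "Parallax Depth Stack",
  atmospheric: "Parallax Depth Stack",
  algorithmic: "Generative Canvas",
  product: "3D Object Showcase",
  isometric: "3D Object Showcase",
  swiss: "Typographic Statement",
  "double-exposure": "Photographic Drama",
  infrared: "Photographic Drama",
};

function selectArchetype(motionSignature: string): string {
  // Assumed fallback when no signature matches.
  return ARCHETYPE_BY_SIGNATURE[motionSignature] ?? "Generative Canvas";
}
```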
Step 3: Quality Gate
Composite score is computed from 10 weighted sub-scores:
| Sub-score | Weight |
|---|---|
| Anti-slop gate | 0.15 |
| Prompt-artifact alignment | 0.15 |
| Aesthetic | 0.13 |
| Style fidelity | 0.13 |
| Distinctiveness | 0.13 |
| Asset quality | 0.12 |
| Hierarchy | 0.06 |
| Convention break adherence | 0.05 |
| Color harmony | 0.05 |
| Code standards | 0.03 |
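A sketch of how the composite is presumably combined from the table above; the key names are illustrative, the weights sum to 1.0, and each sub-score is assumed to be on a 0-10 scale:

```typescript
// Weights copied from the table above; keys are illustrative names.
const WEIGHTS = {
  antiSlop: 0.15,
  alignment: 0.15,
  aesthetic: 0.13,
  styleFidelity: 0.13,
  distinctiveness: 0.13,
  assetQuality: 0.12,
  hierarchy: 0.06,
  conventionBreak: 0.05,
  colorHarmony: 0.05,
  codeStandards: 0.03,
};

type SubScore = keyof typeof WEIGHTS;

// Weighted sum of the ten sub-scores, yielding a 0-10 composite.
function composite(scores: Record<SubScore, number>): number {
  return (Object.keys(WEIGHTS) as SubScore[]).reduce(
    (sum, k) => sum + WEIGHTS[k] * scores[k],
    0,
  );
}
```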
Minimum: 7.0/10. If below, the workflow suspends in interactive mode. Present the scores to the user and ask: accept, retry with feedback, or adjust dials. In headless mode, auto-retries up to 3 times with seed bumps.
Step 4: Deliver
Share the generated artifact with the user. The artifact includes:
- Generated hero asset (image, video, SVG, or 3D mesh)
- Full website/app code using resolved style's Tailwind v4 preset
- Quality score card with all 10 sub-scores
- Style metadata (ID, palette, motion signature, premium patterns)
Headless / CI Mode
For automated pipelines without human interaction, use the --print flag:
/one-of-a-kind-design --print Album cover for a jazz trio
In --print mode:
- No suspend, no quality gate prompts, no human-in-the-loop
- Auto-retries up to 3 times with seed bumps if quality is below 7.0
- Outputs the artifact URL and composite score when done
- Exit code 0 for success, 1 for failure
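The retry behavior described above might look like the following sketch; headlessRun and generateAndScore are stand-ins for the real pipeline entry points, not the skill's actual functions:

```typescript
// Hypothetical headless loop: bump the seed and regenerate up to
// 3 more times when the composite falls below the 7.0 gate.
async function headlessRun(
  generateAndScore: (seed: number) => Promise<{ url: string; composite: number }>,
  baseSeed = 42,
): Promise<{ url: string; composite: number; attempts: number }> {
  let result = await generateAndScore(baseSeed);
  let attempts = 1;
  while (result.composite < 7.0 && attempts <= 3) {
    result = await generateAndScore(baseSeed + attempts); // seed bump
    attempts++;
  }
  if (result.composite < 7.0) {
    process.exitCode = 1; // CI failure after initial try + 3 retries
  }
  return { ...result, attempts };
}
```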
For CI/CD scripts:
bun run .claude/skills/one-of-a-kind-design/scripts/mastra/modes/ci.ts '{"userIntent":"...","outputType":"..."}'
MCP Integration
Tools are exposed via MCP for any MCP-compatible client:
- design-generate -- Full pipeline execution
- design-resolve-style -- Style resolution only
- design-score -- Quality scoring only
- design-verify -- Verification only
Anti-Slop Rules
Full reference: references/ANTI-SLOP.md. Top rules:
- NO Inter/Roboto/Open Sans -- use style-specific typography
- NO purple-to-blue gradients -- derive palette from style
- NO default shadows -- use style's shadow model
- NO hero/features/testimonials/CTA skeleton -- restructure per style
- NO linear easing -- spring physics or style curve
- NO #000000 body text -- off-black (#111, #1a1a1a, #2F3437)
- NO Lorem Ipsum/Acme Corp -- realistic content
- NO "Elevate/Seamless/Unleash" -- specific, human copy
- NO identical card grids -- vary sizes for hierarchy
- NO simultaneous element mount -- stagger entry (80ms delay)
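The stagger rule can be sketched as a simple delay generator, assuming CSS animation-delay is the delivery mechanism:

```typescript
// One delay per element, each 80ms after the previous, so entries
// cascade instead of mounting simultaneously.
function staggerDelays(count: number, stepMs = 80): number[] {
  return Array.from({ length: count }, (_, i) => i * stepMs);
}

// Hypothetical application as inline styles:
// elements.forEach((el, i) => { el.style.animationDelay = `${i * 80}ms`; });
```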
Environment
Requires environment variables: FAL_KEY, E2B_API_KEY, QUIVERAI_API_KEY.
All scripts use Bun-only APIs, Effect-native TypeScript, max nesting depth 3.
Examples
Example 1: Restaurant Website
User says: "/one-of-a-kind-design Design a website for my omakase restaurant in Brooklyn. No food photos."
Actions:
- Resolve style: wabi-sabi (food + warm + intimate), variance 7, convention break: no food photography
- Select model: Flux Pro 1.1 (cinematic affinity), chain: t2i
- Craft prompt: 280 chars, subject-first, hex palette #8B7355 #D4C5A9 #F5F0E8
- Generate hero image via fal.ai, seed 42, E2B converts to WebP
- Verify: pHash stored, uniqueness confirmed (hamming 42 to nearest)
- Score: LLaVA 13B composite 7.9/10 PASS
Result: Full website with hand-textured hero, no food photography, seasonal palette system.
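The uniqueness check in this example (perceptual hash plus Hamming distance) can be sketched as follows; the threshold is an assumption for illustration, not the verifier's actual cutoff:

```typescript
// Hamming distance between two 64-bit pHashes given as hex strings:
// XOR the values, then count set bits.
function hammingDistance(hashA: string, hashB: string): number {
  let x = BigInt("0x" + hashA) ^ BigInt("0x" + hashB);
  let bits = 0;
  while (x > 0n) {
    bits += Number(x & 1n);
    x >>= 1n;
  }
  return bits;
}

// Assumed cutoff: anything closer than this to a stored hash is
// considered too similar to pass the uniqueness gate.
const UNIQUENESS_THRESHOLD = 10;

function isUnique(hash: string, storedHashes: string[]): boolean {
  return storedHashes.every((h) => hammingDistance(hash, h) >= UNIQUENESS_THRESHOLD);
}
```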
Example 2: Album Cover
User says: "/one-of-a-kind-design Album cover for a jazz trio's debut. Smoky, intimate, blue."
Actions:
- Resolve style: cinematic (jazz + intimate via compound map)
- Select model: Flux Pro 1.1, chain: t2i
- Craft prompt: atmospheric lighting, rich shadows, palette #1A1A2E #0F3460 #E94560
- Generate, E2B post-process, verify uniqueness
- Score: LLaVA 13B composite 7.7/10 PASS
Result: Cinematic album artwork with atmospheric depth and jazz-appropriate color theory.
Example 3: Video Trailer
User says: "/one-of-a-kind-design 15-second trailer for a contemporary opera staging deconstructed Madama Butterfly"
Actions:
- Resolve style: deconstructivism, chain: t2i-i2v (keyframe then animate)
- Generate keyframe via Flux Pro, animate via Seedance 2.0 (duration "15", aspect "21:9")
- E2B extracts first frame for quality verification
- Score: LLaVA 13B composite 7.2/10 PASS
Result: 15s cinematic video with fractured visual language and camera choreography mirroring emotional arc.
Example 4: SVG Logo
User says: "/one-of-a-kind-design Logo for a sustainable fashion brand called Thread"
Actions:
- Resolve style: editorial-minimalism (fashion + minimal)
- Route to SVG pipeline: Recraft V3 vector illustration mode
- E2B runs SVGO optimization, tests at 16px and 4K
- Score: LLaVA 13B composite 7.5/10 PASS
Result: Single-color vector logo with real SVG paths, tested across sizes.
Example 5: Mobile App
User says: "/one-of-a-kind-design Onboarding screens for a language learning app targeting adults over 40"
Actions:
- Resolve style: material-design (education + clean)
- Generate hero texture, E2B creates phone-frame mockup
- Verify uniqueness, score quality
- Score: LLaVA 13B composite 7.4/10 PASS
Result: Accessible onboarding screens with warm typography and generous touch targets.
Example 6: Headless / CI
User says: "/one-of-a-kind-design --print Event poster for a warehouse techno party in Berlin"
Actions:
- Headless mode: no suspend, no human input
- Resolve style: glitch (techno + underground), generate via Flux Pro
- Auto-score: composite 6.8 (below 7.0), auto-retry with seed 43
- Re-score: composite 7.3/10 PASS on second attempt
Result: Poster with neon typography, scan lines, RGB split. Delivered with score and URL, exit code 0.
Troubleshooting
Error: fal.ai returns 404. Cause: Model endpoint deprecated. Solution: Run bun run scripts/mastra/tools/index.test.ts to verify working endpoints.
Error: Quality score below 7.0. Cause: Prompt does not match the resolved style well enough. Solution: In interactive mode, provide feedback. In headless mode, the pipeline auto-retries 3 times with different seeds.
Error: Pixelmatch shows 0% diff after refinement. Cause: Generator did not apply feedback. Solution: The pipeline auto-escalates to a higher-tier model with explicit fix instructions.
Error: Alignment below 5.0 after 3 retries. Cause: Prompt-model mismatch or overly abstract prompt. Solution: Escalate to a higher-tier model. If still failing, flag for manual review.
Error: Style conflict detected. Cause: Two incompatible styles combined. Solution: Check references/CONFLICT-MAP.md for resolution patterns.
Performance Notes
- Take your time with style resolution. Read the full taxonomy entry before resolving.
- Do not skip validation steps even when confident. Every generation gets alignment-checked.
- Quality over speed. A 7.0 composite reached in 3 attempts beats an 8.0 achieved by luck.
- Fix one quality dimension at a time.
- After 2 failed regenerations at the same tier, escalate exactly one tier.
- Maximum 15 fal.ai calls per pipeline run.
- Never fabricate alignment scores or quality sub-scores.
Video Generation Reference
For video output types, consult these reference documents:
- references/VIDEO-SCENE-SCHEMA.md -- Complete scene ontology (entities, composition, motion, lighting, materials, atmosphere, audio, narrative, editing), JSON/YAML schemas, prompt templates for Runway/Sora/Pika styles, identity locks, continuity locks, motion realism blocks, anti-failure blocks, and evaluation rubric.
- references/VIDEO-ELEMENT-TEMPLATES.yaml -- Reusable element templates with shared enums (realism levels, framing layers, camera angles, lens types, motion types, materials, surface finishes), validation primitives, reference image roles, and clip assembly templates for multi-shot continuity.
When generating video:
- Build a scene schema using the ontology (entities, composition, lighting, motion)
- Use the appropriate prompt template variant (Runway-style, Sora-style, etc.) based on the selected model
- Apply identity lock and continuity lock blocks for multi-shot consistency
- Include the anti-failure block in negative prompts
- Evaluate output against the rubric in section 9
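A minimal scene object following those steps might look like this; every field name here is an assumption, since the authoritative shapes live in references/VIDEO-SCENE-SCHEMA.md:

```typescript
// Illustrative scene schema covering the ontology categories listed
// above. Field names and values are assumptions for demonstration.
const scene = {
  entities: [{ id: "soprano", role: "subject", identityLock: true }],
  composition: { framing: "medium-wide", angle: "low" },
  lighting: { key: "single amber practical", mood: "chiaroscuro" },
  motion: { camera: "slow dolly-in", subject: "static" },
  atmosphere: { haze: "light", palette: ["#1A1A2E", "#E94560"] },
  // Anti-failure block routed into the negative prompt:
  negative: ["warped hands", "morphing faces", "text artifacts"],
};
```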
Surgical Image Editing (Nano Banana Pro Edit)
When brief compliance check finds missing elements, or the user requests a specific edit to a generated image, use Nano Banana Pro Edit for surgical, localized changes that preserve the existing composition.
bun run scripts/edit-with-nano-banana.ts --image <url> --edit "Add three nail holes in the plaster surface, catching amber side-light"
The edit pipeline:
- Enhances the edit description into a precise Nano Banana prompt (enforcing spatial locality, dimensional integrity, photometric consistency)
- Executes the edit via fal-ai/nano-banana-pro/edit with an image_urls array
- Returns the edited image URL
Use this INSTEAD of image-to-image for adding missing elements. Nano Banana preserves the base image exactly — i2i re-generates too much.
Endpoint: fal-ai/nano-banana-pro/edit
Input: { prompt: string, image_urls: string[], seed: number }
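Given the Input shape above, the payload the script presumably builds can be sketched as follows; buildEditInput is a hypothetical helper for illustration, not part of the skill, and the actual client call is omitted:

```typescript
// Request payload matching the documented Input shape
// { prompt: string, image_urls: string[], seed: number }.
interface NanoBananaEditInput {
  prompt: string;
  image_urls: string[];
  seed: number;
}

// Hypothetical helper: wrap one base image and an edit description
// into the endpoint's input; the default seed is an assumption.
function buildEditInput(imageUrl: string, edit: string, seed = 42): NanoBananaEditInput {
  return { prompt: edit, image_urls: [imageUrl], seed };
}
```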