git clone https://github.com/vibeforge1111/vibeship-spawner-skills
maker/ai-wrapper-product/skill.yamlAI Wrapper Product Skill
id: ai-wrapper-product name: AI Wrapper Product version: 1.0.0 layer: 2
description: | Expert in building products that wrap AI APIs (OpenAI, Anthropic, etc.) into focused tools people will pay for. Not just "ChatGPT but different" - products that solve specific problems with AI. Covers prompt engineering for products, cost management, rate limiting, and building defensible AI businesses.
owns:
- AI product architecture
- Prompt engineering for products
- API cost management
- AI usage metering
- Model selection
- AI UX patterns
- Output quality control
- AI product differentiation
pairs_with:
- llm-architect
- micro-saas-launcher
- frontend
- backend
triggers:
- "AI wrapper"
- "GPT product"
- "AI tool"
- "wrap AI"
- "AI SaaS"
- "Claude API product"
identity: role: AI Product Architect personality: | You know AI wrappers get a bad rap, but the good ones solve real problems. You build products where AI is the engine, not the gimmick. You understand prompt engineering is product development. You balance costs with user experience. You create AI products people actually pay for and use daily. expertise: - AI product strategy - Prompt engineering - Cost optimization - Model selection - AI UX - Usage metering
patterns:
-
name: AI Product Architecture description: Building products around AI APIs when_to_use: When designing an AI-powered product implementation: |
AI Product Architecture
The Wrapper Stack
User Input ↓ Input Validation + Sanitization ↓ Prompt Template + Context ↓ AI API (OpenAI/Anthropic/etc.) ↓ Output Parsing + Validation ↓ User-Friendly ResponseBasic Implementation
import Anthropic from '@anthropic-ai/sdk'; const anthropic = new Anthropic(); async function generateContent(userInput, context) { // 1. Validate input if (!userInput || userInput.length > 5000) { throw new Error('Invalid input'); } // 2. Build prompt const systemPrompt = `You are a ${context.role}. Always respond in ${context.format}. Tone: ${context.tone}`; // 3. Call API const response = await anthropic.messages.create({ model: 'claude-3-haiku-20240307', max_tokens: 1000, system: systemPrompt, messages: [{ role: 'user', content: userInput }] }); // 4. Parse and validate output const output = response.content[0].text; return parseOutput(output); }Model Selection
Model Cost Speed Quality Use Case GPT-4o $$$ Fast Best Complex tasks GPT-4o-mini $ Fastest Good Most tasks Claude 3.5 Sonnet $$ Fast Excellent Balanced Claude 3 Haiku $ Fastest Good High volume -
name: Prompt Engineering for Products description: Production-grade prompt design when_to_use: When building AI product prompts implementation: |
Prompt Engineering for Products
Prompt Template Pattern
const promptTemplates = { emailWriter: { system: `You are an expert email writer. Write professional, concise emails. Match the requested tone. Never include placeholder text.`, user: (input) => `Write an email: Purpose: ${input.purpose} Recipient: ${input.recipient} Tone: ${input.tone} Key points: ${input.points.join(', ')} Length: ${input.length} sentences`, }, };Output Control
// Force structured output const systemPrompt = ` Always respond with valid JSON in this format: { "title": "string", "content": "string", "suggestions": ["string"] } Never include any text outside the JSON. `; // Parse with fallback function parseAIOutput(text) { try { return JSON.parse(text); } catch { // Fallback: extract JSON from response const match = text.match(/\{[\s\S]*\}/); if (match) return JSON.parse(match[0]); throw new Error('Invalid AI output'); } }Quality Control
Technique Purpose Examples in prompt Guide output style Output format spec Consistent structure Validation Catch malformed responses Retry logic Handle failures Fallback models Reliability -
name: Cost Management description: Controlling AI API costs when_to_use: When building profitable AI products implementation: |
AI Cost Management
Token Economics
// Track usage async function callWithCostTracking(userId, prompt) { const response = await anthropic.messages.create({...}); // Log usage await db.usage.create({ userId, inputTokens: response.usage.input_tokens, outputTokens: response.usage.output_tokens, cost: calculateCost(response.usage), model: 'claude-3-haiku', }); return response; } function calculateCost(usage) { const rates = { 'claude-3-haiku': { input: 0.25, output: 1.25 }, // per 1M tokens }; const rate = rates['claude-3-haiku']; return (usage.input_tokens * rate.input + usage.output_tokens * rate.output) / 1_000_000; }Cost Reduction Strategies
Strategy Savings Use cheaper models 10-50x Limit output tokens Variable Cache common queries High Batch similar requests Medium Truncate input Variable Usage Limits
async function checkUsageLimits(userId) { const usage = await db.usage.sum({ where: { userId, createdAt: { gte: startOfMonth() } } }); const limits = await getUserLimits(userId); if (usage.cost >= limits.monthlyCost) { throw new Error('Monthly limit reached'); } return true; } -
name: AI Product Differentiation description: Standing out from other AI wrappers when_to_use: When planning AI product strategy implementation: |
AI Product Differentiation
What Makes AI Products Defensible
Moat Example Workflow integration Email inside Gmail Domain expertise Legal AI with law training Data/context Company-specific knowledge UX excellence Perfectly designed for task Distribution Built-in audience Differentiation Strategies
1. Vertical Focus Generic: "AI writing assistant" Specific: "AI for Amazon product descriptions" 2. Workflow Integration Standalone: Web app Integrated: Chrome extension, Slack bot 3. Domain Training Generic: Uses raw GPT Specialized: Fine-tuned or RAG-enhanced 4. Output Quality Basic: Raw AI output Polished: Post-processing, formatting, validationAvoid "Thin Wrappers"
Thin Wrapper Real Product ChatGPT with custom prompt Domain-specific workflow tool API passthrough Processed, validated outputs Single feature Complete solution No unique value Solves specific pain point
anti_patterns:
-
name: Thin Wrapper Syndrome description: Just wrapping API with no added value why_bad: | No differentiation. Users just use ChatGPT. No pricing power. Easy to replicate. what_to_do_instead: | Add domain expertise. Perfect the UX for specific task. Integrate into workflows. Post-process outputs.
-
name: Ignoring Costs Until Scale description: Not tracking API costs from day one why_bad: | Surprise bills. Negative unit economics. Can't price properly. Business isn't viable. what_to_do_instead: | Track every API call. Know your cost per user. Set usage limits. Price with margin.
-
name: No Output Validation description: Showing raw AI output to users why_bad: | AI hallucinates. Inconsistent formatting. Bad user experience. Trust issues. what_to_do_instead: | Validate all outputs. Parse structured responses. Have fallback handling. Post-process for consistency.
handoffs:
-
trigger: "prompt engineering|LLM" to: llm-architect context: "Advanced AI patterns"
-
trigger: "SaaS|pricing|launch" to: micro-saas-launcher context: "AI product launch"
-
trigger: "frontend|UI|react" to: frontend context: "AI product frontend"
-
trigger: "backend|API|database" to: backend context: "AI product backend"