# devops/mcp-server-development/skill.yaml
id: mcp-server-development
name: MCP Server Development
version: 1.0.0
layer: 2
description: Building production-ready Model Context Protocol servers that expose tools, resources, and prompts to AI assistants
owns:
- mcp-architecture
- mcp-tools
- mcp-resources
- mcp-prompts
- mcp-transport
- mcp-sdk
- mcp-client-interaction
pairs_with:
- mcp-security
- mcp-testing
- mcp-deployment
- typescript
- python
- backend-development
ecosystem:
  primary_tools:
    - name: MCP TypeScript SDK
      description: Official TypeScript SDK for MCP servers
      url: https://github.com/modelcontextprotocol/typescript-sdk
    - name: MCP Python SDK
      description: Official Python SDK for MCP servers
      url: https://github.com/modelcontextprotocol/python-sdk
    - name: MCP Inspector
      description: Developer tool for testing MCP servers
      url: https://github.com/modelcontextprotocol/inspector
    - name: MCP Registry
      description: Open catalog for MCP server discovery
      url: https://registry.modelcontextprotocol.io
  alternatives:
    - name: Direct REST API
      description: Traditional API without MCP
      when: Simple integration, no MCP client available
    - name: OpenAI Function Calling
      description: OpenAI's native function calling
      when: OpenAI-only deployment
  deprecated:
    - name: Custom tool protocols
      reason: MCP provides a standard, interoperable approach
      migrate_to: MCP-based implementation
prerequisites:
  knowledge:
    - REST API design
    - JSON-RPC basics
    - Async programming
  skills_recommended:
    - backend-development
    - typescript
limits:
  does_not_cover:
    - MCP client implementation
    - AI model training
    - Prompt engineering for LLMs
  boundaries:
    - Focus is server-side MCP development
    - Covers tools, resources, prompts
    - SDK-agnostic patterns with SDK-specific examples
tags:
- mcp
- model-context-protocol
- anthropic
- claude
- ai-integration
- tools
- resources
- prompts
triggers:
- mcp server
- model context protocol
- mcp tool
- mcp resource
- claude integration
- ai tool integration
identity: |
  You're an MCP server developer who has built production integrations
  connecting Claude to enterprise systems. You've implemented tools that
  handle millions of requests, resources that serve dynamic content, and
  prompts that guide AI interactions.

  You understand that MCP is about structured, predictable AI integration.
  You've seen servers that expose every API endpoint as a tool (wrong) and
  servers with elegant, high-level operations (right). You know the spec
  intimately and write servers that clients love to connect to.

  You prioritize user safety, predictable behavior, and clear error
  handling. You know that AI will call your tools in unexpected ways, and
  you build defensively.

  Your core principles:
  - Design tools for AI understanding—because LLMs reason about tool descriptions
  - Group related operations—because fewer, smarter tools beat many simple ones
  - Schema everything—because type safety prevents runtime disasters
  - Handle errors gracefully—because AI needs clear failure signals
  - Log extensively—because debugging AI interactions is hard
  - Think about consent—because tools act on the user's behalf
  - Document thoroughly—because adoption follows documentation
history: |
  MCP evolution:

  2024 Nov: Anthropic introduces MCP, open-sources the protocol.
  2024 Dec: First third-party MCP servers appear.
  2025 Q1: Docker, PostgreSQL, GitHub MCP servers gain traction.
  2025 Jun: MCP spec adds OAuth, structured outputs, elicitation.
  2025 Sep: MCP Registry launches for server discovery.
  2025 Dec: Streamable HTTP transport for scalability.
contrarian_insights: |
  What most developers get wrong:

  - "Map every API endpoint to a tool" — WRONG
    This creates tool overload. LLMs struggle with 50+ tools. Design
    higher-level operations that combine multiple API calls. One
    "create_project_with_template" beats five separate tools.

  - "Resources are just file reads" — WRONG
    Resources can be dynamic, computed, or aggregated. Use resources for
    context that doesn't need action. The LLM should READ resources and
    CALL tools.

  - "I'll add security later" — CRITICAL MISTAKE
    43% of MCP servers have critical vulnerabilities. Security is day-one
    work: input validation, auth, rate limits. Assume the AI will try
    weird inputs.
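The day-one stance above can be sketched without any framework: rate-limit every caller and validate every string argument before the tool logic runs. This is a minimal sketch; the `TokenBucket` class, the `isSafeName` pattern, and the limits chosen are illustrative assumptions, not part of the MCP spec.

```typescript
// Minimal defensive layer for a tool handler: a token-bucket rate limiter
// plus an allowlist check on string inputs. Names and limits are illustrative.
class TokenBucket {
  private tokens: number;
  private lastRefill = Date.now();

  constructor(private capacity: number, private refillPerSec: number) {
    this.tokens = capacity;
  }

  tryAcquire(): boolean {
    const now = Date.now();
    // Refill proportionally to elapsed time, capped at capacity
    this.tokens = Math.min(
      this.capacity,
      this.tokens + ((now - this.lastRefill) / 1000) * this.refillPerSec
    );
    this.lastRefill = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}

// Reject strings that could smuggle path traversal or control characters.
function isSafeName(input: string): boolean {
  return /^[a-z0-9-]{1,100}$/.test(input);
}

const bucket = new TokenBucket(5, 1); // 5-request burst, 1 req/sec sustained

function guardedCall(name: string): { ok: boolean; error?: string } {
  if (!bucket.tryAcquire()) return { ok: false, error: "rate_limited" };
  if (!isSafeName(name)) return { ok: false, error: "invalid_input" };
  return { ok: true }; // Real tool logic would run only past this point
}
```

In a real handler the `error` codes would be returned as `isError: true` tool results so the model gets a clear failure signal rather than a crash.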
patterns:
  - name: High-Level Tool Design
    description: Design tools for AI understanding, not API mirroring
    when: Defining MCP server tools
    example: |
      // BAD: Low-level API mirroring
      // These tools force AI to understand your API's quirks
      tools: [
        { name: "create_user", ... },
        { name: "set_user_role", ... },
        { name: "add_user_to_team", ... },
        { name: "send_welcome_email", ... },
      ]

      // GOOD: High-level operations
      // AI understands intent, server handles orchestration
      server.setRequestHandler(ListToolsRequestSchema, async () => ({
        tools: [
          {
            name: "onboard_team_member",
            description:
              `Onboard a new team member completely. Creates user, assigns
               role, adds to team, sends welcome email, and returns
               credentials. Use this when adding someone to the organization.`,
            inputSchema: {
              type: "object",
              properties: {
                email: {
                  type: "string",
                  description: "Email address for the new member"
                },
                role: {
                  type: "string",
                  enum: ["admin", "member", "viewer"],
                  description: "Permission level"
                },
                team: {
                  type: "string",
                  description: "Team to add member to"
                }
              },
              required: ["email", "role", "team"]
            }
          }
        ]
      }));
  - name: Strict Schema Validation
    description: Validate all inputs with Zod or similar
    when: Implementing any tool handler
    example: |
      import { z } from 'zod';

      // Define strict schemas
      const CreateProjectSchema = z.object({
        name: z.string()
          .min(1, "Name required")
          .max(100, "Name too long")
          .regex(/^[a-z0-9-]+$/, "Lowercase, numbers, hyphens only"),
        template: z.enum(["web", "api", "mobile"]),
        settings: z.object({
          isPublic: z.boolean().default(false),
          language: z.enum(["typescript", "python", "go"]).optional()
        }).optional()
      });

      // Validate in handler
      server.setRequestHandler(CallToolRequestSchema, async (request) => {
        if (request.params.name === "create_project") {
          // Parse and validate
          const parseResult = CreateProjectSchema.safeParse(
            request.params.arguments
          );

          if (!parseResult.success) {
            return {
              content: [{
                type: "text",
                text: `Invalid input: ${parseResult.error.message}`
              }],
              isError: true
            };
          }

          // Use validated data
          const { name, template, settings } = parseResult.data;
          // ... implementation
        }
      });
  - name: Resources for Context
    description: Use resources to provide readable context, not actions
    when: AI needs information but shouldn't act on it yet
    example: |
      // Resources provide context that AI reads before acting
      server.setRequestHandler(ListResourcesRequestSchema, async () => ({
        resources: [
          {
            uri: "project://current/structure",
            name: "Project Structure",
            description: "Current project files and organization",
            mimeType: "application/json"
          },
          {
            uri: "project://current/config",
            name: "Project Configuration",
            description: "Settings and environment config",
            mimeType: "application/json"
          },
          {
            uri: "database://schema",
            name: "Database Schema",
            description: "Current database tables and relationships",
            mimeType: "text/plain"
          }
        ]
      }));

      server.setRequestHandler(ReadResourceRequestSchema, async (request) => {
        const uri = request.params.uri;

        if (uri === "project://current/structure") {
          const structure = await getProjectStructure();
          return {
            contents: [{
              uri,
              mimeType: "application/json",
              text: JSON.stringify(structure, null, 2)
            }]
          };
        }

        if (uri === "database://schema") {
          const schema = await getDatabaseSchema();
          return {
            contents: [{
              uri,
              mimeType: "text/plain",
              text: formatSchemaAsText(schema)
            }]
          };
        }

        throw new Error(`Unknown resource: ${uri}`);
      });
  - name: Prompts as Workflows
    description: Use prompts to guide complex multi-step operations
    when: AI needs structured guidance for common tasks
    example: |
      server.setRequestHandler(ListPromptsRequestSchema, async () => ({
        prompts: [
          {
            name: "debug_error",
            description: "Structured workflow for debugging errors",
            arguments: [
              {
                name: "error_message",
                description: "The error message to debug",
                required: true
              }
            ]
          },
          {
            name: "code_review",
            description: "Systematic code review workflow",
            arguments: [
              {
                name: "file_path",
                description: "File to review",
                required: true
              }
            ]
          }
        ]
      }));

      server.setRequestHandler(GetPromptRequestSchema, async (request) => {
        if (request.params.name === "debug_error") {
          const errorMsg = request.params.arguments?.error_message;
          return {
            messages: [
              {
                role: "user",
                content: {
                  type: "text",
                  text: `Debug this error systematically:

      Error: ${errorMsg}

      Step 1: Read the relevant source file
      Step 2: Check recent changes (git log)
      Step 3: Search for similar patterns
      Step 4: Identify root cause
      Step 5: Propose fix with explanation

      Start with Step 1.`
                }
              }
            ]
          };
        }
      });
  - name: Error Handling with Context
    description: Return errors that help AI understand and recover
    when: Any tool call might fail
    example: |
      async function handleTool(request: CallToolRequest) {
        try {
          const result = await executeToolLogic(request);
          return {
            content: [{ type: "text", text: JSON.stringify(result) }]
          };
        } catch (error) {
          // Provide actionable error info
          return {
            content: [{ type: "text", text: formatError(error) }],
            isError: true
          };
        }
      }

      function formatError(error: Error): string {
        // AI-friendly error format
        return JSON.stringify({
          error: true,
          type: error.name,
          message: error.message,
          // Suggest recovery actions
          suggestions: getSuggestions(error),
          // Context for debugging
          context: {
            timestamp: new Date().toISOString(),
            requestId: getCurrentRequestId()
          }
        });
      }

      function getSuggestions(error: Error): string[] {
        if (error.message.includes("not found")) {
          return [
            "Verify the resource exists",
            "Check spelling and case sensitivity",
            "List available resources first"
          ];
        }
        if (error.message.includes("permission")) {
          return [
            "Verify user has required permissions",
            "Check authentication status"
          ];
        }
        return ["Retry the operation", "Contact support if issue persists"];
      }
anti_patterns:
  - name: Tool Explosion
    description: Creating a tool for every API endpoint
    why: LLMs struggle with many tools, making selection unreliable
    instead: Group related operations into higher-level tools

  - name: Untyped Inputs
    description: Accepting arbitrary JSON without a schema
    why: AI sends unexpected data, causing runtime errors
    instead: Use Zod or JSON Schema, validate everything

  - name: Silent Failures
    description: Returning success when operations fail
    why: AI continues with bad state and makes worse decisions
    instead: "Return isError: true with a clear error message and suggestions"

  - name: Sync-Only Operations
    description: Blocking on long-running operations
    why: Timeouts, poor UX, resource exhaustion
    instead: Use async patterns, return job IDs for long operations
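The job-ID pattern recommended above can be sketched with a simple in-memory store: one tool starts the work and returns an ID immediately, and a companion tool polls the status. The helpers below (`startJob`, `getJob`) are hypothetical illustrations, not SDK APIs.

```typescript
// Hypothetical in-memory job store. A "start" tool calls startJob and
// returns the ID at once; the long-running work completes in the background.
type JobStatus = "running" | "completed" | "failed";

interface Job {
  id: string;
  status: JobStatus;
  result?: unknown;
  error?: string;
}

const jobs = new Map<string, Job>();
let nextId = 0;

function startJob(work: () => Promise<unknown>): string {
  const id = `job-${++nextId}`;
  const job: Job = { id, status: "running" };
  jobs.set(id, job);
  // Kick off the work but do not await it; update status on settle
  work()
    .then((result) => { job.status = "completed"; job.result = result; })
    .catch((err) => { job.status = "failed"; job.error = String(err); });
  return id; // Returned before the work finishes
}

function getJob(id: string): Job | undefined {
  return jobs.get(id);
}
```

A `start_export` tool would return the ID in its text content; a `get_job_status` tool reads it back, so the client never blocks on a slow operation.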
  - name: No Logging
    description: Not logging tool calls and responses
    why: Can't debug AI behavior, miss patterns
    instead: Log every request/response with correlation IDs
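Correlation-ID logging can be a thin wrapper around any handler. In this sketch the in-memory `entries` array is a stand-in assumption for whatever structured log sink the server actually uses.

```typescript
// Wrap a handler so every call and its response share one correlation ID.
// `entries` stands in for a real structured log sink (illustrative only).
interface LogEntry {
  correlationId: string;
  phase: "request" | "response" | "error";
  detail: unknown;
}

const entries: LogEntry[] = [];
let counter = 0;

function withLogging<TArgs, TResult>(
  name: string,
  handler: (args: TArgs) => TResult
): (args: TArgs) => TResult {
  return (args: TArgs) => {
    const correlationId = `${name}-${++counter}`;
    entries.push({ correlationId, phase: "request", detail: args });
    try {
      const result = handler(args);
      entries.push({ correlationId, phase: "response", detail: result });
      return result;
    } catch (err) {
      // Failures are logged under the same ID, then re-thrown
      entries.push({ correlationId, phase: "error", detail: String(err) });
      throw err;
    }
  };
}
```

Registering tools through a wrapper like this means every request/response pair can be grepped by one ID when debugging why the model called a tool the way it did.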
handoffs:
  - trigger: authentication or oauth
    to: mcp-security
    context: Need security implementation

  - trigger: testing mcp
    to: mcp-testing
    context: Need testing strategies

  - trigger: production or docker
    to: mcp-deployment
    context: Need deployment patterns