skilllibrary · mcp-development
End-to-end MCP server development lifecycle — from API analysis through implementation, testing, and deployment. Use when building a multi-tool MCP server that wraps an external API or service, planning the tool/resource/prompt surface for a new MCP project, or iterating on an existing MCP server's design.
install
- Source · Clone the upstream repo:
  `git clone https://github.com/merceralex397-collab/skilllibrary`
- Claude Code · Install into `~/.claude/skills/`:
  `T=$(mktemp -d) && git clone --depth=1 https://github.com/merceralex397-collab/skilllibrary "$T" && mkdir -p ~/.claude/skills && cp -r "$T/07-mcp/mcp-development" ~/.claude/skills/merceralex397-collab-skilllibrary-mcp-development && rm -rf "$T"`
manifest: `07-mcp/mcp-development/SKILL.md`
Purpose
Guide the full development lifecycle of an MCP server — from understanding the target API, through designing the tool/resource/prompt surface, to implementation, testing, and deployment. This is the hub skill for MCP server projects that span multiple phases.
When to use this skill
- Building a new MCP server that wraps an external API or service
- Planning which tools, resources, and prompts an MCP server should expose
- Iterating on an existing MCP server (adding tools, improving schemas, fixing patterns)
- Need a structured development workflow rather than ad hoc implementation
Do not use this skill when
- Scaffolding a first-ever MCP server → use `get-started`
- The task is narrowly about one tool's schema design → use `mcp-tool-design`
- The task is only about testing → use `mcp-testing-evals`
- Following a specific SDK's patterns → use `mcp-typescript-sdk` or `mcp-python-fastmcp`
Operating procedure
Phase 1 — Research and plan
- Study the target API. Read its documentation. Identify key endpoints, auth model, data shapes, rate limits, and pagination patterns.
- Map endpoints to MCP primitives:
- Tools — for actions the LLM should invoke (queries, mutations, computations)
- Resources — for read-only context data the host can present (schemas, configs, documentation)
- Prompts — for reusable interaction templates (e.g., "analyze this data using the available tools")
- Decide coverage strategy. Prioritize comprehensive API coverage over bespoke workflow tools. Comprehensive coverage lets agents compose operations flexibly.
- Choose stack. TypeScript (recommended for broadest compatibility) or Python (FastMCP for rapid development). See `mcp-typescript-sdk` or `mcp-python-fastmcp`.
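The endpoint-to-primitive mapping from step 2 can be captured as a simple planning table before any server code exists. A minimal sketch in plain Python, with hypothetical GitHub-style endpoints as placeholders:

```python
# Hypothetical planning table: each entry records which MCP primitive an
# endpoint maps to and why. Endpoint names here are illustrative only.
ENDPOINT_PLAN = [
    # (endpoint, primitive, rationale)
    ("GET /repos/{owner}/{repo}/issues",  "tool",     "query the LLM invokes on demand"),
    ("POST /repos/{owner}/{repo}/issues", "tool",     "mutation the LLM performs"),
    ("GET /meta/schema",                  "resource", "read-only context for the host"),
    ("analyze-issues template",           "prompt",   "reusable interaction template"),
]

def by_primitive(plan, primitive):
    """Return the endpoints planned for a given MCP primitive."""
    return [endpoint for endpoint, kind, _ in plan if kind == primitive]
```

Keeping the plan as data makes the coverage decision reviewable before implementation starts.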
Phase 2 — Implement core infrastructure
- API client wrapper — centralized auth, error handling, retry logic
- Response formatting — consistent JSON or Markdown output across tools
- Pagination helper — many APIs and MCP `*/list` methods use cursor-based pagination
- Error mapping — translate API errors to MCP error format:
  - Protocol errors: standard JSON-RPC error codes (-32600 to -32603)
  - Tool execution errors: `{ isError: true, content: [{ type: "text", text: "..." }] }`
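A minimal sketch of the pagination and error-mapping pieces above, in plain Python with a hypothetical handler signature (the exact shapes depend on your SDK):

```python
import json

def paginate(fetch_page, cursor=None):
    """Collect items across cursor-based pages.

    fetch_page(cursor) -> (items, next_cursor); next_cursor is None on the last page.
    """
    items = []
    while True:
        page, cursor = fetch_page(cursor)
        items.extend(page)
        if cursor is None:
            return items

def tool_error(message: str) -> dict:
    """Wrap a tool execution failure in the MCP in-band error shape."""
    return {"isError": True, "content": [{"type": "text", "text": message}]}

def run_tool(handler, **kwargs) -> dict:
    """Run a handler; API failures become isError results, not protocol errors."""
    try:
        result = handler(**kwargs)
        return {"content": [{"type": "text", "text": json.dumps(result)}]}
    except PermissionError:
        # Actionable message: what went wrong AND what to try next.
        return tool_error("API returned 403: check that the API key has "
                          "write permissions for this repository.")
    except Exception as exc:
        return tool_error(f"Unexpected error: {exc}")
```

Centralizing this in one wrapper means every tool inherits the same error and pagination behavior for free.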
Phase 3 — Implement primitives
For each tool:
- Define `inputSchema` using Zod (TS) or Pydantic/type hints (Python)
- Write a clear `description` — this is what the LLM reads to decide when to use the tool
- Set `annotations`: `readOnlyHint`, `destructiveHint`, `idempotentHint`, `openWorldHint`
- Implement the handler with async I/O, proper error handling, and pagination support
- Optionally define `outputSchema` for structured responses
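The per-tool checklist above can be sketched as a small helper that assembles a `tools/list` entry; this dependency-free Python builds the plain JSON shape an SDK would emit for you, and the tool name and schema are illustrative:

```python
def make_tool(name, description, input_schema, *, read_only=False,
              destructive=False, idempotent=False, open_world=True,
              output_schema=None):
    """Assemble a tools/list entry with the fields every tool should carry."""
    tool = {
        "name": name,
        "description": description,
        "inputSchema": input_schema,
        "annotations": {
            "readOnlyHint": read_only,
            "destructiveHint": destructive,
            "idempotentHint": idempotent,
            "openWorldHint": open_world,
        },
    }
    if output_schema is not None:
        tool["outputSchema"] = output_schema
    return tool

# Hypothetical example tool: a read-only, idempotent query.
list_repos = make_tool(
    "github_list_repos",
    "List repositories for a user or organization. Use when the user asks "
    "what repos exist; returns name, visibility, and default branch.",
    {
        "type": "object",
        "properties": {"owner": {"type": "string", "description": "User or org login"}},
        "required": ["owner"],
    },
    read_only=True,
    idempotent=True,
)
```

Note the description says both what the tool does and when to use it, since that text is the LLM's only selection signal.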
For each resource:
- Choose a URI scheme (e.g., `file://`, `https://`, or custom)
- Implement `resources/list` and `resources/read`
- Support `resources/subscribe` if the data changes over time
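The `resources/list` and `resources/read` pair above can be sketched over an in-memory registry; a real server would back this with files or API responses, and the `config://` URI scheme here is a hypothetical custom scheme:

```python
# Hypothetical in-memory resource registry keyed by URI.
RESOURCES = {
    "config://app/settings": {
        "name": "App settings",
        "mimeType": "application/json",
        "text": '{"theme": "dark"}',
    },
}

def resources_list() -> dict:
    """resources/list result: URIs plus display metadata, no contents."""
    return {"resources": [
        {"uri": uri, "name": meta["name"], "mimeType": meta["mimeType"]}
        for uri, meta in RESOURCES.items()
    ]}

def resources_read(uri: str) -> dict:
    """resources/read result: the contents for a single URI."""
    meta = RESOURCES[uri]
    return {"contents": [
        {"uri": uri, "mimeType": meta["mimeType"], "text": meta["text"]}
    ]}
```

Listing returns only metadata so hosts can present resources cheaply; contents are fetched on demand via read.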
For each prompt:
- Define the prompt name, description, and arguments
- Return a structured `messages` array with `role` and `content`
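A prompt handler following the steps above can be sketched as a function that fills its arguments into a `messages` array; the prompt name and argument are illustrative:

```python
def analyze_data_prompt(dataset: str) -> dict:
    """prompts/get result: a messages array of role/content pairs."""
    return {
        "description": "Analyze a dataset using the server's tools",
        "messages": [
            {
                "role": "user",
                "content": {
                    "type": "text",
                    "text": f"Analyze the dataset '{dataset}' using the "
                            "available tools and summarize key trends.",
                },
            }
        ],
    }
```

The host substitutes user-supplied arguments and sends the resulting messages to the model, so prompts act as parameterized templates rather than free text.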
Phase 4 — Declare capabilities
In the `initialize` response, declare what the server supports:

```json
{
  "capabilities": {
    "tools": { "listChanged": true },
    "resources": { "subscribe": true, "listChanged": true },
    "prompts": { "listChanged": true },
    "logging": {}
  }
}
```
Only declare capabilities you actually implement.
Phase 5 — Test and validate
- Build: `npm run build` (TS) or `python -m py_compile` (Python)
- Inspector: `npx @modelcontextprotocol/inspector` — verify all tools list, call correctly, return expected schemas
- Host test: Connect to a real host (Claude Desktop, VS Code, Codex) and verify end-to-end
- Edge cases: Test with invalid inputs, missing auth, rate-limited APIs, empty results
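Edge-case checks like those above can be written as plain unit tests against the tool handler itself, independent of any host; the handler below is a hypothetical stand-in that validates input before touching the API:

```python
def get_issue(arguments: dict) -> dict:
    """Hypothetical tool handler: validates input before calling the API."""
    number = arguments.get("number")
    if not isinstance(number, int) or number < 1:
        # Invalid input comes back as an in-band tool error,
        # never as an unhandled exception.
        return {"isError": True,
                "content": [{"type": "text",
                             "text": "Invalid 'number': expected a positive integer."}]}
    return {"content": [{"type": "text", "text": f"Issue #{number}: ..."}]}

assert get_issue({})["isError"] is True
assert get_issue({"number": -5})["isError"] is True
assert "isError" not in get_issue({"number": 7})
```

Exercising the handlers this way catches schema and validation gaps before the slower Inspector and host tests.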
Phase 6 — Ship
- Choose transport (stdio for local, Streamable HTTP for remote)
- Add auth if remote (see `mcp-auth-transports`)
- Write README with installation and configuration instructions
- Publish to npm/PyPI or register in an MCP registry
Decision rules
- Tool naming: Use `<domain>_<action>` prefix pattern (e.g., `github_create_issue`, `github_list_repos`) for discoverability
- Tool count: Start with the most common 10-15 operations. Expand based on usage, not speculation.
- Resource vs Tool: If the LLM needs to take action → tool. If the host needs context → resource.
- Error messages: Include what went wrong AND what to try next. "API returned 403: check that the API key has write permissions for this repository."
- DRY: Extract shared API client, pagination, and formatting into utility modules
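The naming rule above is easy to enforce mechanically; a minimal sketch, assuming `<domain>_<action>` means a lowercase domain prefix followed by a snake_case action:

```python
import re

# <domain>_<action>: lowercase domain, underscore, snake_case action.
TOOL_NAME = re.compile(r"^[a-z][a-z0-9]*_[a-z][a-z0-9_]*$")

def check_tool_names(names: list[str]) -> list[str]:
    """Return the names that break the <domain>_<action> convention."""
    return [n for n in names if not TOOL_NAME.fullmatch(n)]
```

Running this in CI keeps the tool surface consistent as new operations are added.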
Output requirements
- Working MCP server with declared capabilities
- All tools have `inputSchema`, `description`, and `annotations`
- Passes MCP Inspector verification for all primitives
- README with host configuration examples
Related skills
- `mcp-tool-design` — deep dive on individual tool design
- `mcp-resources-prompts` — resource and prompt implementation patterns
- `mcp-testing-evals` — testing strategy and evaluation creation
- `mcp-builder` — reference implementation guide from Anthropic
Failure handling
- If the target API lacks prose documentation, fall back to its OpenAPI/Swagger spec if one exists; otherwise explore endpoints manually and document findings as you go
- If a tool is too complex for a single call, split into a read step and a write step rather than one tool that does both
- If tool count exceeds 30, consider splitting into multiple focused MCP servers rather than one monolith