Vibeship-spawner-skills mcp-testing

id: mcp-testing

install
source · Clone the upstream repo
git clone https://github.com/vibeforge1111/vibeship-spawner-skills
manifest: devops/mcp-testing/skill.yaml
source content

id: mcp-testing
name: MCP Testing
version: 1.0.0
layer: 2
description: Testing strategies for MCP servers including unit tests, integration tests, schema validation, and security testing

owns:

  • mcp-unit-testing
  • mcp-integration-testing
  • mcp-schema-testing
  • mcp-security-testing
  • mcp-inspector

pairs_with:

  • mcp-server-development
  • mcp-security
  • mcp-deployment
  • testing

ecosystem:
  primary_tools:
    - name: MCP Inspector
      description: Official testing tool for MCP servers
      url: https://github.com/modelcontextprotocol/inspector
    - name: Vitest
      description: Fast TypeScript testing framework
      url: https://vitest.dev
    - name: Jest
      description: Popular JavaScript testing framework
      url: https://jestjs.io

prerequisites:
  knowledge:
    - Unit testing basics
    - Integration testing concepts
    - JSON Schema
  skills_recommended:
    - mcp-server-development
    - testing

limits:
  does_not_cover:
    - General testing theory
    - Performance testing
    - Load testing
  boundaries:
    - Focus is MCP-specific testing
    - Covers schema, tools, resources, prompts

tags:

  • mcp
  • testing
  • unit-testing
  • integration-testing
  • schema-validation

triggers:

  • mcp testing
  • test mcp server
  • mcp inspector
  • mcp validation

identity: |
  You're an MCP testing specialist who has caught critical bugs before production.
  You've seen servers that "worked" in development crash spectacularly when an AI
  client sent unexpected inputs. You write tests that think like an AI client.

Your core principles:

  1. Schema tests first—because invalid schemas cause runtime failures
  2. Test AI-like inputs—because AI sends unexpected combinations
  3. Integration over unit—because MCP is about interactions
  4. Security tests mandatory—because 43% of servers have vulnerabilities
  5. Automate everything—because manual MCP testing is tedious

patterns:

  • name: Schema Validation Tests
    description: Test that tool schemas are valid and complete
    when: Any tool definition
    example: |
    import { describe, it, expect } from 'vitest';
    import { validateSchema } from '../src/schema-validator';

    describe('Tool Schemas', () => {
      it('all tools have valid inputSchema', () => {
          const tools = getToolDefinitions();
          for (const tool of tools) {
              expect(tool.inputSchema).toBeDefined();
              expect(tool.inputSchema.type).toBe('object');
              expect(validateSchema(tool.inputSchema)).toBe(true);
          }
      });

      it('all tools have descriptions', () => {
          const tools = getToolDefinitions();
          for (const tool of tools) {
              expect(tool.description).toBeTruthy();
              expect(tool.description.length).toBeGreaterThan(20);
          }
      });
    
      it('required fields are defined in properties', () => {
          const tools = getToolDefinitions();
          for (const tool of tools) {
              const required = tool.inputSchema.required || [];
              const properties = Object.keys(tool.inputSchema.properties || {});
              for (const field of required) {
                  expect(properties).toContain(field);
              }
          }
      });
    

    });
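    The `validateSchema` helper imported above is project-specific and not shown in this skill. A minimal dependency-free sketch of what such a helper might check (a real project would likely use a full JSON Schema validator such as Ajv instead):

    ```typescript
    // Hypothetical sketch of a validateSchema helper for MCP tool schemas.
    // Only the MCP-specific basics are checked here, not full JSON Schema.
    interface ToolInputSchema {
      type?: string;
      properties?: Record<string, unknown>;
      required?: string[];
    }

    function validateSchema(schema: ToolInputSchema): boolean {
      // An MCP tool inputSchema must describe an object
      if (schema.type !== 'object') return false;
      const properties = Object.keys(schema.properties ?? {});
      // Every required field must also be declared under properties
      return (schema.required ?? []).every((field) => properties.includes(field));
    }
    ```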

  • name: Tool Handler Tests
    description: Test tool execution with various inputs
    when: Testing tool implementations
    example: |
    describe('create_project tool', () => {
      it('creates project with valid input', async () => {
          const result = await callTool('create_project', {
              name: 'test-project',
              template: 'web'
          });

          expect(result.isError).toBeFalsy();
          expect(result.content[0].text).toContain('created');
      });
    
      it('rejects invalid project name', async () => {
          const result = await callTool('create_project', {
              name: 'Invalid Name!',  // Contains invalid chars
              template: 'web'
          });
    
          expect(result.isError).toBe(true);
          expect(result.content[0].text).toContain('invalid');
      });
    
      it('handles missing required fields', async () => {
          const result = await callTool('create_project', {
              // Missing name and template
          });
    
          expect(result.isError).toBe(true);
      });
    
      it('handles unexpected extra fields', async () => {
          const result = await callTool('create_project', {
              name: 'test-project',
              template: 'web',
              unknownField: 'should be ignored or rejected'
          });
    
          // Assumes strict validation; a lenient server
          // would ignore unknown fields instead
          expect(result.isError).toBe(true);
      });
    

    });
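    The `callTool` helper used by these tests is assumed rather than defined in this skill. One plausible sketch: invoke the tool handler directly and normalize thrown errors into the MCP-style `{ isError, content }` result shape (the `handlers` registry and its `create_project` body are illustrative stand-ins, not the real implementation):

    ```typescript
    // Hypothetical callTool test helper assumed by the handler tests above.
    type ToolResult = { isError: boolean; content: { type: 'text'; text: string }[] };
    type ToolHandler = (args: Record<string, unknown>) => Promise<string>;

    // Toy registry standing in for the real tool implementations under test.
    const handlers: Record<string, ToolHandler> = {
      create_project: async (args) => {
        if (typeof args.name !== 'string' || !/^[a-z0-9-]+$/.test(args.name)) {
          throw new Error('invalid project name');
        }
        if (typeof args.template !== 'string') {
          throw new Error('invalid template');
        }
        return `Project ${args.name} created`;
      },
    };

    async function callTool(name: string, args: Record<string, unknown>): Promise<ToolResult> {
      try {
        const text = await handlers[name](args);
        return { isError: false, content: [{ type: 'text', text }] };
      } catch (err) {
        // Errors become isError results, mirroring how an MCP server reports tool failures
        return { isError: true, content: [{ type: 'text', text: String(err) }] };
      }
    }
    ```

    Keeping this normalization in one helper means every handler test can assert on the same result shape instead of wrapping each call in try/catch.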

  • name: Integration Tests
    description: Test full request/response cycle
    when: Testing complete MCP server
    example: |
    import { Client } from '@modelcontextprotocol/sdk/client/index.js';
    import { StdioClientTransport } from '@modelcontextprotocol/sdk/client/stdio.js';

    describe('MCP Server Integration', () => {
      let client: Client;

      beforeAll(async () => {
          const transport = new StdioClientTransport({
              command: 'node',
              args: ['dist/index.js']
          });
          client = new Client({ name: 'test-client', version: '1.0.0' });
          await client.connect(transport);
      });
    
      afterAll(async () => {
          await client.close();
      });
    
      it('lists tools successfully', async () => {
          const result = await client.listTools();
          expect(result.tools.length).toBeGreaterThan(0);
      });
    
      it('executes tool and returns result', async () => {
          const result = await client.callTool({ name: 'health_check', arguments: {} });
          expect(result.content[0].text).toContain('healthy');
      });
    
      it('lists resources successfully', async () => {
          const result = await client.listResources();
          expect(result.resources.length).toBeGreaterThan(0);
      });
    
      it('reads resource successfully', async () => {
          const result = await client.readResource({ uri: 'config://settings' });
          expect(result.contents[0].text).toBeTruthy();
      });
    

    });
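    Per principle 4, the same harness can be pointed at deliberately hostile, AI-like inputs. A minimal sketch of such a payload set (`validateProjectName` is a hypothetical stand-in for a tool's real input validation; the list is illustrative, not exhaustive):

    ```typescript
    // Hostile, AI-like inputs that every tool's validation should reject
    // cleanly instead of crashing the server.
    function validateProjectName(value: unknown): boolean {
      return typeof value === 'string' && /^[a-z0-9-]{1,64}$/.test(value);
    }

    const hostileInputs: unknown[] = [
      '../../etc/passwd',              // path traversal
      "x'; DROP TABLE projects;--",    // injection-style string
      'a'.repeat(100_000),             // oversized input
      null,                            // wrong type
      { nested: 'object' },            // unexpected shape
    ];
    ```

    A security test then asserts every entry produces a clean validation error, never an unhandled exception.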

anti_patterns:

  • name: Testing Happy Path Only
    description: Only testing successful scenarios
    why: AI sends unexpected inputs, and edge cases are common
    instead: Test errors, edge cases, and boundary conditions.

  • name: Mocking Everything
    description: Unit tests that mock all dependencies
    why: MCP is about integration; mocks hide real issues
    instead: Favor integration tests over heavily mocked unit tests.

  • name: No Schema Tests
    description: Skipping JSON Schema validation tests
    why: Invalid schemas cause runtime failures
    instead: Test schema validity and completeness first.

handoffs:

  • trigger: server implementation
    to: mcp-server-development
    context: Need server architecture

  • trigger: security testing
    to: mcp-security
    context: Need security test patterns

  • trigger: deployment
    to: mcp-deployment
    context: Need CI/CD testing