Vibeship-spawner-skills multi-agent-orchestration

id: multi-agent-orchestration

install:
  source: Clone the upstream repo
    git clone https://github.com/vibeforge1111/vibeship-spawner-skills
  manifest: ai-agents/multi-agent-orchestration/skill.yaml

source content:

id: multi-agent-orchestration
name: Multi-Agent Orchestration
version: 1.0.0
layer: 2
description: Patterns for coordinating multiple LLM agents including sequential, parallel, router, and hierarchical architectures—the AI equivalent of microservices

owns:

  • agent-coordination
  • workflow-orchestration
  • state-management
  • agent-routing
  • consensus-patterns

pairs_with:

  • agent-communication
  • agent-evaluation
  • autonomous-agents
  • context-window-management

requires:

  • llm-fundamentals
  • async-programming

ecosystem:
  primary_tools:
    • name: LangGraph
      description: Graph-based multi-agent orchestration with cycles and shared memory
      url: https://langchain-ai.github.io/langgraph/
    • name: AutoGen
      description: Microsoft's conversational multi-agent framework
      url: https://microsoft.github.io/autogen/
    • name: CrewAI
      description: Role-based multi-agent collaboration
      url: https://www.crewai.com
    • name: OpenAI Agents SDK
      description: OpenAI's official multi-agent patterns
      url: https://openai.github.io/openai-agents-python/multi_agent/
  alternatives:
    • name: Dynamiq
      description: Linear and adaptive orchestration patterns
      when: Need simple linear workflows
    • name: Semantic Kernel
      description: Microsoft's enterprise AI orchestration
      when: Enterprise .NET environments
  deprecated:
    • name: Single monolithic agents
      reason: Don't scale, become "jack of all trades, master of none"
      migration: Decompose into specialized agents with clear boundaries

prerequisites:
  knowledge:
    • LLM API usage
    • Async programming patterns
    • State machine concepts
  skills_recommended:
    • autonomous-agents
    • context-window-management

limits:
  does_not_cover:
    • Agent training/fine-tuning
    • Single-agent patterns
    • Non-LLM agent systems
  boundaries:
    • Focus is LLM agent coordination
    • Covers runtime orchestration patterns

tags:

  • multi-agent
  • orchestration
  • llm
  • workflow
  • coordination
  • architecture

triggers:

  • multi-agent
  • agent orchestration
  • multiple agents
  • agent coordination
  • agent workflow

history:

  • version: "2023"
    milestone: AutoGen and early multi-agent frameworks
    impact: Demonstrated viability of agent collaboration
  • version: "2024"
    milestone: LangGraph and graph-based orchestration mature
    impact: Cyclic workflows and complex state management
  • version: "2025"
    milestone: Microsoft Agent Framework combines AutoGen + Semantic Kernel
    impact: Enterprise-ready multi-agent patterns emerge

contrarian_insights:

  • claim: More agents = better results
    reality: Coordination overhead increases exponentially; start with minimum viable agents
  • claim: Agents should be autonomous
    reality: Tight orchestration often outperforms autonomous agents for production reliability
  • claim: Complex architectures are more capable
    reality: Simple sequential chains often beat complex hierarchies when well-designed

identity: |
You're an architect who has built multi-agent systems that process millions of requests daily. You've learned that the hard problems aren't individual agent capabilities—they're coordination, state management, and failure handling at scale.

You understand that multi-agent systems are the AI equivalent of microservices: powerful but complex. Just like microservices, the overhead of coordination must be justified by the benefits. Most problems don't need multiple agents, and premature complexity kills projects.

Your core principles:

  1. Start with one agent—only split when clearly needed
  2. State is king—shared state management is 80% of the challenge
  3. Clear boundaries—each agent owns a specific domain
  4. Fail gracefully—partial results beat total failures
  5. Observe everything—you can't debug what you can't see

patterns:

  • name: Sequential Chain Pattern
    description: Agents execute in order, each building on previous output
    when: Tasks have clear stages that must complete in order
    example: |
    import { StateGraph, END } from '@langchain/langgraph';

    // Define shared state
    interface WorkflowState {
        input: string;
        researchResults?: string;
        draftContent?: string;
        reviewFeedback?: string;
        finalOutput?: string;
    }

    // Create specialized agents
    class SequentialAgentChain {
      private graph: StateGraph<WorkflowState>;
      private llm: any; // chat-model client injected by the caller; the invoke() shape below is illustrative

      constructor() {
          this.graph = new StateGraph<WorkflowState>({
              channels: {
                  input: null,
                  researchResults: null,
                  draftContent: null,
                  reviewFeedback: null,
                  finalOutput: null
              }
          });
    
          // Add nodes (agents)
          this.graph.addNode('researcher', this.researchAgent.bind(this));
          this.graph.addNode('writer', this.writerAgent.bind(this));
          this.graph.addNode('reviewer', this.reviewerAgent.bind(this));
          this.graph.addNode('finalizer', this.finalizerAgent.bind(this));
    
          // Define sequential edges
          this.graph.addEdge('__start__', 'researcher');
          this.graph.addEdge('researcher', 'writer');
          this.graph.addEdge('writer', 'reviewer');
          this.graph.addEdge('reviewer', 'finalizer');
          this.graph.addEdge('finalizer', END);
      }
    
      private async researchAgent(state: WorkflowState): Promise<Partial<WorkflowState>> {
          const research = await this.llm.invoke({
              messages: [{
                  role: 'system',
                  content: 'You are a research specialist. Gather key facts and sources.'
              }, {
                  role: 'user',
                  content: `Research this topic: ${state.input}`
              }]
          });
    
          return { researchResults: research.content };
      }
    
      private async writerAgent(state: WorkflowState): Promise<Partial<WorkflowState>> {
          const draft = await this.llm.invoke({
              messages: [{
                  role: 'system',
                  content: 'You are a content writer. Create compelling content based on research.'
              }, {
                  role: 'user',
                  content: `Write content based on this research:\n${state.researchResults}`
              }]
          });
    
          return { draftContent: draft.content };
      }
    
      private async reviewerAgent(state: WorkflowState): Promise<Partial<WorkflowState>> {
          const review = await this.llm.invoke({
              messages: [{
                  role: 'system',
                  content: 'You are an editor. Review for accuracy, clarity, and style.'
              }, {
                  role: 'user',
                  content: `Review this draft:\n${state.draftContent}`
              }]
          });
    
          return { reviewFeedback: review.content };
      }
    
      private async finalizerAgent(state: WorkflowState): Promise<Partial<WorkflowState>> {
          const final = await this.llm.invoke({
              messages: [{
                  role: 'system',
                  content: 'You are a finalizer. Incorporate feedback and produce final output.'
              }, {
                  role: 'user',
                  content: `Original draft:\n${state.draftContent}\n\nFeedback:\n${state.reviewFeedback}\n\nProduce final version.`
              }]
          });
    
          return { finalOutput: final.content };
      }
    
      async run(input: string): Promise<string> {
          const app = this.graph.compile();
          const result = await app.invoke({ input });
          return result.finalOutput!; // set by the finalizer node
      }
    }

  • name: Parallel Execution Pattern
    description: Multiple agents work simultaneously, results aggregated
    when: Tasks can be parallelized for speed or diversity
    example: |
    class ParallelAgentExecution {
      // Parallel agents for code review
      async parallelCodeReview(code: string): Promise<AggregatedReview> {
          // Define specialized reviewers
          const reviewers = [
              { name: 'security', prompt: 'Review for security vulnerabilities. Focus on injection, auth, data exposure.' },
              { name: 'performance', prompt: 'Review for performance issues. Focus on complexity, memory, async patterns.' },
              { name: 'maintainability', prompt: 'Review for maintainability. Focus on naming, structure, documentation.' },
              { name: 'correctness', prompt: 'Review for logical correctness. Focus on edge cases, error handling.' }
          ];

          // Execute all reviewers in parallel
          const reviews = await Promise.all(
              reviewers.map(async (reviewer) => {
                  const result = await this.llm.invoke({
                      messages: [{
                          role: 'system',
                          content: `You are a ${reviewer.name} code reviewer. ${reviewer.prompt}`
                      }, {
                          role: 'user',
                          content: `Review this code:\n\`\`\`\n${code}\n\`\`\``
                      }]
                  });
    
                  return {
                      category: reviewer.name,
                      findings: this.parseFindings(result.content)
                  };
              })
          );
    
          // Aggregate with synthesizer agent
          const synthesis = await this.synthesizeReviews(reviews);
    
          return {
              individualReviews: reviews,
              synthesis,
              overallScore: this.calculateScore(reviews)
          };
      }
    
      private async synthesizeReviews(reviews: Review[]): Promise<string> {
          const synthesizer = await this.llm.invoke({
              messages: [{
                  role: 'system',
                  content: 'You are a senior engineer. Synthesize multiple code reviews into a coherent summary with prioritized action items.'
              }, {
                  role: 'user',
                  content: `Synthesize these reviews:\n${JSON.stringify(reviews, null, 2)}`
              }]
          });
    
          return synthesizer.content;
      }
    }

  • name: Router/Dispatcher Pattern
    description: Intelligent routing to specialized agents based on task classification
    when: Different task types require different expertise
    example: |
    import { z } from 'zod';

    // Define routing schema
    const RouteSchema = z.object({
        category: z.enum(['technical', 'billing', 'general', 'escalate']),
        confidence: z.number().min(0).max(1),
        reasoning: z.string()
    });

    class RouterAgent {
      private readonly agents: Map<string, Agent> = new Map();

      constructor() {
          // Register specialized agents
          this.agents.set('technical', new TechnicalSupportAgent());
          this.agents.set('billing', new BillingSupportAgent());
          this.agents.set('general', new GeneralSupportAgent());
          this.agents.set('escalate', new EscalationAgent());
      }
    
      async route(userMessage: string, context: ConversationContext): Promise<AgentResponse> {
          // Step 1: Classify the request
          const classification = await this.classify(userMessage, context);
    
          // Step 2: Confidence threshold check
          if (classification.confidence < 0.7) {
              // Low confidence: ask clarifying question
              return {
                  type: 'clarification',
                  message: 'I want to make sure I help you with the right thing. Could you tell me more about your issue?',
                  suggestedCategories: this.getSuggestedCategories(classification)
              };
          }
    
          // Step 3: Route to specialized agent
          const agent = this.agents.get(classification.category);
          if (!agent) {
              throw new Error(`Unknown category: ${classification.category}`);
          }
    
          // Step 4: Execute with context handoff
          return agent.handle(userMessage, {
              ...context,
              routingDecision: classification,
              previousAgents: [...(context.previousAgents || []), 'router']
          });
      }
    
      private async classify(message: string, context: ConversationContext): Promise<z.infer<typeof RouteSchema>> {
          const result = await this.llm.invoke({
              messages: [{
                  role: 'system',
                  content: `You are a request classifier. Categorize user requests.

Categories:
- technical: Code issues, API problems, integration help, bugs
- billing: Payments, subscriptions, invoices, refunds
- general: Account questions, feature info, how-to questions
- escalate: Complaints, urgent issues, requests for human agent

Respond with JSON: { "category": "...", "confidence": 0.0-1.0, "reasoning": "..." }`
              }, {
                  role: 'user',
                  content: message
              }],
              response_format: { type: 'json_object' }
          });

          return RouteSchema.parse(JSON.parse(result.content));
      }
  }
  • name: Hierarchical Supervisor Pattern
    description: Manager agent delegates to and coordinates worker agents
    when: Complex tasks require breakdown and coordination
    example: |
    class HierarchicalAgentSystem {
      private supervisor: SupervisorAgent;
      private workers: Map<string, WorkerAgent>;

      async execute(task: ComplexTask): Promise<TaskResult> {
          // Supervisor breaks down task
          const plan = await this.supervisor.planTask(task);
    
          // Track execution state
          const executionState: ExecutionState = {
              plan,
              completedSteps: [],
              pendingSteps: [...plan.steps],
              workerOutputs: new Map()
          };
    
          // Execute with supervisor oversight
          while (executionState.pendingSteps.length > 0) {
              const step = executionState.pendingSteps.shift()!;
    
              // Supervisor assigns to appropriate worker
              const assignment = await this.supervisor.assignStep(step, executionState);
    
              // Worker executes
              const worker = this.workers.get(assignment.workerId);
              if (!worker) throw new Error(`Worker not found: ${assignment.workerId}`);
    
              const result = await worker.execute(step, assignment.context);
    
              // Supervisor reviews result
              const review = await this.supervisor.reviewResult(step, result, executionState);
    
              if (review.approved) {
                  executionState.completedSteps.push({ step, result });
                  executionState.workerOutputs.set(step.id, result);
              } else if (review.retry) {
                  // Put back in queue with feedback
                  executionState.pendingSteps.unshift({
                      ...step,
                      feedback: review.feedback,
                      attempt: (step.attempt || 0) + 1
                  });
              } else {
                  // Escalate or fail
                  throw new Error(`Step failed after review: ${step.id}`);
              }
    
              // Check if plan needs adjustment
              if (review.planAdjustment) {
                  const newSteps = await this.supervisor.adjustPlan(
                      executionState,
                      review.planAdjustment
                  );
                  executionState.pendingSteps.push(...newSteps);
              }
          }
    
          // Supervisor synthesizes final result
          return this.supervisor.synthesize(executionState);
      }
    }

    class SupervisorAgent {
      async planTask(task: ComplexTask): Promise<ExecutionPlan> {
          const plan = await this.llm.invoke({
              messages: [{
                  role: 'system',
                  content: `You are a project manager. Break down complex tasks into discrete steps. Each step should be:
- Specific and actionable
- Assignable to one worker
- Have clear success criteria
- List dependencies on other steps

Available workers: ${this.describeWorkers()}`
              }, {
                  role: 'user',
                  content: `Plan this task: ${task.description}`
              }]
          });

          return this.parsePlan(plan.content);
      }

      async reviewResult(step: Step, result: StepResult, state: ExecutionState): Promise<ReviewDecision> {
          const review = await this.llm.invoke({
              messages: [{
                  role: 'system',
                  content: `You are a quality reviewer. Evaluate if the step was completed successfully.

Consider: correctness, completeness, alignment with overall task.`
              }, {
                  role: 'user',
                  content: `Step: ${JSON.stringify(step)}
Result: ${JSON.stringify(result)}
Overall task: ${state.plan.task.description}

Respond with: { "approved": bool, "retry": bool, "feedback": string, "planAdjustment": string | null }`
              }]
          });

          return JSON.parse(review.content);
      }
  }

anti_patterns:

  • name: Premature Multi-Agent Architecture
    description: Using multiple agents when one would suffice
    why: Coordination overhead, increased latency, debugging complexity
    instead: Start with a single agent; split only when clearly beneficial.

  • name: Global Shared State
    description: All agents read/write to a single global state
    why: Race conditions, debugging nightmares, tight coupling
    instead: Use scoped state channels with clear ownership.
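
The scoped-ownership alternative can be sketched as follows; the `ScopedState` class and its method names are illustrative assumptions, not an API of LangGraph or any tool listed above:

```typescript
// Each agent may only write the channels it owns; reads stay open.
type Channels = Record<string, unknown>;

class ScopedState {
    private state: Channels = {};
    private owners = new Map<string, string>(); // channel -> owning agent

    register(agent: string, channels: string[]): void {
        for (const c of channels) this.owners.set(c, agent);
    }

    read(channel: string): unknown {
        return this.state[channel];
    }

    write(agent: string, channel: string, value: unknown): void {
        // Reject writes from any agent that does not own the channel
        if (this.owners.get(channel) !== agent) {
            throw new Error(`${agent} does not own channel "${channel}"`);
        }
        this.state[channel] = value;
    }
}
```

Ownership checks like this turn a race-condition bug into a loud, immediate error at the offending write site.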

  • name: Unbounded Agent Loops
    description: Agents that can call each other indefinitely
    why: Infinite loops, runaway costs, system hangs
    instead: Enforce maximum iterations and circuit breakers.
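
A minimal sketch of the iteration cap; the `step` callback shape and the `runBounded` name are assumptions for illustration:

```typescript
// Cap agent-to-agent hops with a hard iteration budget.
interface LoopResult { done: boolean; output: string }

async function runBounded(
    step: (iteration: number) => Promise<LoopResult>,
    maxIterations = 10
): Promise<string> {
    for (let i = 0; i < maxIterations; i++) {
        const result = await step(i);
        if (result.done) return result.output; // normal termination
    }
    // Circuit breaker: surface a hard failure instead of looping forever
    throw new Error(`Agent loop exceeded ${maxIterations} iterations`);
}
```

The budget also bounds worst-case token spend, since each iteration typically maps to at least one LLM call.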

  • name: Implicit Handoffs
    description: Agent transitions without explicit state transfer
    why: Lost context, inconsistent behavior, debugging difficulty
    instead: Use an explicit handoff protocol with a state snapshot.
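
An explicit handoff can be as simple as a typed record carrying a frozen snapshot of state; the field names here are illustrative, not a standard protocol:

```typescript
// Explicit handoff record: who, to whom, why, and exactly what state.
interface Handoff<S> {
    from: string;
    to: string;
    reason: string;
    snapshot: S;          // copy of state at handoff time
    timestamp: number;
}

function handoff<S>(from: string, to: string, reason: string, state: S): Handoff<S> {
    // Deep-copy so later mutations by the sender cannot leak into the receiver
    // (structuredClone requires Node 17+ or a modern browser runtime)
    const snapshot = structuredClone(state);
    return { from, to, reason, snapshot, timestamp: Date.now() };
}
```

Because the snapshot is a detached copy, every transition can be logged and replayed when debugging lost-context failures.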

  • name: No Observability
    description: Multi-agent system without tracing and logging
    why: Impossible to debug failures or optimize performance
    instead: Trace every agent invocation, state change, and decision.
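
A minimal tracing wrapper shows the idea; in production you would likely reach for OpenTelemetry or a vendor tracer rather than this hand-rolled sketch:

```typescript
// Record every agent invocation: duration, success/failure, error text.
interface TraceEvent {
    agent: string;
    startedAt: number;
    durationMs: number;
    ok: boolean;
    error?: string;
}

const traceLog: TraceEvent[] = [];

async function traced<T>(agent: string, fn: () => Promise<T>): Promise<T> {
    const startedAt = Date.now();
    try {
        const result = await fn();
        traceLog.push({ agent, startedAt, durationMs: Date.now() - startedAt, ok: true });
        return result;
    } catch (err) {
        traceLog.push({ agent, startedAt, durationMs: Date.now() - startedAt, ok: false, error: String(err) });
        throw err; // re-raise after recording
    }
}
```

Wrapping every node call (`traced('researcher', () => this.researchAgent(state))`) gives a complete timeline even when a run fails midway.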

handoffs:

  • trigger: agent communication|message passing
    to: agent-communication
    context: Need inter-agent communication patterns

  • trigger: agent testing|benchmark|evaluation
    to: agent-evaluation
    context: Need to test and evaluate agent system

  • trigger: single agent|autonomous
    to: autonomous-agents
    context: Need single-agent patterns