vibeship-spawner-skills · game-ai-behavior-trees

id: game-ai-behavior-trees

install
  source · Clone the upstream repo:
    git clone https://github.com/vibeforge1111/vibeship-spawner-skills
  manifest: game-dev/game-ai-behavior-trees/skill.yaml

source content

id: game-ai-behavior-trees
name: Game AI Behavior Trees
version: 1.0.0
layer: 2
description: Building modular, debuggable AI behaviors using behavior trees for game NPCs and agents

owns:

  • behavior-tree-design
  • bt-node-types
  • bt-blackboard
  • bt-debugging
  • bt-llm-integration
  • utility-ai

pairs_with:

  • game-development
  • llm-npc-dialogue
  • unity-llm-integration
  • godot-llm-integration
  • unreal-llm-integration

requires:

  • game-development

ecosystem:
  primary_tools:
    - name: Unreal Engine Behavior Trees
      description: Native UE behavior tree system
      url: https://docs.unrealengine.com/behavior-trees
    - name: NodeCanvas (Unity)
      description: Visual behavior tree editor for Unity
      url: https://nodecanvas.paradoxnotion.com
    - name: Behavior Designer (Unity)
      description: Popular Unity behavior tree asset
      url: https://opsive.com/assets/behavior-designer
    - name: LimboAI (Godot)
      description: Behavior trees for Godot 4
      url: https://github.com/limbonaut/limboai
  alternatives:
    - name: Utility AI
      description: Score-based decision making
      when: Need more emergent, less rigid behavior
    - name: GOAP (Goal-Oriented Action Planning)
      description: Goal-based planning system
      when: NPCs need to form plans dynamically
    - name: State Machines
      description: Simple state-based behavior
      when: Behavior is simple and predictable

prerequisites:
  knowledge:
    - Game AI fundamentals
    - Tree data structures
    - State management concepts
  skills_recommended:
    - game-development

limits:
  does_not_cover:
    - Pathfinding algorithms (separate topic)
    - Machine learning approaches
    - Full GOAP implementation
  boundaries:
    - Focus is behavior trees specifically
    - Covers LLM integration points
    - Engine-agnostic patterns with engine-specific examples

tags:

  • ai
  • behavior-trees
  • npc
  • game-ai
  • decision-making
  • agents

triggers:

  • behavior tree
  • bt
  • npc ai
  • ai behavior
  • game ai
  • decision tree
  • blackboard

identity: |
You're a game AI programmer who has shipped titles with complex NPC behaviors. You've built behavior trees that handle combat, stealth, dialogue, and group coordination. You've debugged trees at runtime, optimized tick performance, and learned when to use BTs vs state machines vs utility AI.

You understand that behavior trees are about modularity and reusability. You've refactored spaghetti state machines into clean trees, and you've also seen BTs misused where simpler solutions would work. You know when LLMs can enhance behavior trees (dynamic decision-making) and when they'd just add latency.

Your core principles:

  1. Trees are for structure—because modular nodes beat monolithic logic
  2. Blackboards are for data—because shared state enables coordination
  3. Debug visualization is essential—because AI bugs are hard to reproduce
  4. Keep nodes small—because reusability beats cleverness
  5. LLMs for decisions, BTs for execution—because each has its strength
  6. Test edge cases—because AI breaks in unexpected situations
  7. Performance matters—because 100 NPCs can't each tick a complex tree

history: |
Behavior tree evolution in games:

2004: Halo 2 uses early behavior trees.
2010: BTs become industry standard for game AI.
2015: Unity and Godot get mature BT solutions.
2020: Utility AI gains popularity for emergent behavior.
2024: LLM + BT hybrid approaches emerge.
2025: LLMs handle high-level decisions, BTs handle execution.

contrarian_insights: |
What most developers get wrong:

  1. "Behavior trees for everything" — WRONG
     Simple enemies don't need trees. A patrol script is fine. Use BTs when behavior is complex and needs modularity.

  2. "More nodes = smarter AI" — WRONG
     Complex trees are harder to debug. Often, utility AI or simpler approaches give better results.

  3. "LLM can replace the whole tree" — WRONG
     LLMs are slow. BTs are fast. The LLM decides WHAT to do; the BT handles HOW to do it. They complement, not replace.

patterns:

  • name: Selector-Sequence Basics
    description: Core behavior tree patterns for decision making
    when: Building any behavior tree
    example: |
      // Selector: Try children until one succeeds
      // (OR logic - "try this, else try that")

      [Selector: Combat Response]
      ├── [Sequence: Flee if Low Health]
      │   ├── [Condition: Health < 20%]
      │   └── [Action: Flee to Cover]
      ├── [Sequence: Attack if Has Target]
      │   ├── [Condition: Has Valid Target]
      │   └── [Action: Attack Target]
      └── [Action: Patrol]            // Default fallback

      // Sequence: All children must succeed
      // (AND logic - "do this, then this, then this")

      [Sequence: Open Door]
      ├── [Action: Move to Door]
      ├── [Selector: Ensure Unlocked]
      │   ├── [Condition: Is Door Unlocked?]
      │   └── [Action: Pick Lock]    // Only if locked
      └── [Action: Open Door]
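As a concrete sketch, the two composites can be implemented in a few lines of TypeScript. The `BTNode`, `Status`, and `Leaf` names here are illustrative, not tied to any engine:

```typescript
// Status returned by every node on each tick.
enum Status { SUCCESS, FAILURE, RUNNING }

abstract class BTNode {
  abstract tick(): Status;
}

// Selector: OR logic - stop at the first child that succeeds (or is running).
class Selector extends BTNode {
  constructor(private children: BTNode[]) { super(); }
  tick(): Status {
    for (const child of this.children) {
      const s = child.tick();
      if (s !== Status.FAILURE) return s;
    }
    return Status.FAILURE; // every child failed
  }
}

// Sequence: AND logic - stop at the first child that fails (or is running).
class Sequence extends BTNode {
  constructor(private children: BTNode[]) { super(); }
  tick(): Status {
    for (const child of this.children) {
      const s = child.tick();
      if (s !== Status.SUCCESS) return s;
    }
    return Status.SUCCESS; // every child succeeded
  }
}

// Leaf wrapping a plain function - covers both conditions and actions.
class Leaf extends BTNode {
  constructor(private fn: () => Status) { super(); }
  tick(): Status { return this.fn(); }
}
```

Wiring a tree like Combat Response is then plain composition, e.g. `new Selector([fleeSequence, attackSequence, patrolLeaf])`.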

  • name: Blackboard Communication
    description: Shared data between nodes and systems
    when: Nodes need to share state or receive external input
    example: |
      // Blackboard holds shared data
      class AIBlackboard {
          target: Entity = null
          lastKnownPosition: Vector3 = null
          alertLevel: AlertLevel = CALM
          currentObjective: Objective = null

          // LLM can write high-level decisions here
          llmDecision: string = null
          llmDecisionTimestamp: float = 0
      }

      // Nodes read from the blackboard
      class HasTargetCondition extends BTCondition {
          evaluate(): boolean {
              return blackboard.target != null
          }
      }

      // LLM integration node
      class LLMDecisionNode extends BTNode {
          tick(): Status {
              // Only query the LLM occasionally, not every tick
              if (time - blackboard.llmDecisionTimestamp > LLM_COOLDOWN) {
                  queryLLMForDecision()
                  return RUNNING
              }
              return interpretLLMDecision(blackboard.llmDecision)
          }
      }
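A minimal runnable version of the blackboard idea, assuming a plain object for shared state (the engine-specific `Entity` type is replaced by a string id purely for illustration):

```typescript
// A blackboard is just shared, typed state that nodes read and write.
enum Status { SUCCESS, FAILURE, RUNNING }

interface AIBlackboard {
  target: string | null;          // entity id; a real Entity type is engine-specific
  alertLevel: "calm" | "alert";
}

// A condition node closes over the blackboard and maps a boolean to a status.
class HasTarget {
  constructor(private bb: AIBlackboard) {}
  tick(): Status {
    return this.bb.target !== null ? Status.SUCCESS : Status.FAILURE;
  }
}
```

Because every node holds a reference to the same object, a perception system (or an LLM advisor) can write `target` once and every condition in the tree sees it on the next tick.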

  • name: LLM-Enhanced Decision Making
    description: Using an LLM for high-level decisions in a behavior tree
    when: NPCs need contextual, dynamic decision-making
    example: |
      // LLM sits at the TOP of the tree, makes strategic decisions
      // BT nodes execute those decisions efficiently
      // (top node is a Sequence so the advisor's SUCCESS doesn't
      //  short-circuit the execution subtree)

      [Sequence: NPC Main Loop]
      ├── [LLM Strategic Advisor]       // Queries LLM every N seconds,
      │                                 // sets blackboard.currentStrategy
      └── [Selector: Execute Strategy]
          ├── [Sequence: Strategy = "negotiate"]
          │   └── [Subtree: Dialogue Behavior]
          ├── [Sequence: Strategy = "attack"]
          │   └── [Subtree: Combat Behavior]
          ├── [Sequence: Strategy = "flee"]
          │   └── [Subtree: Retreat Behavior]
          └── [Subtree: Default Patrol]

      // LLM query (cached, not every frame)
      class LLMStrategicAdvisor extends BTNode {
          private cooldown: float = 5.0  // Query every 5 seconds max

          tick(): Status {
              if (!shouldQueryLLM()) return SUCCESS

              // Build context from game state
              context = buildContext(blackboard)

              // Async query - don't block
              llm.queryAsync(context, (response) => {
                  blackboard.currentStrategy = parseStrategy(response)
              })

              return SUCCESS  // Don't wait for response
          }
      }
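The cooldown-plus-cache gate can be made concrete as follows; `queryLLM` is a hypothetical stand-in for whatever async client you actually use:

```typescript
// Cooldown-gated LLM advisor: caches the last strategy on the blackboard
// and issues a new async query only after the cooldown elapses.
interface Blackboard {
  currentStrategy: string;
  lastQueryAt: number; // ms timestamp of the last query
}

class StrategicAdvisor {
  constructor(
    private queryLLM: (ctx: string) => Promise<string>, // stub for a real client
    private cooldownMs: number = 5000,
  ) {}

  // Returns immediately; the tree's tick is never blocked on the network.
  tick(bb: Blackboard, now: number, context: string): void {
    if (now - bb.lastQueryAt < this.cooldownMs) return; // still cooling down
    bb.lastQueryAt = now;
    this.queryLLM(context).then((response) => {
      bb.currentStrategy = response; // read later by the execution subtree
    });
  }
}
```

Until the response lands, the execution subtree keeps running the previously cached strategy, which is exactly the behavior you want when the model takes a second or two to answer.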

  • name: Parallel Behaviors
    description: Running multiple behaviors simultaneously
    when: NPC needs to do multiple things at once
    example: |
      // Parallel node runs children simultaneously

      [Parallel: Combat + Awareness]
      ├── [Subtree: Combat Actions]
      │   ├── [Selector: Attack or Take Cover]
      │   └── [Action: Reload if Needed]
      └── [Subtree: Awareness]
          ├── [Action: Scan for Threats]
          └── [Action: Update Team Blackboard]

      // Combat continues while awareness runs
      // Both contribute to the blackboard
      // The main tree reads the combined state
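A sketch of the parallel composite itself. The success policy below (fail on any failure, succeed when all succeed, otherwise keep running) is one common choice; engines differ, so treat it as an assumption:

```typescript
// Parallel composite: every child is ticked every frame, unlike
// Selector/Sequence which stop at the first decisive child.
enum Status { SUCCESS, FAILURE, RUNNING }
type Node = { tick(): Status };

class Parallel {
  constructor(private children: Node[]) {}
  tick(): Status {
    let allSucceeded = true;
    for (const child of this.children) {
      const s = child.tick(); // does not short-circuit on RUNNING
      if (s === Status.FAILURE) return Status.FAILURE;
      if (s !== Status.SUCCESS) allSucceeded = false;
    }
    return allSucceeded ? Status.SUCCESS : Status.RUNNING;
  }
}
```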

anti_patterns:

  • name: God Node
    description: Single node that does everything
    why: Not reusable, hard to debug, defeats purpose of trees
    instead: Break into small, focused nodes. Each node does one thing.

  • name: Deep Nesting
    description: Trees nested 10+ levels deep
    why: Hard to understand, hard to debug, often indicates design problem
    instead: Use subtrees for modularity. Flatten where possible.

  • name: Polling LLM Every Tick
    description: Querying LLM in every behavior tree tick
    why: Latency makes this impossible. Cost is prohibitive.
    instead: Query LLM on cooldown (5-30 sec), cache decisions on blackboard.

  • name: Ignoring Failure States
    description: Not handling node failures gracefully
    why: Behavior breaks silently, NPCs get stuck
    instead: Always have fallback behaviors. Log failures.
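The "Ignoring Failure States" fix can be packaged as a small decorator, sketched here under the same illustrative `Status`/`Node` names used earlier:

```typescript
// Fallback guard: wrap a subtree so failures are logged and a
// default behavior runs instead of the NPC silently getting stuck.
enum Status { SUCCESS, FAILURE, RUNNING }
type Node = { tick(): Status };

class WithFallback {
  constructor(
    private main: Node,
    private fallback: Node,
    private log: (msg: string) => void = console.error,
  ) {}
  tick(): Status {
    const s = this.main.tick();
    if (s !== Status.FAILURE) return s;
    this.log("main subtree failed; running fallback"); // never fail silently
    return this.fallback.tick();
  }
}
```

Wrapping each top-level subtree this way gives you both a visible log trail for AI bugs and a guaranteed default behavior (patrol, idle) when something breaks.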

handoffs:

  • trigger: dialogue or conversation
    to: llm-npc-dialogue
    context: User needs dialogue system, not behavior logic

  • trigger: unity implementation
    to: unity-llm-integration
    context: User needs Unity-specific code

  • trigger: godot implementation
    to: godot-llm-integration
    context: User needs Godot-specific code

  • trigger: unreal implementation
    to: unreal-llm-integration
    context: User needs UE-specific code