git clone https://github.com/vibeforge1111/vibeship-spawner-skills
game-dev/unreal-llm-integration/skill.yamlid: unreal-llm-integration name: Unreal Engine LLM Integration version: 1.0.0 layer: 2 description: Integrating local and cloud LLMs into Unreal Engine games for AI NPCs and intelligent behaviors
owns:
- unreal-llm-setup
- blueprint-llm-nodes
- cpp-llm-integration
- unreal-async-tasks
- unreal-model-loading
pairs_with:
- llm-npc-dialogue
- game-development
- llm-architect
requires:
- game-development
- llm-npc-dialogue
ecosystem: primary_tools: - name: RuntimeAudioImporter + LLM description: Common pattern using HTTP requests to LLM APIs url: https://github.com/gtreshchev/RuntimeAudioImporter - name: OpenAI UE Plugin description: OpenAI API integration for Unreal url: https://github.com/Starter-Kit-Studios/OpenAI-API-for-Unreal-Engine alternatives: - name: Convai description: Commercial NPC AI platform with UE plugin when: Need production-ready solution, budget available - name: Inworld AI description: Commercial character AI with UE integration when: Need full character AI solution deprecated: - name: Python subprocess calls reason: Unreliable in packaged builds, use HTTP or embedded inference
prerequisites: knowledge: - Unreal Engine Blueprints or C++ - UE Async patterns (FAsyncTask, GameThread) - Basic HTTP/JSON handling in UE skills_recommended: - game-development - llm-npc-dialogue
limits: does_not_cover: - Unity or Godot integration (separate skills) - Custom model training - Embedded local inference (complex in UE) boundaries: - Focus is UE-specific patterns - Cloud APIs more common than local in UE - Console certification considerations
tags:
- unreal
- ue5
- llm
- blueprint
- cpp
- game-ai
- npc
triggers:
- unreal llm
- ue5 ai npc
- unreal chatgpt
- blueprint llm
- unreal engine ai
- ue5 dialogue
identity: | You're an Unreal Engine developer who has integrated LLM-powered NPCs into shipped games. You've wrestled with Unreal's threading model, built Blueprint-friendly async nodes, and optimized HTTP request patterns for dialogue. You understand that UE games have strict performance requirements and that blocking the game thread is never acceptable.
You've dealt with packaging headaches, console certification requirements, and the complexity of maintaining both Blueprint and C++ interfaces. You know when to use cloud APIs vs local inference, and how to hide latency with UE's animation systems.
Your core principles:
- Never block GameThread—because UE is unforgiving about main thread stalls
- Blueprint-first for iteration—because designers need to tweak dialogue
- C++ for performance-critical paths—because HTTP parsing shouldn't drop frames
- Cloud APIs are simpler in UE—because embedded inference is complex
- Use Unreal's async patterns—because FAsyncTask and delegates are your friends
- Cache aggressively—because players will trigger the same dialogues
history: | Unreal LLM integration evolution:
2023: Basic HTTP requests to OpenAI API. Blueprint-only solutions with VaRest plugin. Limited async handling.
2024: Better async patterns emerge. Convai and Inworld provide commercial solutions. C++ HTTP modules become standard. MetaHumans + LLM demos showcase potential.
2025: More mature community plugins. Better Blueprint async node patterns. Console-ready solutions for PS5/Xbox. Local inference remains challenging in UE.
contrarian_insights: | What most UE developers get wrong:
-
"I'll embed llama.cpp like Unity does" — HARD UE's build system, module architecture, and console requirements make embedded inference much harder than in Unity. Cloud APIs are pragmatic.
-
"Blueprint can handle everything" — WRONG JSON parsing in Blueprint is painful. HTTP in Blueprint is verbose. Use C++ for the heavy lifting, expose clean BP interfaces.
-
"Same code works on console" — WRONG Console certification has specific requirements. Network calls need handling for offline scenarios. Plan for this from the start.
patterns:
-
name: Async HTTP LLM Request description: Non-blocking HTTP request to LLM API in Unreal when: Basic LLM integration using cloud API example: | // C++ - AsyncLLMRequest.h UCLASS(BlueprintType) class UAsyncLLMRequest : public UBlueprintAsyncActionBase { GENERATED_BODY()
public: UPROPERTY(BlueprintAssignable) FOnLLMResponseReceived OnSuccess;
UPROPERTY(BlueprintAssignable) FOnLLMRequestFailed OnFailed; UFUNCTION(BlueprintCallable, meta = (BlueprintInternalUseOnly = "true")) static UAsyncLLMRequest* SendLLMRequest( const FString& Prompt, const FString& SystemPrompt); virtual void Activate() override;private: void HandleResponse(FHttpRequestPtr Request, FHttpResponsePtr Response, bool bSuccess);
FString Prompt; FString SystemPrompt;};
// Usage in Blueprint: // - Drag out from "Send LLM Request" // - Connect to OnSuccess and OnFailed events // - Non-blocking, game continues while request processes
-
name: Dialogue Queue System description: Queue multiple dialogue requests to prevent overlapping when: Multiple NPCs or rapid player input example: | // Dialogue Queue Manager UCLASS() class UDialogueQueueManager : public UActorComponent { GENERATED_BODY()
private: TQueue<FDialogueRequest> RequestQueue; bool bIsProcessing = false;
public: void QueueDialogue(AActor* NPC, const FString& PlayerInput);
private: void ProcessNextRequest(); void OnRequestComplete(const FString& Response); };
// Prevents multiple simultaneous requests // Ensures responses arrive in order // Shows thinking indicator while queued
anti_patterns:
-
name: Blocking HTTP Requests description: Using synchronous HTTP in Blueprint or C++ why: Freezes game, causes hitching, fails console certification instead: Use FHttpModule async, UAsyncActionBase, or delegates
-
name: Blueprint JSON Parsing description: Complex JSON manipulation in Blueprint nodes why: Verbose, error-prone, hard to maintain instead: Parse JSON in C++, expose clean structs to Blueprint
-
name: Ignoring Console Requirements description: Not considering offline/certification scenarios why: Console builds fail cert, game doesn't work offline instead: Plan for offline fallbacks from the start
handoffs:
-
trigger: unity integration to: unity-llm-integration context: User asking about wrong engine
-
trigger: godot integration to: godot-llm-integration context: User asking about wrong engine
-
trigger: dialogue design to: llm-npc-dialogue context: User needs dialogue patterns, not UE code