ai-setup · llm-provider
Adds a new LLM provider implementing the LLMProvider interface from src/llm/types.ts with call() and stream() methods, and wires it into the config in src/llm/config.ts and the factory in src/llm/index.ts. Use when adding a new AI backend, integrating a new model API, or extending provider support. Do NOT use for modifying existing providers or debugging provider issues.
install
source · Clone the upstream repo
git clone https://github.com/caliber-ai-org/ai-setup
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/caliber-ai-org/ai-setup "$T" && mkdir -p ~/.claude/skills && cp -r "$T/.cursor/skills/llm-provider" ~/.claude/skills/caliber-ai-org-ai-setup-llm-provider-6e0812 && rm -rf "$T"
manifest:
`.cursor/skills/llm-provider/SKILL.md`
LLM Provider
Critical
- All providers must implement the `LLMProvider` interface from `src/llm/types.ts`, with `call(messages, params)` and `stream(messages, params)` returning `AsyncIterable<StreamChunk>` (a types sketch follows this list)
- No partial implementations: both `call()` and `stream()` must work. Streaming is not optional.
- StreamChunk format: `{ type: 'text' | 'error' | 'usage'; value: string; usage?: { input: number; output: number } }`
- Error handling: catch provider-specific errors and map them to `ChatError` from `src/llm/types.ts` with a `code` (e.g., `'auth'`, `'rate_limit'`, `'network'`) and a `message`
- Model validation: call `validateModel(modelId)` in config before accepting the model. Refer to `src/llm/config.ts` for the pattern.
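The exact declarations live in `src/llm/types.ts`; the sketch below only restates the contract above in TypeScript. Anything not listed there, such as the `ChatMessage` role union and the `ChatParams` fields, is an assumption, so check the real file.

```ts
// Approximate shape of src/llm/types.ts, inferred from the constraints above.
// ChatMessage roles and ChatParams fields are assumptions; verify against the real file.
export interface ChatMessage {
  role: 'system' | 'user' | 'assistant';
  content: string;
}

export interface ChatParams {
  temperature?: number;
  max_tokens?: number;
}

export interface StreamChunk {
  type: 'text' | 'error' | 'usage';
  value: string;
  usage?: { input: number; output: number };
}

export class ChatError extends Error {
  constructor(
    public code: 'auth' | 'rate_limit' | 'network',
    message: string,
  ) {
    super(message);
  }
}

export interface LLMProvider {
  call(messages: ChatMessage[], params: ChatParams): Promise<string>;
  stream(messages: ChatMessage[], params: ChatParams): AsyncIterable<StreamChunk>;
}
```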
Instructions
- Define the provider file at `src/llm/{provider-name}.ts`
  - Import `{ LLMProvider, ChatMessage, ChatParams, StreamChunk, ChatError }` from `src/llm/types.ts`
  - Export `class {ProviderName}Provider implements LLMProvider`
  - Constructor takes `{ apiKey?: string; model: string; baseUrl?: string }` matching the config structure
  - Store config on the instance: `this.apiKey = apiKey || process.env.{PROVIDER_API_KEY}`
  - Verify the API key exists on the first call; throw `ChatError` with `code: 'auth'` if missing (see the skeleton sketch below)
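A skeleton matching this step, using a hypothetical `AcmeProvider` with a hypothetical `ACME_API_KEY` env var and base URL; the method bodies are filled in by the next two steps.

```ts
// src/llm/acme.ts: hypothetical provider skeleton for this step.
import {
  LLMProvider,
  ChatMessage,
  ChatParams,
  StreamChunk,
  ChatError,
} from './types.js';

export class AcmeProvider implements LLMProvider {
  private apiKey?: string;
  private model: string;
  private baseUrl: string;

  constructor({ apiKey, model, baseUrl }: { apiKey?: string; model: string; baseUrl?: string }) {
    this.apiKey = apiKey || process.env.ACME_API_KEY;
    this.model = model;
    this.baseUrl = baseUrl || 'https://api.acme.example/v1'; // hypothetical default endpoint
  }

  // Fail fast with code 'auth' instead of letting the API return an opaque 401.
  protected requireKey(): string {
    if (!this.apiKey) {
      throw new ChatError('auth', 'API key missing: check ACME_API_KEY');
    }
    return this.apiKey;
  }

  async call(messages: ChatMessage[], params: ChatParams): Promise<string> {
    this.requireKey();
    throw new ChatError('network', 'not implemented yet: see the call() sketch in the next step');
  }

  async *stream(messages: ChatMessage[], params: ChatParams): AsyncIterable<StreamChunk> {
    this.requireKey();
    yield { type: 'error', value: 'not implemented yet: see the stream() sketch below' };
  }
}
```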
- Implement the `call()` method (see the sketch after this step)
  - Signature: `async call(messages: ChatMessage[], params: ChatParams): Promise<string>`
  - Make an HTTP request to the provider API with the messages and params (temperature, max_tokens, etc.)
  - Extract the text from the response and return it as a single string
  - On error (network, auth, rate limit), throw `ChatError` with an appropriate `code` and `message`
  - Verify this works before proceeding
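A sketch of `call()` for the hypothetical `AcmeProvider` above. It assumes an OpenAI-style `/chat/completions` request and response shape; swap the path, body, and parsing for the real provider's API.

```ts
// Replaces the call() stub in the AcmeProvider skeleton above.
async call(messages: ChatMessage[], params: ChatParams): Promise<string> {
  const apiKey = this.requireKey();

  let res: Response;
  try {
    res = await fetch(`${this.baseUrl}/chat/completions`, {
      method: 'POST',
      headers: {
        Authorization: `Bearer ${apiKey}`,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({
        model: this.model,
        messages,
        temperature: params.temperature,
        max_tokens: params.max_tokens,
      }),
    });
  } catch (err) {
    // DNS/socket failures never produce a Response, so map them to 'network'.
    throw new ChatError('network', `Request failed: ${(err as Error).message}`);
  }

  if (res.status === 401) throw new ChatError('auth', 'Invalid API key');
  if (res.status === 429) throw new ChatError('rate_limit', 'Rate limit exceeded');
  if (!res.ok) throw new ChatError('network', `HTTP ${res.status}: ${await res.text()}`);

  // Assumes an OpenAI-compatible response body; adjust the parsing per provider.
  const data = await res.json();
  return data.choices?.[0]?.message?.content ?? '';
}
```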
- Implement the `stream()` method (see the sketch after this step)
  - Signature: `async *stream(messages: ChatMessage[], params: ChatParams): AsyncIterable<StreamChunk>`
  - Use the provider's streaming endpoint (e.g., SSE, WebSocket, chunked response)
  - Yield `{ type: 'text', value: '<chunk>' }` for each text delta
  - Yield `{ type: 'usage', value: '', usage: { input, output } }` at the end if available
  - On error, yield `{ type: 'error', value: '<error message>' }` and return
  - Test streaming with `for await (const chunk of stream(...)) { console.log(chunk) }`
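A sketch of `stream()` for the same hypothetical provider. It assumes OpenAI-style SSE framing (`data: {...}` lines ending with `data: [DONE]`) and Node 18+, where the fetch response body is async-iterable; other wire formats need different parsing.

```ts
// Replaces the stream() stub in the AcmeProvider skeleton above.
async *stream(messages: ChatMessage[], params: ChatParams): AsyncIterable<StreamChunk> {
  const apiKey = this.requireKey();
  try {
    const res = await fetch(`${this.baseUrl}/chat/completions`, {
      method: 'POST',
      headers: {
        Authorization: `Bearer ${apiKey}`,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({ model: this.model, messages, ...params, stream: true }),
    });
    if (!res.ok || !res.body) {
      yield { type: 'error', value: `HTTP ${res.status}` };
      return;
    }

    const decoder = new TextDecoder();
    let buffer = '';
    // Node 18+ exposes the body as an async-iterable stream of Uint8Array chunks.
    for await (const part of res.body as unknown as AsyncIterable<Uint8Array>) {
      buffer += decoder.decode(part, { stream: true });
      const lines = buffer.split('\n');
      buffer = lines.pop() ?? ''; // keep any partial line for the next chunk
      for (const line of lines) {
        if (!line.startsWith('data:')) continue;
        const payload = line.slice(5).trim();
        if (!payload || payload === '[DONE]') continue;
        const event = JSON.parse(payload);
        const delta = event.choices?.[0]?.delta?.content;
        if (delta) yield { type: 'text', value: delta };
        if (event.usage) {
          yield {
            type: 'usage',
            value: '',
            usage: { input: event.usage.prompt_tokens, output: event.usage.completion_tokens },
          };
        }
      }
    }
  } catch (err) {
    // Per the contract above: on error, yield an error chunk and return.
    yield { type: 'error', value: (err as Error).message };
  }
}
```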
- Add config in `src/llm/config.ts` (see the sketch after this step)
  - Import `{ProviderName}Provider` and use it in the `getProvider(config, model)` function
  - Add the condition: `if (config.provider === '{provider-slug}') return new {ProviderName}Provider({ apiKey: config.apiKey, model, baseUrl: config.baseUrl })`
  - Add a case in `validateModel()`: check against the provider's official model list or hardcode the supported models
  - Export the provider slug in the `SUPPORTED_PROVIDERS` array if it exists
  - Verify the config function accepts and routes your provider
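A sketch of the config-side additions for the hypothetical 'acme' slug. The existing contents of `src/llm/config.ts` (other slugs, the exact `validateModel` and `getProvider` signatures) are assumptions here; follow the shapes in the real file.

```ts
// src/llm/config.ts: additions for the hypothetical 'acme' provider.
import { AcmeProvider } from './acme.js';

// Add the new slug alongside the existing ones (the existing entries shown are illustrative).
export const SUPPORTED_PROVIDERS = ['openai', 'anthropic', 'acme'];

// Hardcoded list; for providers with a dynamic catalog, fetch and cache their models endpoint.
const ACME_MODELS = ['acme-large', 'acme-small'];

export function validateModel(modelId: string): boolean {
  if (ACME_MODELS.includes(modelId)) return true;
  // ...existing checks for other providers' models...
  return false;
}

export function getProvider(
  config: { provider: string; apiKey?: string; baseUrl?: string },
  model: string,
) {
  if (config.provider === 'acme') {
    return new AcmeProvider({ apiKey: config.apiKey, model, baseUrl: config.baseUrl });
  }
  // ...existing provider conditions...
  throw new Error(`Unknown provider: ${config.provider}`);
}
```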
- Register in the factory at `src/llm/index.ts`
  - Import the provider and route it in the `getProvider()` function
  - Add it to the conditional chain matching the provider name from config
  - Run `npm run build && npm run test` to verify the factory picks up the provider (a quick end-to-end check is sketched below)
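Beyond the build and test run, a quick manual smoke check confirms the factory routes the new slug end to end. This assumes the factory exposes the same `getProvider(config, model)` shape as the config module; the script path, slug, and model are hypothetical.

```ts
// scripts/smoke-acme.ts: hypothetical one-off check that the factory wires 'acme' correctly.
import { getProvider } from '../src/llm/index.js';

const provider = getProvider(
  { provider: 'acme', apiKey: process.env.ACME_API_KEY },
  'acme-large',
);

// A real completion proves auth, routing, and response parsing all work together.
const reply = await provider.call(
  [{ role: 'user', content: 'Say hello in one word.' }],
  { temperature: 0 },
);
console.log('acme reply:', reply);
```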
- Add tests in `src/llm/__tests__/{provider-name}.test.ts` (see the sketch after this step)
  - Mock API responses using `vitest.mock()` or a `fetch` stub
  - Test `call()`: verify message formatting, response parsing, error handling
  - Test `stream()`: verify chunk parsing, usage reporting, error yields
  - Test config validation: invalid model, missing API key
  - Run `npx vitest run src/llm/__tests__/{provider-name}.test.ts`
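A vitest sketch for the hypothetical `AcmeProvider`, stubbing the global `fetch`; the mocked payload assumes the OpenAI-style shape used in the earlier sketches.

```ts
// src/llm/__tests__/acme.test.ts: sketch of tests for the hypothetical AcmeProvider.
import { describe, it, expect, vi, afterEach } from 'vitest';
import { AcmeProvider } from '../acme.js';

afterEach(() => vi.unstubAllGlobals());

const provider = () => new AcmeProvider({ apiKey: 'test-key', model: 'acme-large' });

describe('AcmeProvider.call', () => {
  it('returns the assistant text from a successful response', async () => {
    vi.stubGlobal('fetch', vi.fn().mockResolvedValue(
      new Response(
        JSON.stringify({ choices: [{ message: { content: 'hi there' } }] }),
        { status: 200 },
      ),
    ));
    await expect(
      provider().call([{ role: 'user', content: 'hi' }], {}),
    ).resolves.toBe('hi there');
  });

  it('maps HTTP 429 to a rate_limit ChatError', async () => {
    vi.stubGlobal('fetch', vi.fn().mockResolvedValue(new Response('', { status: 429 })));
    await expect(
      provider().call([{ role: 'user', content: 'hi' }], {}),
    ).rejects.toMatchObject({ code: 'rate_limit' });
  });

  it('throws an auth ChatError when no API key is configured', async () => {
    delete process.env.ACME_API_KEY; // make sure the env fallback is empty too
    const p = new AcmeProvider({ model: 'acme-large' });
    await expect(
      p.call([{ role: 'user', content: 'hi' }], {}),
    ).rejects.toMatchObject({ code: 'auth' });
  });
});
```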
Examples
User says: "Add support for Groq as a new LLM provider."
Actions:
- Create `src/llm/groq.ts` with class `GroqProvider`
- Implement `call()` against `https://api.groq.com/openai/v1/chat/completions` using the OpenAI-compatible format
- Implement `stream()` using the same endpoint with `stream: true`
- In `src/llm/config.ts`, add `if (config.provider === 'groq') return new GroqProvider(...)`
- In `src/llm/index.ts`, import and route the `groq` provider in `getProvider()`
- Create tests mocking Groq API responses
- Verify: `npm run test -- src/llm/__tests__/groq.test.ts` passes
Result: Caliber can now use Groq models via `{ provider: 'groq', apiKey: '...', model: 'mixtral-8x7b-32768' }` (a usage sketch follows)
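Once wired, a caller can stream from Groq through the shared factory. The snippet assumes the `getProvider(config, model)` shape sketched earlier and a `GROQ_API_KEY` env var; the model id comes from the example above.

```ts
// Consuming the newly wired Groq provider through the factory (shapes assumed as above).
import { getProvider } from './src/llm/index.js';

const groq = getProvider(
  { provider: 'groq', apiKey: process.env.GROQ_API_KEY },
  'mixtral-8x7b-32768',
);

// Stream the reply token by token; stop on the first error chunk.
for await (const chunk of groq.stream(
  [{ role: 'user', content: 'Explain LPUs in one sentence.' }],
  { temperature: 0.3 },
)) {
  if (chunk.type === 'text') process.stdout.write(chunk.value);
  if (chunk.type === 'error') {
    console.error('\nstream error:', chunk.value);
    break;
  }
  if (chunk.type === 'usage' && chunk.usage) {
    console.log(`\ntokens: ${chunk.usage.input} in / ${chunk.usage.output} out`);
  }
}
```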
Common Issues
- "Provider not recognized" in factory: Verify provider slug matches exactly in
condition AND in config file. Check spelling and case sensitivity.getProvider() - "TypeError: stream is not async iterable": Ensure
is a generator function (usesstream()
andasync *
). Test withyield
loop before deploying.for await - "API key is undefined": Verify environment variable name in provider constructor matches what user sets. Log
value in error message:apiKeythrow new ChatError('auth', 'API key missing: check {ENV_VAR_NAME}') - "Stream stops early or yields garbage": Check provider's response format (JSON lines, SSE, etc.). Log raw response chunk:
to debug parsing.console.error('Raw chunk:', chunk) - "Model validation fails but model is valid": Ensure
in config covers all supported models for this provider. If list is dynamic, call provider's models endpoint and cache.validateModel() - Type errors on ChatError: Verify import is
(withfrom 'src/llm/types.js'
extension for ESM)..js - Tests fail with "fetch is not defined": Add
or mock globally inimport { fetch } from 'node-fetch'
.src/test/setup.ts
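For the global-mock route, a minimal `src/test/setup.ts` sketch (registered via `test.setupFiles` in the vitest config) might look like this; the default payload is a placeholder that individual tests override.

```ts
// src/test/setup.ts: stub the global fetch so provider tests never hit the network.
// Individual tests override this with their own vi.stubGlobal('fetch', ...) payloads.
import { vi, beforeEach, afterEach } from 'vitest';

beforeEach(() => {
  vi.stubGlobal(
    'fetch',
    vi.fn().mockResolvedValue({
      ok: true,
      status: 200,
      // Minimal Response-like object; tests that need real bodies replace this stub.
      json: async () => ({ choices: [] }),
      text: async () => '',
    }),
  );
});

afterEach(() => {
  vi.unstubAllGlobals();
});
```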