ai-setup / llm-provider
Adds a new LLM provider implementing the `LLMProvider` interface from `src/llm/types.ts` with `call()` and `stream()` methods. Integrates config in `src/llm/config.ts`, the factory in `src/llm/index.ts`, and error handling. Use when adding a new provider, "add LLM backend", or integrating an external API. Do NOT use for modifying existing providers or debugging provider issues.
Clone the full repo:

```bash
git clone https://github.com/caliber-ai-org/ai-setup
```

Or install just this skill into `~/.claude/skills`:

```bash
T=$(mktemp -d) && git clone --depth=1 https://github.com/caliber-ai-org/ai-setup "$T" && mkdir -p ~/.claude/skills && cp -r "$T/.agents/skills/llm-provider" ~/.claude/skills/caliber-ai-org-ai-setup-llm-provider && rm -rf "$T"
```
`.agents/skills/llm-provider/SKILL.md`

LLM Provider
Critical

- Interface contract: Provider MUST implement `LLMProvider` from `src/llm/types.ts`; both `call()` (non-streaming) and `stream()` methods are required
- No authentication in code: API keys/tokens come from env vars only (e.g., `process.env.PROVIDER_API_KEY`). Never hardcode secrets
- Validation before integration: Provider must pass a unit test in `src/llm/__tests__/` before wiring into config/factory
- Error handling: All errors must extend or match the patterns in `src/llm/types.ts` (e.g., `LLMError` or provider-specific exceptions)
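The authoritative definitions live in `src/llm/types.ts`. As a rough sketch of the shape this contract implies (the field names here are illustrative assumptions, not the repo's actual types):

```typescript
// Hypothetical sketch of the contract described above. The real
// definitions live in src/llm/types.ts and may differ in detail.
export interface LLMRequest {
  model?: string;
  messages: Array<{ role: 'system' | 'user' | 'assistant'; content: string }>;
}

// TConfig ties a provider to its config shape (defined in Step 2).
export interface LLMProvider<TConfig = unknown> {
  call(request: LLMRequest): Promise<string>; // non-streaming: resolves with the full text
  stream(request: LLMRequest): AsyncGenerator<string>; // streaming: yields text chunks
}
```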
Instructions
Step 1: Implement the provider in src/llm/
Create `src/llm/{provider-name}.ts` implementing `LLMProvider<TConfig>` from `src/llm/types.ts`.

Reference existing providers:

- `anthropic.ts`: streaming via `message_start`, `content_block_delta` events
- `openai-compat.ts`: standard OpenAI-compatible API
- `cursor-acp.ts`: headless agent mode with `--stream-partial-output`
Implement:

```typescript
export class MyProvider implements LLMProvider<MyProviderConfig> {
  private config: MyProviderConfig;

  constructor(config: MyProviderConfig) {
    this.config = config;
  }

  async call(request: LLMRequest): Promise<string> {
    // Non-streaming call: return the full text response
    throw new Error('TODO: implement');
  }

  async *stream(request: LLMRequest): AsyncGenerator<string> {
    // Streaming call: yield text chunks
    // Handle protocol-specific events (if any)
    throw new Error('TODO: implement');
  }
}
```
Verify: Type-check passes (`npx tsc --noEmit`) and the provider exports correctly.
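For orientation, here is a minimal sketch of what a filled-in provider could look like, assuming a simple JSON-over-HTTP protocol. The endpoint URL, env var name, and response shape are placeholders, not a real API:

```typescript
import type { LLMProvider, LLMRequest } from './types.js';
import type { MyProviderConfig } from './config.js';

export class MyProvider implements LLMProvider<MyProviderConfig> {
  constructor(private readonly config: MyProviderConfig) {}

  // Keys come from config or the environment, never from source (see Critical)
  private apiKey(): string {
    const key = this.config.apiKey ?? process.env.MY_PROVIDER_API_KEY;
    if (!key) throw new Error('MY_PROVIDER_API_KEY is not set');
    return key;
  }

  async call(request: LLMRequest): Promise<string> {
    const res = await fetch('https://api.example.com/v1/chat', { // placeholder URL
      method: 'POST',
      headers: {
        'content-type': 'application/json',
        authorization: `Bearer ${this.apiKey()}`,
      },
      body: JSON.stringify({ model: this.config.model, messages: request.messages }),
    });
    if (!res.ok) throw new Error(`MyProvider request failed: ${res.status}`);
    const data = (await res.json()) as { text: string }; // placeholder response shape
    return data.text;
  }

  async *stream(request: LLMRequest): AsyncGenerator<string> {
    // Simplest correct fallback: yield the whole response as one chunk.
    // A real provider would parse SSE or JSON Lines here (see Common Issues).
    yield await this.call(request);
  }
}
```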
Step 2: Add configuration in src/llm/config.ts
Register the provider type and config interface in `src/llm/config.ts` alongside the existing providers (anthropic, openai, cursor, claude-cli).
Add to the union type:

```typescript
export type LLMConfig = AnthropicConfig | OpenAIConfig | MyProviderConfig | ...;
```
Add the config interface:

```typescript
export interface MyProviderConfig {
  provider: 'my-provider';
  apiKey?: string; // optional if using env var
  model: string;
  // other settings
}
```
Verify: Config merges into the `LLMConfig` union without conflicts.
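Because every config interface carries a literal `provider` tag, `LLMConfig` is a discriminated union: switching on `config.provider` narrows the type automatically, which is what makes the factory in Step 3 type-check. A small sketch:

```typescript
// Sketch: the literal `provider` field makes LLMConfig a discriminated union,
// so switching on it narrows `config` to the matching interface.
function modelFor(config: LLMConfig): string | undefined {
  switch (config.provider) {
    case 'my-provider':
      return config.model; // config is MyProviderConfig in this branch
    default:
      return undefined; // other providers elided
  }
}
```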
Step 3: Wire into factory in src/llm/index.ts
Update the `createLLMClient()` function in `src/llm/index.ts` to instantiate the provider:

```typescript
case 'my-provider':
  return new MyProvider(config as MyProviderConfig);
```

Add the import at the top:

```typescript
import { MyProvider } from './my-provider.js';
```
Verify: Factory passes config correctly and type-checks.
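Assembled, the factory might look roughly like this; the existing cases are paraphrased from the providers named in Step 2, and the exact signatures in `src/llm/index.ts` may differ:

```typescript
import { MyProvider } from './my-provider.js';
import type { LLMConfig, MyProviderConfig } from './config.js';
import type { LLMProvider } from './types.js';

export function createLLMClient(config: LLMConfig): LLMProvider<LLMConfig> {
  switch (config.provider) {
    // ...existing cases: 'anthropic', 'openai', 'cursor', 'claude-cli'...
    case 'my-provider':
      // The switch already narrows config, so the cast is optional
      return new MyProvider(config as MyProviderConfig);
    default:
      throw new Error(`Unknown LLM provider: ${(config as { provider: string }).provider}`);
  }
}
```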
Step 4: Write a unit test in src/llm/__tests__/
Create `src/llm/__tests__/my-provider.test.ts` with:

- Mock API responses matching the provider's protocol
- `call()` test: verify request format and response parsing
- `stream()` test: verify chunk parsing and event handling
- Error-handling test (invalid API key, timeout, malformed response)
Use the existing test patterns from `anthropic.test.ts` or `openai-compat.test.ts`.
Verify: `npm run test -- src/llm/__tests__/my-provider.test.ts` passes.
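A minimal sketch of such a test, assuming Vitest and the hypothetical fetch-based provider sketched in Step 1 (adapt the mocking to the repo's actual test runner):

```typescript
import { afterEach, describe, expect, it, vi } from 'vitest';
import { MyProvider } from '../my-provider.js';

const provider = new MyProvider({ provider: 'my-provider', model: 'test-model', apiKey: 'test-key' });

afterEach(() => vi.unstubAllGlobals());

describe('MyProvider', () => {
  it('call() sends the request and parses the response', async () => {
    vi.stubGlobal('fetch', vi.fn(async () =>
      new Response(JSON.stringify({ text: 'hello' }), { status: 200 }),
    ));
    await expect(provider.call({ messages: [] })).resolves.toBe('hello');
  });

  it('call() surfaces HTTP errors', async () => {
    vi.stubGlobal('fetch', vi.fn(async () => new Response('nope', { status: 401 })));
    await expect(provider.call({ messages: [] })).rejects.toThrow('401');
  });

  it('stream() yields text chunks', async () => {
    vi.stubGlobal('fetch', vi.fn(async () =>
      new Response(JSON.stringify({ text: 'hello' }), { status: 200 }),
    ));
    const chunks: string[] = [];
    for await (const chunk of provider.stream({ messages: [] })) chunks.push(chunk);
    expect(chunks.join('')).toBe('hello');
  });
});
```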
Step 5: Test integration
Run the full LLM test suite: `npm run test -- src/llm/`
Test config loading: `npm run build && npx caliber config --check`
If the provider requires user setup (interactive prompts), add it to `src/commands/interactive-provider-setup.ts`, following the patterns for Anthropic/OpenAI.
Examples
User says: "Add support for Groq API"
Actions:
- Create `src/llm/groq.ts` → implement `LLMProvider` with OpenAI-compatible fetch calls
- Update `src/llm/config.ts` → add `GroqConfig` type with `model`, `apiKey` fields
- Update `src/llm/index.ts` → factory case for `'groq'`
- Write `src/llm/__tests__/groq.test.ts` → mock Groq API responses, test `call()` and `stream()`
- Run: `npm run test -- src/llm/__tests__/groq.test.ts` → passes
Result: The user can set `LLM_PROVIDER=groq` and `GROQ_API_KEY=...`, and caliber uses Groq for all LLM calls.
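As a sketch of why Groq maps onto `openai-compat.ts`: Groq exposes an OpenAI-compatible chat completions endpoint, so the core of `call()` reduces to one HTTP request. Error handling and streaming are trimmed, and the response typing is an assumption:

```typescript
// Sketch of the core Groq request. Groq's API is OpenAI-compatible, so
// response parsing mirrors openai-compat.ts. GROQ_API_KEY comes from the env.
type ChatMessage = { role: 'system' | 'user' | 'assistant'; content: string };

async function groqChat(model: string, messages: ChatMessage[]): Promise<string> {
  const res = await fetch('https://api.groq.com/openai/v1/chat/completions', {
    method: 'POST',
    headers: {
      'content-type': 'application/json',
      authorization: `Bearer ${process.env.GROQ_API_KEY}`,
    },
    body: JSON.stringify({ model, messages }),
  });
  if (!res.ok) throw new Error(`Groq request failed: ${res.status}`);
  const data = (await res.json()) as { choices: Array<{ message: { content: string } }> };
  return data.choices[0].message.content;
}
```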
Common Issues
"Cannot find module './my-provider'"
- Verify the file is named `src/llm/{provider-name}.ts` (kebab-case)
- Import uses the `.js` extension: `import { MyProvider } from './my-provider.js'`
- Run `npm run build` to check TypeScript
"MyProvider does not implement LLMProvider"
- Ensure both `call()` and `stream()` methods exist and match the signatures in `src/llm/types.ts`
- `call()` returns `Promise<string>`, `stream()` returns `AsyncGenerator<string>`
- Run `npx tsc --noEmit` for full type errors
"Provider not found in factory"
- Check that `createLLMClient()` in `src/llm/index.ts` has a `case 'my-provider'` matching the config type
- Verify the config union in `src/llm/config.ts` includes `MyProviderConfig`
"API key undefined at runtime"
- Provider reads from `process.env.PROVIDER_API_KEY` or `config.apiKey`
- User must set the env var or pass the key in config
- Check the `.env` file or shell: `echo $PROVIDER_API_KEY`
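A small resolver that prefers the config value and fails loudly usually makes this class of bug obvious; a sketch, with the env var name standing in for whatever your provider documents:

```typescript
// Prefer an explicit config value, fall back to the env var, fail loudly otherwise.
function resolveApiKey(configKey?: string): string {
  const key = configKey ?? process.env.PROVIDER_API_KEY;
  if (!key) {
    throw new Error('API key missing: set PROVIDER_API_KEY or pass apiKey in config');
  }
  return key;
}
```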
"Stream yields nothing / incomplete chunks"
- Verify `stream()` parses the protocol correctly (event vs. line delimiters)
- Test against the mock: `npm run test -- src/llm/__tests__/my-provider.test.ts`
- Compare the event structure to `anthropic.ts` (SSE) or `openai-compat.ts` (JSON Lines)
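The usual root cause is buffering: network chunks can split an event mid-line, so `stream()` must buffer until a full line arrives. A rough SSE-parsing sketch (the `delta.text` payload shape and the `[DONE]` sentinel are placeholders, not any specific provider's exact schema):

```typescript
// Sketch: SSE parsing for stream(). Buffering on newlines is what prevents
// "incomplete chunks" when an event is split across network reads.
async function* parseSSE(body: ReadableStream<Uint8Array>): AsyncGenerator<string> {
  const decoder = new TextDecoder();
  let buffer = '';
  // Node 18+ web streams are async-iterable; the cast quiets DOM-only typings.
  for await (const chunk of body as unknown as AsyncIterable<Uint8Array>) {
    buffer += decoder.decode(chunk, { stream: true });
    let newline: number;
    while ((newline = buffer.indexOf('\n')) !== -1) {
      const line = buffer.slice(0, newline).trim();
      buffer = buffer.slice(newline + 1);
      if (!line.startsWith('data:')) continue; // skip event:/comment/blank lines
      const payload = line.slice(5).trim();
      if (payload === '[DONE]') return; // placeholder end-of-stream sentinel
      const data = JSON.parse(payload) as { delta?: { text?: string } };
      if (data.delta?.text) yield data.delta.text;
    }
  }
}
```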