Claude-skill-registry · libllm

Install

Source · Clone the upstream repo:

```shell
git clone https://github.com/majiayu000/claude-skill-registry
```

Claude Code · Install into ~/.claude/skills/:

```shell
T=$(mktemp -d) && git clone --depth=1 https://github.com/majiayu000/claude-skill-registry "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/data/libllm" ~/.claude/skills/majiayu000-claude-skill-registry-libllm && rm -rf "$T"
```

Manifest: skills/data/libllm/SKILL.md
libllm Skill
When to Use
- Making chat completion requests to LLM providers
- Generating text embeddings for vector search
- Integrating with OpenAI-compatible APIs
- Handling streaming LLM responses
Key Concepts
LlmApi: HTTP client for OpenAI-compatible endpoints. Handles authentication, streaming, and response parsing.
DEFAULT_MAX_TOKENS: Standard token limit for completions.
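The skill does not show how DEFAULT_MAX_TOKENS is applied. A minimal sketch of the typical fallback pattern (the value 1024 and the helper name are illustrative assumptions, not libllm's actual code):

```javascript
// Hypothetical illustration of how a default token limit is usually applied.
// The value 1024 and this fallback logic are assumptions, not libllm's export.
const DEFAULT_MAX_TOKENS = 1024;

function resolveMaxTokens(options = {}) {
  // Fall back to the default when the caller does not set maxTokens.
  return options.maxTokens ?? DEFAULT_MAX_TOKENS;
}
```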
Usage Patterns
Pattern 1: Chat completion
```javascript
import { LlmApi } from "@copilot-ld/libllm";

const api = new LlmApi(config, logger);
const response = await api.completion(
  [{ role: "user", content: "Hello" }],
  { model: "gpt-4", maxTokens: 1000 },
);
```
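Completion calls go over the network and can fail transiently. A generic retry-with-backoff helper you could wrap around `api.completion` (this helper is a sketch, not part of @copilot-ld/libllm):

```javascript
// Generic retry with exponential backoff (a sketch, not a libllm API).
async function withRetry(fn, { retries = 3, baseDelayMs = 200 } = {}) {
  let lastError;
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (attempt === retries) break;
      // Backoff doubles each attempt: 200ms, 400ms, 800ms, ...
      await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** attempt));
    }
  }
  throw lastError;
}

// Usage, assuming `api` is a constructed LlmApi as above:
// const response = await withRetry(() =>
//   api.completion([{ role: "user", content: "Hello" }], { model: "gpt-4" }),
// );
```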
Pattern 2: Generate embeddings
```javascript
const embeddings = await api.embed(["text to embed"]);
// Returns an array of vectors, one per input string
```
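For the vector-search use case above, the returned vectors are typically compared with cosine similarity. A self-contained sketch in plain JavaScript (not a libllm API):

```javascript
// Cosine similarity between two embedding vectors of equal length.
// Plain JS helper for illustration; not part of @copilot-ld/libllm.
function cosineSimilarity(a, b) {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Identical directions score 1; orthogonal directions score 0.
```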
Integration
Used by the LLM service. Provider selection (OpenAI, Azure, GitHub Models) is configurable via environment variables.
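The exact environment variables are not documented in this skill. A hypothetical sketch of building a provider config from the environment (the variable names LLM_BASE_URL, LLM_API_KEY, and LLM_MODEL are illustrative assumptions, not libllm's documented configuration):

```javascript
// Hypothetical provider config derived from environment variables.
// LLM_BASE_URL, LLM_API_KEY, LLM_MODEL are illustrative names, not
// libllm's documented settings.
function configFromEnv(env = process.env) {
  return {
    baseUrl: env.LLM_BASE_URL ?? "https://api.openai.com/v1",
    apiKey: env.LLM_API_KEY,
    model: env.LLM_MODEL ?? "gpt-4",
  };
}
```

Pointing `baseUrl` at a different OpenAI-compatible endpoint is how such clients usually switch between OpenAI, Azure, and GitHub Models deployments.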