PinMe pinme-api
Use this skill when a PinMe project (Worker TypeScript) needs to integrate email sending (send_email) or LLM API calls (chat/completions). Guides AI to generate correct Worker TS code.
Install
Source · Clone the upstream repo:
git clone https://github.com/glitternetwork/pinme
Claude Code · Install into ~/.claude/skills/:
T=$(mktemp -d) && git clone --depth=1 https://github.com/glitternetwork/pinme "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/pinme-api" ~/.claude/skills/glitternetwork-pinme-pinme-api && rm -rf "$T"
Manifest: skills/pinme-api/SKILL.md
PinMe Worker API Integration
Guides how to call PinMe platform's email sending and LLM APIs in a PinMe Worker (TypeScript).
Environment Variables
The following environment variables are automatically injected when the Worker is created — no manual configuration needed:
```typescript
// backend/src/worker.ts
export interface Env {
  DB: D1Database;
  API_KEY: string;    // Project API Key — used for send_email and chat/completions authentication
  BASE_URL?: string;  // Optional override for PinMe API base URL, defaults to https://pinme.cloud
}
```
API_KEY is the sole credential the Worker uses to call PinMe platform APIs. When BASE_URL is not set, it defaults to https://pinme.cloud.
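A Worker can guard these variables at startup so a misconfigured deployment fails fast instead of producing opaque 401s later. A minimal sketch, assuming nothing beyond the interface above (`EnvLike` and `pinmeBase` are illustrative names, not part of the PinMe API; `D1Database` is omitted for brevity):

```typescript
// Minimal shape of the injected variables (DB omitted for brevity).
interface EnvLike { API_KEY: string; BASE_URL?: string }

// Resolve the PinMe API base URL, failing fast if the credential is missing.
function pinmeBase(env: EnvLike): string {
  if (!env.API_KEY) throw new Error('API_KEY was not injected into this Worker');
  return env.BASE_URL ?? 'https://pinme.cloud';
}
```

Calling this once at the top of a request handler surfaces configuration problems immediately, with a clear message, rather than at the first failed `fetch`.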
API 1: Send Email
Endpoint:
POST {BASE_URL}/api/v4/send_email
Authentication: X-API-Key header (using env.API_KEY)
Sender: Automatically set to {project_name}@pinme.cloud
Request Format
```json
{
  "to": "user@example.com",
  "subject": "Your verification code",
  "html": "<p>Your code is <strong>123456</strong></p>"
}
```
| Field | Type | Required | Description |
|---|---|---|---|
| to | string | Yes | Recipient email address |
| subject | string | Yes | Email subject |
| html | string | Yes | HTML body |
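Since all three fields are required, a route may want to validate the payload before calling the API and return a 400 itself. A hedged sketch (`validateEmailPayload` is an illustrative helper; the regex is a loose sanity check, not full RFC 5322 validation):

```typescript
// Returns null when the payload is valid, or a field-specific error message.
function validateEmailPayload(
  p: Partial<{ to: string; subject: string; html: string }>,
): string | null {
  if (!p.to || !/^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(p.to)) {
    return 'to: a valid email address is required';
  }
  if (!p.subject) return 'subject: required';
  if (!p.html) return 'html: required';
  return null;
}
```

Rejecting bad input locally saves a round trip and keeps the platform's 400 responses for cases the Worker could not have predicted.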
Response Format
Success (200):
```json
{ "code": 200, "msg": "ok", "data": { "ok": true } }
```
Errors:
| HTTP Status | Meaning | data.error Example |
|---|---|---|
| 401 | API Key missing or invalid | / |
| 400 | Parameter validation failed | / |
| 500 | Email service error | / |
Worker Example Code
```typescript
async function sendEmail(env: Env, to: string, subject: string, html: string): Promise<{ ok: boolean; error?: string }> {
  const baseUrl = env.BASE_URL ?? 'https://pinme.cloud';
  const resp = await fetch(`${baseUrl}/api/v4/send_email`, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'X-API-Key': env.API_KEY,
    },
    body: JSON.stringify({ to, subject, html }),
  });
  const result = await resp.json() as { code: number; msg: string; data?: { ok?: boolean; error?: string } };
  if (resp.status !== 200 || result.code !== 200) {
    return { ok: false, error: result.data?.error || result.msg || 'Unknown error' };
  }
  return { ok: true };
}

// Usage in routes (assumes a `json(body, status?)` response helper defined elsewhere in the Worker)
async function handleSendVerification(request: Request, env: Env): Promise<Response> {
  const { email } = await request.json() as { email: string };
  const code = Math.floor(100000 + Math.random() * 900000).toString(); // always 6 digits
  const result = await sendEmail(env, email, 'Verification Code', `<p>Your code is <strong>${code}</strong></p>`);
  if (!result.ok) {
    return json({ error: result.error }, 500);
  }
  return json({ ok: true });
}
```
API 2: LLM Chat Completions
Endpoint:
POST {BASE_URL}/api/v1/chat/completions?project_name={project_name}
Authentication: X-API-Key header (using env.API_KEY)
Request Body: OpenAI-compatible format, passed through to LLM service as-is
Streaming: Supports SSE (stream: true)
Request Format
```json
{
  "model": "openai/gpt-4o-mini",
  "messages": [
    { "role": "system", "content": "You are a helpful assistant." },
    { "role": "user", "content": "Hello!" }
  ],
  "stream": true
}
```
project_name is parsed from the Worker's subdomain — see the example below. For available models, refer to PinMe LLM Supported Models (OpenAI-compatible format).
Response Format
Non-streaming Success (200):
```json
{
  "id": "chatcmpl-...",
  "choices": [{ "message": { "role": "assistant", "content": "Hello!" }, "finish_reason": "stop" }],
  "usage": { "prompt_tokens": 10, "completion_tokens": 5, "total_tokens": 15 }
}
```
Streaming Success (200): SSE format
```
data: {"choices":[{"delta":{"content":"Hello"}}]}

data: {"choices":[{"delta":{"content":" there"}}]}

data: [DONE]
```
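Each SSE data line reduces to its delta text with a small pure helper. A sketch (`parseSSELine` is an illustrative name; it returns null for non-data lines and the terminal [DONE] sentinel):

```typescript
// Extract the delta text from one SSE line; null for non-data lines and [DONE].
function parseSSELine(line: string): string | null {
  if (!line.startsWith('data: ')) return null;
  const payload = line.slice(6);
  if (payload === '[DONE]') return null;
  const chunk = JSON.parse(payload) as { choices?: Array<{ delta?: { content?: string } }> };
  return chunk.choices?.[0]?.delta?.content ?? null;
}
```

Keeping the parsing pure makes it trivial to unit-test, independently of the fetch/stream plumbing shown in the consumer example further below.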
Errors:
| HTTP Status | Meaning | data.error Example |
|---|---|---|
| 401 | API Key missing or invalid | / |
| 400 | project_name missing or LLM not configured | / |
| 413 | Request body exceeds 1MB | / |
| 502 | LLM service unavailable | / |
Worker Example Code — Non-streaming
```typescript
// Get project_name: parsed from the Worker's subdomain
function getProjectName(request: Request): string {
  const host = new URL(request.url).hostname; // e.g. "my-app-1a2b.pinme.pro"
  return host.split('.')[0];
}

async function callLLM(
  env: Env,
  projectName: string,
  messages: Array<{ role: string; content: string }>,
  model = 'openai/gpt-4o-mini',
): Promise<{ content: string; error?: string }> {
  const baseUrl = env.BASE_URL ?? 'https://pinme.cloud';
  const resp = await fetch(
    `${baseUrl}/api/v1/chat/completions?project_name=${projectName}`,
    {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        'X-API-Key': env.API_KEY,
      },
      body: JSON.stringify({ model, messages }),
    },
  );
  if (!resp.ok) {
    const err = await resp.json() as { data?: { error?: string } };
    return { content: '', error: err.data?.error || `HTTP ${resp.status}` };
  }
  const data = await resp.json() as { choices: Array<{ message: { content: string } }> };
  return { content: data.choices[0]?.message?.content || '' };
}

// Usage in routes (assumes a `json(body, status?)` response helper defined elsewhere in the Worker)
async function handleChat(request: Request, env: Env): Promise<Response> {
  const { question } = await request.json() as { question: string };
  const projectName = getProjectName(request);
  const result = await callLLM(env, projectName, [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: question },
  ]);
  if (result.error) {
    return json({ error: result.error }, 502);
  }
  return json({ answer: result.content });
}
```
Worker Example Code — Streaming (SSE Passthrough)
```typescript
async function handleChatStream(request: Request, env: Env): Promise<Response> {
  const body = await request.text();
  const projectName = getProjectName(request);
  const baseUrl = env.BASE_URL ?? 'https://pinme.cloud';

  // Ensure stream=true in the request
  const parsed = JSON.parse(body);
  parsed.stream = true;

  const resp = await fetch(
    `${baseUrl}/api/v1/chat/completions?project_name=${projectName}`,
    {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        'X-API-Key': env.API_KEY,
      },
      body: JSON.stringify(parsed),
    },
  );
  if (!resp.ok) {
    const err = await resp.json() as { data?: { error?: string } };
    return json({ error: err.data?.error || `HTTP ${resp.status}` }, resp.status);
  }

  // Pass through the SSE stream directly
  // (CORS_HEADERS and json() are assumed to be defined elsewhere in the Worker)
  return new Response(resp.body, {
    status: 200,
    headers: {
      'Content-Type': 'text/event-stream',
      'Cache-Control': 'no-cache',
      'Connection': 'keep-alive',
      ...CORS_HEADERS,
    },
  });
}
```
Frontend SSE Stream Consumer Example
```typescript
// getApiUrl() is assumed to resolve the Worker's API origin for this frontend
async function streamChat(question: string, onChunk: (text: string) => void): Promise<void> {
  const resp = await fetch(getApiUrl('/api/chat/stream'), {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ question }),
  });
  const reader = resp.body!.getReader();
  const decoder = new TextDecoder();
  let buffer = '';
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });
    const lines = buffer.split('\n');
    buffer = lines.pop()!; // Keep incomplete line for the next chunk
    for (const line of lines) {
      if (!line.startsWith('data: ')) continue;
      const payload = line.slice(6);
      if (payload === '[DONE]') return;
      const chunk = JSON.parse(payload) as { choices: Array<{ delta: { content?: string } }> };
      const content = chunk.choices[0]?.delta?.content;
      if (content) onChunk(content);
    }
  }
}
```
Error Handling Patterns
PinMe platform API unified response format:
```typescript
interface PinmeResponse<T = unknown> {
  code: number; // 200 = success, other = failure
  msg: string;  // "ok" | "error" | "invalid params"
  data?: T;     // Business data on success; may contain { error: string } on failure
}
```
Recommended Unified Error Handler
```typescript
async function callPinmeAPI<T>(url: string, apiKey: string, body: unknown): Promise<{ data?: T; error?: string }> {
  let resp: Response;
  try {
    resp = await fetch(url, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json', 'X-API-Key': apiKey },
      body: JSON.stringify(body),
    });
  } catch {
    return { error: 'Network error' };
  }
  if (!resp.ok) {
    try {
      const err = await resp.json() as PinmeResponse;
      return {
        error:
          err.data && typeof err.data === 'object' && 'error' in err.data
            ? (err.data as { error: string }).error
            : err.msg || `HTTP ${resp.status}`,
      };
    } catch {
      return { error: `HTTP ${resp.status}` };
    }
  }
  const result = await resp.json() as PinmeResponse<T>;
  if (result.code !== 200) {
    return {
      error:
        result.data && typeof result.data === 'object' && 'error' in result.data
          ? (result.data as { error: string }).error
          : result.msg,
    };
  }
  return { data: result.data as T };
}
```
Usage Examples
```typescript
const baseUrl = env.BASE_URL ?? 'https://pinme.cloud';

// Send email
const emailResult = await callPinmeAPI<{ ok: boolean }>(
  `${baseUrl}/api/v4/send_email`,
  env.API_KEY,
  { to: 'user@example.com', subject: 'Hello', html: '<p>Hi</p>' },
);
if (emailResult.error) return json({ error: emailResult.error }, 500);

// Call LLM (non-streaming)
const llmResult = await callPinmeAPI<{ choices: Array<{ message: { content: string } }> }>(
  `${baseUrl}/api/v1/chat/completions?project_name=${projectName}`,
  env.API_KEY,
  { model: 'openai/gpt-4o-mini', messages: [{ role: 'user', content: 'Hi' }] },
);
if (llmResult.error) return json({ error: llmResult.error }, 502);
```