ai-model-web
Use this skill when developing browser/Web applications (React/Vue/Angular, static websites, SPAs) that need AI capabilities. Features text generation (generateText) and streaming (streamText) via @cloudbase/js-sdk. Built-in models include Hunyuan (hunyuan-2.0-instruct-20251111 recommended) and DeepSeek (deepseek-v3.2 recommended). NOT for Node.js backend (use ai-model-nodejs), WeChat Mini Program (use ai-model-wechat), or image generation (Node SDK only).
install

source · Clone the upstream repo

```sh
git clone https://github.com/openclaw/skills
```

Claude Code · Install into ~/.claude/skills/

```sh
T=$(mktemp -d) && git clone --depth=1 https://github.com/openclaw/skills "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/binggg/cloudbase/references/ai-model-web" ~/.claude/skills/openclaw-skills-ai-model-web && rm -rf "$T"
```

OpenClaw · Install into ~/.openclaw/skills/

```sh
T=$(mktemp -d) && git clone --depth=1 https://github.com/openclaw/skills "$T" && mkdir -p ~/.openclaw/skills && cp -r "$T/skills/binggg/cloudbase/references/ai-model-web" ~/.openclaw/skills/openclaw-skills-ai-model-web && rm -rf "$T"
```
manifest: skills/binggg/cloudbase/references/ai-model-web/SKILL.md
When to use this skill
Use this skill for calling AI models in browser/Web applications using @cloudbase/js-sdk.
Use it when you need to:
- Integrate AI text generation in a frontend Web app
- Stream AI responses for better user experience
- Call Hunyuan or DeepSeek models from browser
Do NOT use for:
- Node.js backend or cloud functions → use skill ai-model-nodejs
- WeChat Mini Program → use skill ai-model-wechat
- Image generation → use skill ai-model-nodejs (Node SDK only)
- HTTP API integration → use skill http-api
Available Providers and Models
CloudBase provides these built-in providers and models:
| Provider | Models | Recommended |
|---|---|---|
| Hunyuan | `hunyuan-2.0-instruct-20251111`, … | `hunyuan-2.0-instruct-20251111` ✅ |
| DeepSeek | `deepseek-v3.2`, … | `deepseek-v3.2` ✅ |
Installation

```sh
npm install @cloudbase/js-sdk
```
Initialization
```js
import cloudbase from "@cloudbase/js-sdk";

const app = cloudbase.init({
  env: "<YOUR_ENV_ID>",
  accessKey: "<YOUR_PUBLISHABLE_KEY>" // Get from CloudBase console
});

const auth = app.auth();
await auth.signInAnonymously();

const ai = app.ai();
```
Important notes:
- Always use synchronous initialization with a top-level import
- The user must be authenticated before using AI features
- Get `accessKey` from the CloudBase console
generateText() - Non-streaming
```js
const model = ai.createModel("hunyuan-exp");

const result = await model.generateText({
  model: "hunyuan-2.0-instruct-20251111", // Recommended model
  messages: [{ role: "user", content: "Hello, please introduce Li Bai" }],
});

console.log(result.text);         // Generated text string
console.log(result.usage);        // { prompt_tokens, completion_tokens, total_tokens }
console.log(result.messages);     // Full message history
console.log(result.rawResponses); // Raw model responses
```
streamText() - Streaming
```js
const model = ai.createModel("hunyuan-exp");

const res = await model.streamText({
  model: "hunyuan-2.0-instruct-20251111", // Recommended model
  messages: [{ role: "user", content: "Hello, please introduce Li Bai" }],
});

// Option 1: Iterate the text stream (recommended)
for await (let text of res.textStream) {
  console.log(text); // Incremental text chunks
}

// Option 2: Iterate the data stream for full response data
for await (let data of res.dataStream) {
  console.log(data); // Full response chunk with metadata
}

// Option 3: Get the final results
const messages = await res.messages; // Full message history
const usage = await res.usage;       // Token usage
```
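The `textStream` loop above is often wrapped in a small helper that forwards each chunk to the UI (a DOM append, a React state setter) and returns the accumulated text. A minimal sketch, written against any `AsyncIterable<string>` and exercised here with a mock stream rather than the real SDK:

```javascript
// Forward each chunk of an AsyncIterable<string> (such as res.textStream)
// to onChunk, and return the fully accumulated text.
async function collectTextStream(textStream, onChunk) {
  let full = "";
  for await (const chunk of textStream) {
    full += chunk;
    onChunk(chunk);
  }
  return full;
}

// Mock stream standing in for res.textStream (the real SDK is not used here)
async function* mockStream() {
  yield "Li Bai was ";
  yield "a Tang dynasty poet.";
}

async function main() {
  const chunks = [];
  const full = await collectTextStream(mockStream(), (c) => chunks.push(c));
  return { full, chunks };
}
```

In a component, `onChunk` would typically append to rendered state, so the user sees text appear incrementally instead of waiting for the full response.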
Type Definitions
```ts
interface BaseChatModelInput {
  model: string;                       // Required: model name
  messages: Array<ChatModelMessage>;   // Required: message array
  temperature?: number;                // Optional: sampling temperature
  topP?: number;                       // Optional: nucleus sampling
}

type ChatModelMessage =
  | { role: "user"; content: string }
  | { role: "system"; content: string }
  | { role: "assistant"; content: string };

interface GenerateTextResult {
  text: string;                        // Generated text
  messages: Array<ChatModelMessage>;   // Full message history
  usage: Usage;                        // Token usage
  rawResponses: Array<unknown>;        // Raw model responses
  error?: unknown;                     // Error if any
}

interface StreamTextResult {
  textStream: AsyncIterable<string>;     // Incremental text stream
  dataStream: AsyncIterable<DataChunk>;  // Full data stream
  messages: Promise<ChatModelMessage[]>; // Final message history
  usage: Promise<Usage>;                 // Final token usage
  error?: unknown;                       // Error if any
}

interface Usage {
  prompt_tokens: number;
  completion_tokens: number;
  total_tokens: number;
}
```
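For multi-turn chat, the `messages` array simply grows with alternating roles from the `ChatModelMessage` union above. A sketch of a tiny history helper (`createHistory` is hypothetical, not part of @cloudbase/js-sdk):

```javascript
// Build a ChatModelMessage[] history without hand-writing role objects.
// Hypothetical helper for illustration; not part of @cloudbase/js-sdk.
function createHistory(systemPrompt) {
  const messages = [{ role: "system", content: systemPrompt }];
  return {
    messages, // pass this directly as the `messages` input
    addUser(content) { messages.push({ role: "user", content }); },
    addAssistant(content) { messages.push({ role: "assistant", content }); },
  };
}

const history = createHistory("You are a helpful assistant.");
history.addUser("Hello, please introduce Li Bai");
history.addAssistant("Li Bai was a Tang dynasty poet...");
console.log(history.messages.length); // 3
```

After each model call, append `result.text` back with `addAssistant` so the next request carries the full conversation context.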
Best Practices
- Use streaming for long responses: better user experience
- Handle errors gracefully: wrap AI calls in try/catch
- Keep accessKey secure: use the publishable key, not the secret key
- Initialize early: initialize the SDK at the app entry point
- Ensure authentication: the user must be signed in before AI calls
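The error-handling advice above can be sketched as a small wrapper. `safeGenerate` and its fallback message are hypothetical, and `callModel` stands in for any zero-argument async call such as `() => model.generateText({...})`; it is exercised here with mocks, not the real SDK:

```javascript
// Return either the generated text or a friendly fallback, so the UI
// never sees an unhandled rejection from an AI call.
async function safeGenerate(callModel, fallback = "Sorry, something went wrong.") {
  try {
    const result = await callModel();
    if (result.error) throw result.error; // results may also carry an error field
    return result.text;
  } catch (err) {
    console.error("AI call failed:", err);
    return fallback;
  }
}

// Exercised with mock calls (the real SDK is not used here):
async function main() {
  const ok = await safeGenerate(async () => ({ text: "hello" }));
  const failed = await safeGenerate(async () => { throw new Error("network"); });
  return { ok, failed };
}
```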