# Skills: ai-model-wechat
Use this skill when developing WeChat Mini Programs (小程序, 企业微信小程序, wx.cloud-based apps) that need AI capabilities. Features text generation (generateText) and streaming (streamText) with callback support (onText, onEvent, onFinish) via wx.cloud.extend.AI. Built-in models include Hunyuan (hunyuan-2.0-instruct-20251111 recommended) and DeepSeek (deepseek-v3.2 recommended). API differs from JS/Node SDK - streamText requires data wrapper, generateText returns raw response. NOT for browser/Web apps (use ai-model-web), Node.js backend (use ai-model-nodejs), or image generation (not supported).
## Installation

```bash
# Clone the skills repository
git clone https://github.com/openclaw/skills

# Install into ~/.claude/skills
T=$(mktemp -d) && git clone --depth=1 https://github.com/openclaw/skills "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/binggg/cloudbase/references/ai-model-wechat" ~/.claude/skills/openclaw-skills-ai-model-wechat && rm -rf "$T"

# Install into ~/.openclaw/skills
T=$(mktemp -d) && git clone --depth=1 https://github.com/openclaw/skills "$T" && mkdir -p ~/.openclaw/skills && cp -r "$T/skills/binggg/cloudbase/references/ai-model-wechat" ~/.openclaw/skills/openclaw-skills-ai-model-wechat && rm -rf "$T"
```
Source: `skills/binggg/cloudbase/references/ai-model-wechat/SKILL.md`

## When to use this skill
Use this skill for calling AI models in a WeChat Mini Program via `wx.cloud.extend.AI`.
Use it when you need to:
- Integrate AI text generation in a Mini Program
- Stream AI responses with callback support
- Call Hunyuan models from WeChat environment
Do NOT use for:
- Browser/Web apps → use skill `ai-model-web`
- Node.js backend or cloud functions → use skill `ai-model-nodejs`
- Image generation → not available in Mini Program; use skill `ai-model-nodejs`
- HTTP API integration → use skill `http-api`
## Available Providers and Models
CloudBase provides these built-in providers and models:
| Provider | Models | Recommended |
|---|---|---|
| `hunyuan-exp` (Hunyuan) | `hunyuan-2.0-instruct-20251111` (recommended), plus other Hunyuan models | ✅ |
| DeepSeek | `deepseek-v3.2` (recommended), plus other DeepSeek models | ✅ |
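A small sketch of selecting a provider/model pair at call time. The `"hunyuan-exp"` provider id and both model names come from this skill; the `"deepseek"` provider id is an assumption — verify the exact id in your CloudBase console.

```js
// Hypothetical presets mapping a short name to a provider id + model.
// "deepseek" as a provider id is an assumption; check your console.
const PRESETS = {
  hunyuan: { provider: "hunyuan-exp", model: "hunyuan-2.0-instruct-20251111" },
  deepseek: { provider: "deepseek", model: "deepseek-v3.2" },
};

// Ask a one-shot question using the chosen preset (Mini Program only:
// wx.cloud.extend.AI does not exist outside the WeChat environment).
async function ask(presetName, prompt) {
  const { provider, model } = PRESETS[presetName];
  const m = wx.cloud.extend.AI.createModel(provider);
  const res = await m.generateText({
    model,
    messages: [{ role: "user", content: prompt }],
  });
  return res.choices[0].message.content; // raw response, see below
}
```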
## Prerequisites
- WeChat base library 3.7.1+
- No extra SDK installation needed
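The base-library requirement can be checked at runtime. This sketch reads `wx.getSystemInfoSync().SDKVersion` and compares it against `3.7.1`; the comparison helper follows the pattern from WeChat's documentation, but treat the exact gating behavior as an assumption for your app.

```js
// Compare two dotted version strings: 1 if v1 > v2, -1 if v1 < v2, 0 if equal.
function compareVersion(v1, v2) {
  const a = v1.split(".").map(Number);
  const b = v2.split(".").map(Number);
  const len = Math.max(a.length, b.length);
  for (let i = 0; i < len; i++) {
    const x = a[i] || 0;
    const y = b[i] || 0;
    if (x > y) return 1;
    if (x < y) return -1;
  }
  return 0;
}

// In a Mini Program (wx.* only exists there):
// const { SDKVersion } = wx.getSystemInfoSync();
// if (compareVersion(SDKVersion, "3.7.1") < 0) {
//   wx.showToast({ title: "请升级微信版本", icon: "none" });
// }
```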
## Initialization
```js
// app.js
App({
  onLaunch: function () {
    wx.cloud.init({ env: "<YOUR_ENV_ID>" });
  },
});
```
## generateText() - Non-streaming
⚠️ Different from JS/Node SDK: Return value is raw model response.
```js
const model = wx.cloud.extend.AI.createModel("hunyuan-exp");

const res = await model.generateText({
  model: "hunyuan-2.0-instruct-20251111", // Recommended model
  messages: [{ role: "user", content: "你好" }],
});

// ⚠️ Return value is the RAW model response, NOT wrapped like the JS/Node SDK
console.log(res.choices[0].message.content); // Access via choices array
console.log(res.usage); // Token usage
```
## streamText() - Streaming
⚠️ Different from JS/Node SDK: parameters must be wrapped in a `data` object, and streaming callbacks are supported.
```js
const model = wx.cloud.extend.AI.createModel("hunyuan-exp");

// ⚠️ Parameters MUST be wrapped in a `data` object
const res = await model.streamText({
  data: { // ⚠️ Required wrapper
    model: "hunyuan-2.0-instruct-20251111", // Recommended model
    messages: [{ role: "user", content: "hi" }],
  },
  onText: (text) => { // Optional: incremental text callback
    console.log("New text:", text);
  },
  onEvent: ({ data }) => { // Optional: raw event callback
    console.log("Event:", data);
  },
  onFinish: (fullText) => { // Optional: completion callback
    console.log("Done:", fullText);
  },
});

// Async iteration is also available
for await (let str of res.textStream) {
  console.log(str);
}

// Check for completion with eventStream
for await (let event of res.eventStream) {
  console.log(event);
  if (event.data === "[DONE]") { // ⚠️ Check for [DONE] to stop
    break;
  }
}
```
## API Comparison: JS/Node SDK vs WeChat Mini Program
| Feature | JS/Node SDK | WeChat Mini Program |
|---|---|---|
| Namespace | SDK `ai` instance | `wx.cloud.extend.AI` |
| generateText params | Direct object | Direct object |
| generateText return | Wrapped result | Raw: `res.choices[0].message.content` |
| streamText params | Direct object | ⚠️ Wrapped in `data` |
| streamText return | `textStream` | `textStream` + `eventStream` |
| Callbacks | Not supported | `onText`, `onEvent`, `onFinish` |
| Image generation | Node SDK only | Not available |
## Type Definitions
### streamText() Input
```ts
interface WxStreamTextInput {
  data: { // ⚠️ Required wrapper object
    model: string;
    messages: Array<{
      role: "user" | "system" | "assistant";
      content: string;
    }>;
  };
  onText?: (text: string) => void; // Incremental text callback
  onEvent?: (prop: { data: string }) => void; // Raw event callback
  onFinish?: (text: string) => void; // Completion callback
}
```
### streamText() Return
```ts
interface WxStreamTextResult {
  textStream: AsyncIterable<string>; // Incremental text stream
  eventStream: AsyncIterable<{ // Raw event stream
    event?: unknown;
    id?: unknown;
    data: string; // "[DONE]" when complete
  }>;
}
```
### generateText() Return
```ts
// Raw model response (OpenAI-compatible format)
interface WxGenerateTextResponse {
  id: string;
  object: "chat.completion";
  created: number;
  model: string;
  choices: Array<{
    index: number;
    message: {
      role: "assistant";
      content: string;
    };
    finish_reason: string;
  }>;
  usage: {
    prompt_tokens: number;
    completion_tokens: number;
    total_tokens: number;
  };
}
```
## Best Practices
- Check base library version - Ensure 3.7.1+ for AI support
- Use callbacks for UI updates - `onText` is great for real-time display
- Check for [DONE] - When using `eventStream`, check `event.data === "[DONE]"` to stop
- Handle errors gracefully - Wrap AI calls in try/catch
- Remember the `data` wrapper - `streamText` params must be wrapped in `data: {...}`