Awesome-omni-skill glove
Expert guide for building AI-powered applications with the Glove framework. Use when working with glove-core, glove-react, glove-next, tools, display stack, model adapters, stores, or any Glove example project.
```sh
git clone https://github.com/diegosouzapw/awesome-omni-skill
```

```sh
T=$(mktemp -d) && git clone --depth=1 https://github.com/diegosouzapw/awesome-omni-skill "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/development/glove" ~/.claude/skills/diegosouzapw-awesome-omni-skill-glove && rm -rf "$T"
```
skills/development/glove/SKILL.md

Glove Framework — Development Guide
You are an expert on the Glove framework. Use this knowledge when writing, debugging, or reviewing Glove code.
What Glove Is
Glove is an open-source TypeScript framework for building AI-powered applications. Users describe what they want in conversation, and an AI decides which capabilities (tools) to invoke. Developers define tools and renderers; Glove handles the agent loop.
Repository: https://github.com/porkytheblack/glove
Docs site: https://glove.dterminal.net
License: MIT (dterminal)
Package Overview
| Package | Purpose | Install |
|---|---|---|
| glove-core | Runtime engine: agent loop, tool execution, display manager, model adapters, stores | `pnpm add glove-core` |
| glove-react | React bindings: `useGlove`, `GloveProvider`, `<Render>`, `defineTool`, `GloveClient`, with colocated renderers | `pnpm add glove-react` |
| glove-next | One-line Next.js API route handler (`createChatHandler`) for streaming SSE | `pnpm add glove-next` |

Most projects need just glove-react + glove-next; glove-core is included as a dependency of glove-react.
Architecture at a Glance
```
User message → Agent Loop → Model decides tool calls → Execute tools → Feed results back → Loop until done
                                 ↓
               Display Stack (pushAndWait / pushAndForget)
                                 ↓
                     React renders UI slots
```
Core Concepts
- Agent — AI coordinator that replaces router/navigation logic. Reads tools, decides which to call.
- Tool — A capability: name, description, inputSchema (Zod), `do` function, optional `render` + `renderResult`.
- Display Stack — Stack of UI slots tools push onto. `pushAndWait` blocks the tool; `pushAndForget` doesn't.
- Display Strategy — Controls slot visibility lifecycle: `"stay"`, `"hide-on-complete"`, `"hide-on-new"`.
- renderData — Client-only data returned from `do()` that is NOT sent to the AI model. Used by `renderResult` for history rendering.
- Adapter — Pluggable interfaces for Model, Store, DisplayManager, and Subscriber. Swap providers without changing app code.
- Context Compaction — Auto-summarizes long conversations to stay within context window limits. The store preserves full message history (so frontends can display the entire chat), while `Context.getMessages()` splits at the last compaction summary so the model only sees post-compaction context. Summary messages are marked with `is_compaction: true`.
Quick Start (Next.js)
1. Install
```sh
pnpm add glove-core glove-react glove-next zod
```
2. Server route
```ts
// app/api/chat/route.ts
import { createChatHandler } from "glove-next";

export const POST = createChatHandler({
  provider: "anthropic", // or "openai", "openrouter", "gemini", etc.
  model: "claude-sonnet-4-20250514",
});
```
Set `ANTHROPIC_API_KEY` (or `OPENAI_API_KEY`, etc.) in `.env.local`.
3. Define tools with defineTool
```tsx
// lib/glove.tsx
import { GloveClient, defineTool } from "glove-react";
import type { ToolConfig } from "glove-react";
import { z } from "zod";

const inputSchema = z.object({
  question: z.string().describe("The question to display"),
  options: z.array(z.object({
    label: z.string().describe("Display text"),
    value: z.string().describe("Value returned when selected"),
  })),
});

const askPreferenceTool = defineTool({
  name: "ask_preference",
  description: "Present options for the user to choose from.",
  inputSchema,
  displayPropsSchema: inputSchema,     // Zod schema for display props
  resolveSchema: z.string(),           // Zod schema for resolve value
  displayStrategy: "hide-on-complete", // Hide slot after user responds
  async do(input, display) {
    const selected = await display.pushAndWait(input); // typed!
    return {
      status: "success" as const,
      data: `User selected: ${selected}`,                 // sent to AI
      renderData: { question: input.question, selected }, // client-only
    };
  },
  render({ props, resolve }) { // typed props, typed resolve
    return (
      <div>
        <p>{props.question}</p>
        {props.options.map(opt => (
          <button key={opt.value} onClick={() => resolve(opt.value)}>
            {opt.label}
          </button>
        ))}
      </div>
    );
  },
  renderResult({ data }) { // renders from history
    const { question, selected } = data as { question: string; selected: string };
    return <div><p>{question}</p><span>Selected: {selected}</span></div>;
  },
});

// Tools without display stay as raw ToolConfig
const getDateTool: ToolConfig = {
  name: "get_date",
  description: "Get today's date",
  inputSchema: z.object({}),
  async do() {
    return { status: "success", data: new Date().toLocaleDateString() };
  },
};

export const gloveClient = new GloveClient({
  endpoint: "/api/chat",
  systemPrompt: "You are a helpful assistant.",
  tools: [askPreferenceTool, getDateTool],
});
```
4. Provider + Render
```tsx
// app/providers.tsx
"use client";
import { GloveProvider } from "glove-react";
import { gloveClient } from "@/lib/glove";

export function Providers({ children }: { children: React.ReactNode }) {
  return <GloveProvider client={gloveClient}>{children}</GloveProvider>;
}
```
```tsx
// app/page.tsx — using <Render> component
"use client";
import { useGlove, Render } from "glove-react";

export default function Chat() {
  const glove = useGlove();
  return (
    <Render
      glove={glove}
      strategy="interleaved"
      renderMessage={({ entry }) => (
        <div><strong>{entry.kind === "user" ? "You" : "AI"}:</strong> {entry.text}</div>
      )}
      renderStreaming={({ text }) => <div style={{ opacity: 0.7 }}>{text}</div>}
    />
  );
}
```
Or use `useGlove()` directly for full manual control:
```tsx
// app/page.tsx — manual rendering
"use client";
import { useState } from "react";
import { useGlove } from "glove-react";

export default function Chat() {
  const { timeline, streamingText, busy, slots, sendMessage, renderSlot, renderToolResult } = useGlove();
  const [input, setInput] = useState("");

  return (
    <div>
      {timeline.map((entry, i) => (
        <div key={i}>
          {entry.kind === "user" && <p><strong>You:</strong> {entry.text}</p>}
          {entry.kind === "agent_text" && <p><strong>AI:</strong> {entry.text}</p>}
          {entry.kind === "tool" && (
            <>
              <p>Tool: {entry.name} — {entry.status}</p>
              {entry.renderData !== undefined && renderToolResult(entry)}
            </>
          )}
        </div>
      ))}
      {streamingText && <p style={{ opacity: 0.7 }}>{streamingText}</p>}
      {slots.map(renderSlot)}
      <form onSubmit={(e) => { e.preventDefault(); sendMessage(input.trim()); setInput(""); }}>
        <input value={input} onChange={(e) => setInput(e.target.value)} disabled={busy} />
        <button type="submit" disabled={busy}>Send</button>
      </form>
    </div>
  );
}
```
Display Stack Patterns
pushAndForget — Show results (non-blocking)
```tsx
async do(input, display) {
  const data = await fetchData(input);
  await display.pushAndForget({ input: data }); // Shows UI, tool continues
  return { status: "success", data: "Displayed results", renderData: data };
},
render({ data }) {
  return <Card>{data.title}</Card>;
},
renderResult({ data }) {
  return <Card>{(data as any).title}</Card>; // Same card from history
},
```
pushAndWait — Collect user input (blocking)
```tsx
async do(input, display) {
  const confirmed = await display.pushAndWait({ input }); // Pauses until user responds
  return {
    status: "success",
    data: confirmed ? "Confirmed" : "Cancelled",
    renderData: { confirmed },
  };
},
render({ data, resolve }) {
  return (
    <div>
      <p>{data.message}</p>
      <button onClick={() => resolve(true)}>Yes</button>
      <button onClick={() => resolve(false)}>No</button>
    </div>
  );
},
renderResult({ data }) {
  const { confirmed } = data as { confirmed: boolean };
  return <div>{confirmed ? "Confirmed" : "Cancelled"}</div>;
},
```
Display Strategies
| Strategy | Behavior | Use for |
|---|---|---|
| `"stay"` (default) | Slot always visible | Info cards, results |
| `"hide-on-complete"` | Hidden when slot is resolved | Forms, confirmations, pickers |
| `"hide-on-new"` | Hidden when newer slot from same tool appears | Cart summaries, status panels |
SlotRenderProps
| Prop | Type | Description |
|---|---|---|
| `data` | `unknown` | Input passed to pushAndWait/pushAndForget |
| `resolve` | `(value: unknown) => void` | Resolves the slot. For pushAndWait, the value returns to `do()`. For pushAndForget, use `resolve()` or `reject()` to dismiss. |
| `reject` | `(reason?: unknown) => void` | Rejects the slot. For pushAndWait, this causes the promise to reject. Use for cancellation flows. |
Tool Definition
defineTool (recommended for tools with UI)

```tsx
import { defineTool } from "glove-react";

const tool = defineTool({
  name: string,
  description: string,
  inputSchema: z.ZodType,         // Zod schema for tool input
  displayPropsSchema?: z.ZodType, // Zod schema for display props (recommended for tools with UI)
  resolveSchema?: z.ZodType,      // Zod schema for resolve value (omit for pushAndForget-only)
  displayStrategy?: SlotDisplayStrategy,
  requiresPermission?: boolean,
  unAbortable?: boolean,          // Tool runs to completion even if abort signal fires (e.g. voice barge-in)
  do(input, display): Promise<ToolResultData>, // display is TypedDisplay<D, R>
  render?({ props, resolve, reject }): ReactNode,
  renderResult?({ data, output, status }): ReactNode,
});
```
Key points:

- `do()` should return `{ status, data, renderData }` — `data` goes to model, `renderData` stays client-only
- `render()` gets typed `props` (matching displayPropsSchema) and typed `resolve` (matching resolveSchema)
- `renderResult()` receives `renderData` for showing read-only views from history
- `displayPropsSchema` is optional but recommended — tools without display should use raw `ToolConfig`
ToolConfig (for tools without UI or manual control)

```ts
interface ToolConfig<I = any> {
  name: string;
  description: string;
  inputSchema: z.ZodType<I>;
  do: (input: I, display: ToolDisplay) => Promise<ToolResultData>;
  render?: (props: SlotRenderProps) => ReactNode;
  renderResult?: (props: ToolResultRenderProps) => ReactNode;
  displayStrategy?: SlotDisplayStrategy;
  requiresPermission?: boolean;
  unAbortable?: boolean;
}
```
ToolResultData
```ts
interface ToolResultData {
  status: "success" | "error";
  data: unknown;        // Sent to the AI model
  message?: string;     // Error message (for status: "error")
  renderData?: unknown; // Client-only — NOT sent to model, used by renderResult
}
```
Important: Model adapters explicitly strip `renderData` before sending to the AI. This makes it safe to store sensitive client-only data (e.g., email addresses, UI state) in `renderData`.
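As a minimal self-contained sketch of the success/error split, the `ToolResultData` shape is re-declared inline below and `toToolResult` is an invented helper name — only the field semantics come from the interface above:

```typescript
// ToolResultData re-declared inline (matches the interface above) so this
// sketch stands alone; toToolResult is a hypothetical helper, not a Glove API.
type ToolResultData = {
  status: "success" | "error";
  data: unknown;        // sent to the AI model
  message?: string;     // error detail, used with status: "error"
  renderData?: unknown; // client-only — stripped before reaching the model
};

function toToolResult(r: { ok: boolean; value?: string; error?: string }): ToolResultData {
  if (r.ok) {
    // data is what the model reasons over; renderData feeds renderResult only
    return { status: "success", data: r.value, renderData: { value: r.value } };
  }
  // error path: give the model a short data string, keep detail in message
  return { status: "error", data: "lookup failed", message: r.error };
}
```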
`<Render>` Component

Headless render component that replaces manual timeline rendering:
```tsx
import { Render } from "glove-react";

<Render
  glove={gloveHandle}    // return value of useGlove()
  strategy="interleaved" // "interleaved" | "slots-before" | "slots-after" | "slots-only"
  renderMessage={({ entry, index, isLast }) => ...}
  renderToolStatus={({ entry, index, hasSlot }) => ...}
  renderStreaming={({ text }) => ...}
  renderInput={({ send, busy, abort }) => ...}
  renderSlotContainer={({ slots, renderSlot }) => ...}
  as="div"               // wrapper element
  className="chat"
/>
```
Features:
- Automatic slot visibility based on `displayStrategy`
- Automatic `renderResult` rendering for completed tools with `renderData`
- Interleaving: slots appear inline next to their tool call
- Sensible defaults for all render props
GloveHandle Interface

The interface consumed by `<Render>`, returned by `useGlove()`:
```ts
interface GloveHandle {
  timeline: TimelineEntry[];
  streamingText: string;
  busy: boolean;
  slots: EnhancedSlot[];
  sendMessage: (text: string, images?: { data: string; media_type: string }[]) => void;
  abort: () => void;
  renderSlot: (slot: EnhancedSlot) => ReactNode;
  renderToolResult: (entry: ToolEntry) => ReactNode;
  resolveSlot: (slotId: string, value: unknown) => void;
  rejectSlot: (slotId: string, reason?: string) => void;
}
```
useGlove Hook Return
| Property | Type | Description |
|---|---|---|
| `timeline` | `TimelineEntry[]` | Messages + tool calls |
| `streamingText` | `string` | Current streaming buffer |
| `busy` | `boolean` | Agent is processing |
| `isCompacting` | `boolean` | Context compaction in progress (driven by `compaction_start` / `compaction_end` events) |
| `slots` | `EnhancedSlot[]` | Active display stack with metadata |
| `tasks` | | Agent task list |
| `sendMessage` | `(text, images?) => void` | Send user message |
| `abort` | `() => void` | Cancel current request |
| `renderSlot` | `(slot) => ReactNode` | Render a display slot |
| `renderToolResult` | `(entry) => ReactNode` | Render a tool result from history |
| `resolveSlot` | `(slotId, value) => void` | Resolve a pushAndWait slot |
| `rejectSlot` | `(slotId, reason?) => void` | Reject a pushAndWait slot |
TimelineEntry
```ts
type TimelineEntry =
  | { kind: "user"; text: string; images?: string[] }
  | { kind: "agent_text"; text: string }
  | { kind: "tool"; id: string; name: string; input: unknown;
      status: "running" | "success" | "error"; output?: string; renderData?: unknown };

type ToolEntry = Extract<TimelineEntry, { kind: "tool" }>;
```
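The `ToolEntry` extract is handy for narrowing a mixed timeline. A self-contained sketch — types copied from the definition above, sample entries invented:

```typescript
// Types copied from the TimelineEntry definition above.
type TimelineEntry =
  | { kind: "user"; text: string; images?: string[] }
  | { kind: "agent_text"; text: string }
  | { kind: "tool"; id: string; name: string; input: unknown;
      status: "running" | "success" | "error"; output?: string; renderData?: unknown };
type ToolEntry = Extract<TimelineEntry, { kind: "tool" }>;

// Invented sample timeline for illustration.
const timeline: TimelineEntry[] = [
  { kind: "user", text: "what day is it?" },
  { kind: "tool", id: "t1", name: "get_date", input: {}, status: "success", output: "1/1/2025" },
  { kind: "agent_text", text: "It's January 1st." },
];

// Narrow to completed tool calls, e.g. to decide which entries get renderToolResult()
const completedTools = timeline.filter(
  (e): e is ToolEntry => e.kind === "tool" && e.status === "success"
);
```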
Supported Providers
| Provider | Env Variable | Default Model | SDK Format |
|---|---|---|---|
| openai | `OPENAI_API_KEY` | | openai |
| anthropic | `ANTHROPIC_API_KEY` | | anthropic |
| openrouter | `OPENROUTER_API_KEY` | | openai |
| gemini | `GEMINI_API_KEY` | | openai |
| | | | openai |
| | | | openai |
| | | | openai |
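Switching provider only changes the handler options. A sketch reusing the Quick Start route — "openrouter" is among the providers mentioned earlier, but the model string below is a placeholder, not a documented default:

```ts
// app/api/chat/route.ts — sketch; pick a model your provider actually serves
import { createChatHandler } from "glove-next";

export const POST = createChatHandler({
  provider: "openrouter",
  model: "your-model-id-here", // placeholder
});
```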
Pre-built Tool Registry
Available at https://glove.dterminal.net/tools — copy-paste into your project:
- `confirm_action` — Yes/No confirmation dialog
- `collect_form` — Multi-field form
- `ask_preference` — Single-select preference picker
- `text_input` — Free-text input
- `show_info_card` — Info/success/warning card (pushAndForget)
- `suggest_options` — Multiple-choice suggestions
- `approve_plan` — Step-by-step plan approval
Voice Integration (glove-voice)

Package Overview
| Package | Purpose | Install |
|---|---|---|
| glove-voice | Voice pipeline: `GloveVoice`, adapters (STT/TTS/VAD), `SentenceBuffer` | `pnpm add glove-voice` |
| glove-voice | React hook: `useGloveVoice` | Included in glove-voice |
| glove-next | Token handlers: `createVoiceTokenHandler` (already in glove-next, no separate import) | Included in glove-next |
Architecture
```
Mic → VAD → STTAdapter → glove.processRequest() → TTSAdapter → Speaker
```
GloveVoice wraps a Glove instance with a full-duplex voice pipeline. Glove remains the intelligence layer — all tools, display stack, and context management work normally. STT and TTS are swappable adapters. Text tokens stream through a SentenceBuffer into TTS in real-time.
Quick Start (Next.js + ElevenLabs)
Step 1: Token routes — server-side handlers that exchange your API key for short-lived tokens
```ts
// app/api/voice/stt-token/route.ts
import { createVoiceTokenHandler } from "glove-next";

export const GET = createVoiceTokenHandler({ provider: "elevenlabs", type: "stt" });
```
```ts
// app/api/voice/tts-token/route.ts
import { createVoiceTokenHandler } from "glove-next";

export const GET = createVoiceTokenHandler({ provider: "elevenlabs", type: "tts" });
```
Set `ELEVENLABS_API_KEY` in `.env.local`.
Step 2: Client voice config
```ts
// app/lib/voice.ts
import { createElevenLabsAdapters } from "glove-voice";

async function fetchToken(path: string): Promise<string> {
  const res = await fetch(path);
  const data = await res.json();
  return data.token;
}

export const { stt, createTTS } = createElevenLabsAdapters({
  getSTTToken: () => fetchToken("/api/voice/stt-token"),
  getTTSToken: () => fetchToken("/api/voice/tts-token"),
  voiceId: "JBFqnCBsd6RMkjVDRZzb",
});
```
Step 3: SileroVAD — dynamic import for SSR safety
```ts
export async function createSileroVAD() {
  const { SileroVADAdapter } = await import("glove-voice/silero-vad");
  const vad = new SileroVADAdapter({
    positiveSpeechThreshold: 0.5,
    negativeSpeechThreshold: 0.35,
    wasm: { type: "cdn" },
  });
  await vad.init();
  return vad;
}
```
Step 4: React hook
```ts
const { runnable } = useGlove({ tools, sessionId });
const voice = useGloveVoice({ runnable, voice: { stt, createTTS, vad } });

// voice.mode, voice.isActive, voice.isMuted, voice.error, voice.transcript
// voice.start(), voice.stop(), voice.interrupt(), voice.commitTurn()
// voice.mute(), voice.unmute() — gate mic audio to STT/VAD
// voice.narrate("text") — speak text via TTS without model (returns Promise)
```
Turn Modes
| Mode | Behavior | Use for |
|---|---|---|
| `"vad"` (default) | Auto speech detection + barge-in | Hands-free, voice-first apps |
| `"manual"` | Push-to-talk, explicit `commitTurn()` | Noisy environments, precise control |
Narration + Mic Control
- `voice.narrate(text)` — Speak arbitrary text through TTS without the model. Resolves when audio finishes. Auto-mutes mic during narration. Abortable via `interrupt()`. Safe to call from `pushAndWait` tool handlers.
- `voice.mute()` / `voice.unmute()` — Gate mic audio forwarding to STT/VAD. `audio_chunk` events still fire when muted (for visualization).
- `audio_chunk` event — Raw `Int16Array` PCM from the mic, emitted even when muted. Use for waveform/level visualization.
- Compaction silence — Voice automatically ignores `text_delta` during context compaction so the summary is never narrated.
Voice-First Tool Design
- Use `pushAndForget` instead of `pushAndWait` — blocking tools that wait for clicks are unusable in voice mode
- Return descriptive text in `data` — the LLM reads it to formulate spoken responses
- Add a voice-specific system prompt — instruct the agent to narrate results concisely
- Use `narrate()` for slot narration — read display content aloud from within tool handlers
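The guidelines above combine into tools like the following sketch — a plain object in the `ToolConfig` shape with the tool name, weather data, and display stub invented; the Zod `inputSchema` is omitted so the sketch stays dependency-free, but a real tool would include it:

```typescript
// Hypothetical voice-friendly tool: non-blocking display + a descriptive
// `data` sentence the model can narrate. Not a real Glove registry tool.
const showWeatherTool = {
  name: "show_weather",
  description: "Display current weather for a city",
  displayStrategy: "hide-on-new" as const, // newer panels replace stale ones
  async do(
    input: { city: string },
    display: { pushAndForget: (x: unknown) => Promise<void> }
  ) {
    const weather = { tempC: 21, summary: "partly cloudy" }; // placeholder fetch
    await display.pushAndForget({ input: weather }); // voice flow continues immediately
    return {
      status: "success" as const,
      // Descriptive sentence → the LLM can read it aloud verbatim or rephrase
      data: `It is ${weather.tempC}°C and ${weather.summary} in ${input.city}.`,
      renderData: weather, // client-only, for the visual card
    };
  },
};
```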
Supported Voice Providers
| Provider | Token Handler Config | Env Variable |
|---|---|---|
| ElevenLabs | `provider: "elevenlabs"` | `ELEVENLABS_API_KEY` |
| Deepgram | `provider: "deepgram"` | `DEEPGRAM_API_KEY` |
| Cartesia | `provider: "cartesia"` | `CARTESIA_API_KEY` |
Supporting Files
For detailed API reference, see api-reference.md. For example patterns from real implementations, see examples.md.
Common Gotchas
- model_response_complete vs model_response: Streaming adapters emit `model_response_complete`, not `model_response`. Subscribers must handle both.
- Closure capture in React hooks: When re-keying sessions, use a mutable `let currentKey = key` to avoid stale closures.
- React useEffect timing: State updates don't take effect in the same render cycle — guard with early returns.
- Browser-safe imports: `glove-core` barrel exports include native deps (better-sqlite3). For browser code, import from subpaths: `glove-core/core`, `glove-core/glove`, `glove-core/display-manager`, `glove-core/tools/task-tool`.
- `Displaymanager` casing: The concrete class is `Displaymanager` (lowercase 'm'), not `DisplayManager`. Import it as: `import { Displaymanager } from "glove-core/display-manager"`.
- `createAdapter` stream default: `stream` defaults to `true`, not `false`. Pass `stream: false` explicitly if you want synchronous responses.
- Tool return values: The `do` function should return `ToolResultData` with `{ status, data, renderData? }`. `data` goes to the AI; `renderData` stays client-only.
- Zod `.describe()`: Always add `.describe()` to schema fields — the AI reads these descriptions to understand what to provide.
- `displayPropsSchema` is optional but recommended: `defineTool`'s `displayPropsSchema` is optional, but recommended for tools with display UI — tools without display should use raw `ToolConfig` instead.
- `renderData` is stripped by model adapters: Model adapters explicitly exclude `renderData` when formatting tool results for the AI, so it's safe for client-only data.
- SileroVAD must use dynamic import: Never import `glove-voice/silero-vad` at module level in Next.js/SSR. Use `await import("glove-voice/silero-vad")` to avoid pulling WASM into the server bundle.
- Next.js transpilePackages: Add `"glove-voice"` to `transpilePackages` in `next.config.ts` so Next.js processes the ES module.
- createTTS must be a factory: `GloveVoice` calls it once per turn to get a fresh TTS adapter. Pass `() => new ElevenLabsTTSAdapter(...)`, not a single instance.
- Barge-in protection requires `unAbortable`: A `pushAndWait` resolver suppresses voice barge-in at the trigger level (GloveVoice skips `interrupt()` when `resolverStore.size > 0`). But that alone doesn't protect the tool — if `interrupt()` is called by other means, only `unAbortable: true` on the tool guarantees it runs to completion despite the abort signal. Use both together for mutation-critical tools like checkout. Use `pushAndForget` for voice-first tools.
- Empty committed transcripts: ElevenLabs Scribe may return empty committed transcripts for short utterances. The adapter auto-falls back to the last partial transcript.
- TTS idle timeout: ElevenLabs TTS WebSocket disconnects after ~20s idle. GloveVoice handles this by closing TTS after each `model_response_complete` and opening a fresh session on the next `text_delta`.
- onnxruntime-web build warnings: `Critical dependency: require function is used in a way...` warnings from onnxruntime-web are expected and harmless.
- Audio sample rate: All adapters must agree on 16kHz mono PCM (the default). Don't change this unless your provider explicitly requires something different.
- `narrate()` auto-mutes mic: `voice.narrate()` automatically mutes the mic during playback to prevent TTS audio from feeding back into STT/VAD. It restores the previous mute state when done.
- `narrate()` needs a started pipeline: Calling `narrate()` before `voice.start()` throws. The TTS factory and AudioPlayer must be initialized.
- Voice auto-silences during compaction: When context compaction is triggered, the voice pipeline ignores all `text_delta` events between `compaction_start` and `compaction_end`. The compaction summary is never narrated.
- `isCompacting` for React UI feedback: `GloveState.isCompacting` is `true` while compaction is in progress. Use it to show a loading indicator or disable input during compaction.
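To make the barge-in gotcha concrete, here is a sketch of a mutation-critical tool combining both protections — the tool name, input shape, and display stub are illustrative, not the real API surface:

```typescript
// Hypothetical checkout tool: the pending pushAndWait resolver suppresses
// voice barge-in, and unAbortable: true guarantees completion even if
// interrupt() fires through some other path.
const checkoutTool = {
  name: "confirm_checkout",
  description: "Confirm and place the order",
  unAbortable: true, // runs to completion despite an abort signal
  async do(
    input: { total: number },
    display: { pushAndWait: (x: unknown) => Promise<unknown> }
  ) {
    // While this resolver is pending, GloveVoice skips interrupt()
    const confirmed = await display.pushAndWait({ input });
    return {
      status: "success" as const,
      data: confirmed ? `Order placed for $${input.total}.` : "Checkout cancelled.",
      renderData: { confirmed },
    };
  },
};
```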