Claude-skill-registry gemini-api
Patterns for using Google Gemini API with structured output, JSON mode, and proper configuration. Apply when implementing AI features, text generation, or working with Gemini models.
install

source · Clone the upstream repo

```shell
git clone https://github.com/majiayu000/claude-skill-registry
```

Claude Code · Install into ~/.claude/skills/

```shell
T=$(mktemp -d) && git clone --depth=1 https://github.com/majiayu000/claude-skill-registry "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/data/gemini-api" ~/.claude/skills/majiayu000-claude-skill-registry-gemini-api && rm -rf "$T"
```
manifest: `skills/data/gemini-api/SKILL.md`
Gemini API Patterns
SDK Setup
Use the `@google/genai` package with Expo Constants for API key management:

```typescript
import { GoogleGenAI } from '@google/genai';
import Constants from 'expo-constants';

const genAI = new GoogleGenAI({
  apiKey: Constants.expoConfig?.extra?.geminiApiKey as string,
});
```
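Outside Expo (scripts, servers, tests), the key typically comes from the `GEMINI_API_KEY` environment variable instead. A minimal sketch of a resolver that checks both sources; the `resolveApiKey` name is ours, not part of the SDK:

```typescript
// Resolve the Gemini API key from Expo config extras or the environment.
// Illustrative helper; adapt to your own configuration strategy.
function resolveApiKey(
  expoExtra: Record<string, unknown> | undefined,
  env: Record<string, string | undefined>
): string {
  const key =
    (expoExtra?.geminiApiKey as string | undefined) ?? env.GEMINI_API_KEY;
  if (!key) {
    throw new Error('Missing Gemini API key');
  }
  return key;
}
```

Usage: `new GoogleGenAI({ apiKey: resolveApiKey(Constants.expoConfig?.extra, process.env) })`.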
Available Models
| Model | ID | Best For |
|---|---|---|
| Gemini 3 Pro | `gemini-3-pro-preview` | Advanced reasoning, complex tasks |
| Gemini 3 Flash | `gemini-3-flash-preview` | Balanced speed/intelligence |
| Gemini 2.5 Flash | `gemini-2.5-flash` | Price-performance, scale |
| Gemini 2.5 Flash-Lite | `gemini-2.5-flash-lite` | High-throughput, cost-efficient |
| Gemini 2.5 Pro | `gemini-2.5-pro` | Complex reasoning, code, math |
All models support 1M input tokens and 65K output tokens.
Basic Text Generation
```typescript
const response = await genAI.models.generateContent({
  model: 'gemini-3-flash-preview',
  contents: 'Your prompt here',
});
const text = response.text ?? '';
```
Structured Content Format
For multi-turn or complex inputs, use the full contents structure:
```typescript
const response = await genAI.models.generateContent({
  model: 'gemini-3-flash-preview',
  contents: [
    { role: 'user', parts: [{ text: 'First message' }] },
    { role: 'model', parts: [{ text: 'Assistant response' }] },
    { role: 'user', parts: [{ text: 'Follow-up question' }] },
  ],
});
```
System Instructions
Guide model behavior with system instructions in the config:
```typescript
const response = await genAI.models.generateContent({
  model: 'gemini-3-flash-preview',
  contents: [{ role: 'user', parts: [{ text: userMessage }] }],
  config: {
    systemInstruction:
      'You are a helpful language tutor. Respond in a friendly, encouraging tone.',
  },
});
```
JSON Mode (Structured Output)
Request JSON responses for type-safe parsing:
```typescript
async function generateJSON<T>(prompt: string, model: string): Promise<T> {
  const result = await genAI.models.generateContent({
    model,
    contents: [{ role: 'user', parts: [{ text: prompt }] }],
    config: {
      responseMimeType: 'application/json',
    },
  });
  const text = result.text;
  if (!text) {
    // JSON.parse('') would throw a cryptic SyntaxError; fail explicitly instead
    throw new Error('Empty response from Gemini');
  }
  return JSON.parse(text) as T;
}
```
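With `responseMimeType` set, Gemini returns bare JSON, but when reusing prompts without JSON mode the model may wrap output in Markdown code fences. A fence-tolerant parsing sketch; the `safeParseJSON` helper is our own, not part of `@google/genai`:

```typescript
// Strip an optional Markdown code fence (```json ... ```) before parsing.
// Hypothetical helper, not an SDK API.
function safeParseJSON<T>(raw: string): T {
  const trimmed = raw.trim();
  const fenced = trimmed.match(/^```(?:json)?\s*([\s\S]*?)\s*```$/);
  const body = fenced ? fenced[1] : trimmed;
  return JSON.parse(body) as T;
}
```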
With JSON Schema (Zod)
For strict schema validation, describe the output shape with the SDK's `responseSchema` (which expects the SDK's own `Schema` format via the `Type` enum, not a Zod schema directly), then validate the parsed result with Zod:

```typescript
import { z } from 'zod';
import { Type } from '@google/genai';

// Zod schema for runtime validation of the parsed response
const CorrectionSchema = z.object({
  correction: z.string(),
  feedback: z.string(),
});

const result = await genAI.models.generateContent({
  model: 'gemini-3-flash-preview',
  contents: prompt,
  config: {
    responseMimeType: 'application/json',
    responseSchema: {
      type: Type.OBJECT,
      properties: {
        correction: { type: Type.STRING },
        feedback: { type: Type.STRING },
      },
      required: ['correction', 'feedback'],
    },
  },
});

// Confirm the output actually matches the expected shape
const correction = CorrectionSchema.parse(JSON.parse(result.text ?? '{}'));
```
Configuration Options
```typescript
const response = await genAI.models.generateContent({
  model: 'gemini-3-flash-preview',
  contents: prompt,
  config: {
    temperature: 1.0,       // Randomness (keep at 1.0 for Gemini 3)
    topP: 0.95,             // Nucleus sampling
    topK: 40,               // Top-k sampling
    maxOutputTokens: 8192,  // Limit response length
    stopSequences: ['END'], // Stop generation triggers
    systemInstruction: '...',             // System prompt
    responseMimeType: 'application/json', // Force JSON output
  },
});
```
Temperature Warning
For Gemini 3 models, keep temperature at 1.0 (the default). Lowering it can cause:
- Response looping
- Degraded performance on complex tasks
- Unexpected behavior in reasoning
Error Handling Pattern
```typescript
async function generateText(prompt: string, model: string): Promise<string> {
  try {
    const response = await genAI.models.generateContent({
      model,
      contents: prompt,
    });
    if (!response) {
      throw new Error('No response from Gemini');
    }
    return response.text ?? '';
  } catch (error) {
    console.error('Error generating text:', error);
    return '';
  }
}
```
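Transient failures (rate limits, 5xx responses) are usually worth retrying rather than swallowing. A sketch of a generic retry wrapper with exponential backoff that could wrap the call above; the `withRetry` name and delay values are illustrative, not SDK APIs:

```typescript
// Retry an async operation with exponential backoff between attempts.
// Illustrative helper, not part of @google/genai.
async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 500
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (error) {
      lastError = error;
      // Wait baseDelayMs, 2x, 4x, ... before the next attempt
      await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** i));
    }
  }
  throw lastError;
}
```

Usage: `await withRetry(() => generateText(prompt, model))`.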
Multi-Turn Chat
Maintain conversation history:
```typescript
async function chat(
  systemPrompt: string,
  messages: Array<{ role: 'user' | 'model'; text: string }>,
  model: string
): Promise<string> {
  const contents = messages.map((msg) => ({
    role: msg.role,
    parts: [{ text: msg.text }],
  }));
  const result = await genAI.models.generateContent({
    model,
    contents,
    config: {
      systemInstruction: systemPrompt,
    },
  });
  return result.text ?? '';
}
```
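Long conversations eventually press against the context window; a common tactic is to keep only the most recent turns. A sketch of a trimming helper using the same message shape as `chat` above; the `trimHistory` name and default window size are our own choices:

```typescript
type ChatMessage = { role: 'user' | 'model'; text: string };

// Keep the last `maxTurns` messages, then drop any leading 'model'
// messages so the history always starts with a user turn.
// Illustrative helper, not part of @google/genai.
function trimHistory(messages: ChatMessage[], maxTurns = 20): ChatMessage[] {
  let recent = messages.slice(-maxTurns);
  while (recent.length > 0 && recent[0].role === 'model') {
    recent = recent.slice(1);
  }
  return recent;
}
```

Usage: `await chat(systemPrompt, trimHistory(fullHistory), model)`.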
Best Practices
- Always handle null/undefined: Use `response.text ?? ''` for safe access
- Type your JSON responses: Use generics with the `JSON.parse()` result
- Keep Gemini 3 temperature at 1.0: Avoid performance degradation
- Validate JSON output: Structured format doesn't guarantee semantic correctness
- Choose appropriate model: Use Flash for speed, Pro for complex reasoning
Common Mistakes to Avoid
- Don't lower temperature below 1.0 for Gemini 3 models
- Don't assume JSON responses are semantically valid - always validate
- Don't forget to handle the case where `response.text` is undefined
- Don't use raw string concatenation for multi-turn - use a proper contents array