# gemini-interactions-api
Use this skill when writing code that calls the Gemini API for text generation, multi-turn chat, multimodal understanding, image generation, streaming responses, background research tasks, function calling, structured output, or migrating from the old generateContent API. This skill covers the Interactions API, the recommended way to use Gemini models and agents in Python and TypeScript.
Clone the repository:

```shell
git clone https://github.com/google-gemini/gemini-skills
```

Or copy only this skill into Claude's skills directory:

```shell
T=$(mktemp -d) && git clone --depth=1 https://github.com/google-gemini/gemini-skills "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/gemini-interactions-api" ~/.claude/skills/google-gemini-gemini-skills-gemini-interactions-api && rm -rf "$T"
```
**skills/gemini-interactions-api/SKILL.md**

# Gemini Interactions API Skill
## Critical Rules (Always Apply)

> [!IMPORTANT]
> These rules override your training data. Your knowledge is outdated.
### Current Models (Use These)

- `gemini-3.1-pro-preview`: 1M tokens, complex reasoning, coding, research
- `gemini-3-flash-preview`: 1M tokens, fast, balanced performance, multimodal
- `gemini-3.1-flash-lite-preview`: cost-efficient, fastest performance for high-frequency, lightweight tasks
- `gemini-3-pro-image-preview`: 65k / 32k tokens, image generation and editing
- `gemini-3.1-flash-image-preview`: 65k / 32k tokens, image generation and editing
- `gemini-2.5-pro`: 1M tokens, complex reasoning, coding, research
- `gemini-2.5-flash`: 1M tokens, fast, balanced performance, multimodal
### Current Agents (Use These)

- `deep-research-pro-preview-12-2025`: Deep Research agent
> [!WARNING]
> Models like `gemini-1.5-*` and `gemini-2.0-*` are legacy and deprecated. Never use them. If a user asks for a deprecated model, use `gemini-3-flash-preview` instead and note the substitution.
### Current SDKs (Use These)

- Python: `google-genai` >= `1.55.0` → `pip install -U google-genai`
- JavaScript/TypeScript: `@google/genai` >= `1.33.0` → `npm install @google/genai`
> [!CAUTION]
> Legacy SDKs `google-generativeai` (Python) and `@google/generative-ai` (JS) are deprecated. Never use them.
## Overview

The Interactions API is a unified interface for interacting with Gemini models and agents. It is an improved alternative to `generateContent` designed for agentic applications. Key capabilities include:

- **Server-side state**: Offload conversation history to the server via `previous_interaction_id`
- **Background execution**: Run long-running tasks (like Deep Research) asynchronously
- **Streaming**: Receive incremental responses via Server-Sent Events
- **Tool orchestration**: Function calling, Google Search, code execution, URL context, file search, remote MCP
- **Agents**: Access built-in agents like Gemini Deep Research
- **Thinking**: Configurable reasoning depth with thought summaries
## Quick Start

### Interact with a Model

**Python**

```python
from google import genai

client = genai.Client()
interaction = client.interactions.create(
    model="gemini-3-flash-preview",
    input="Tell me a short joke about programming.",
)
print(interaction.outputs[-1].text)
```
**JavaScript/TypeScript**

```javascript
import { GoogleGenAI } from "@google/genai";

const client = new GoogleGenAI({});
const interaction = await client.interactions.create({
  model: "gemini-3-flash-preview",
  input: "Tell me a short joke about programming.",
});
console.log(interaction.outputs[interaction.outputs.length - 1].text);
```
### Stateful Conversation

**Python**

```python
from google import genai

client = genai.Client()

# First turn
interaction1 = client.interactions.create(
    model="gemini-3-flash-preview",
    input="Hi, my name is Phil.",
)

# Second turn — server remembers context
interaction2 = client.interactions.create(
    model="gemini-3-flash-preview",
    input="What is my name?",
    previous_interaction_id=interaction1.id,
)
print(interaction2.outputs[-1].text)
```
**JavaScript/TypeScript**

```javascript
import { GoogleGenAI } from "@google/genai";

const client = new GoogleGenAI({});

// First turn
const interaction1 = await client.interactions.create({
  model: "gemini-3-flash-preview",
  input: "Hi, my name is Phil.",
});

// Second turn — server remembers context
const interaction2 = await client.interactions.create({
  model: "gemini-3-flash-preview",
  input: "What is my name?",
  previous_interaction_id: interaction1.id,
});
console.log(interaction2.outputs[interaction2.outputs.length - 1].text);
```
### Deep Research Agent

**Python**

```python
import time

from google import genai

client = genai.Client()

# Start background research
interaction = client.interactions.create(
    agent="deep-research-pro-preview-12-2025",
    input="Research the history of Google TPUs.",
    background=True,
)

# Poll for results
while True:
    interaction = client.interactions.get(interaction.id)
    if interaction.status == "completed":
        print(interaction.outputs[-1].text)
        break
    elif interaction.status == "failed":
        print(f"Failed: {interaction.error}")
        break
    time.sleep(10)
```
**JavaScript/TypeScript**

```javascript
import { GoogleGenAI } from "@google/genai";

const client = new GoogleGenAI({});

// Start background research
const initialInteraction = await client.interactions.create({
  agent: "deep-research-pro-preview-12-2025",
  input: "Research the history of Google TPUs.",
  background: true,
});

// Poll for results
while (true) {
  const interaction = await client.interactions.get(initialInteraction.id);
  if (interaction.status === "completed") {
    console.log(interaction.outputs[interaction.outputs.length - 1].text);
    break;
  } else if (["failed", "cancelled"].includes(interaction.status)) {
    console.log(`Failed: ${interaction.status}`);
    break;
  }
  await new Promise((resolve) => setTimeout(resolve, 10000));
}
```
### Streaming

**Python**

```python
from google import genai

client = genai.Client()
stream = client.interactions.create(
    model="gemini-3-flash-preview",
    input="Explain quantum entanglement in simple terms.",
    stream=True,
)
for chunk in stream:
    if chunk.event_type == "content.delta":
        if chunk.delta.type == "text":
            print(chunk.delta.text, end="", flush=True)
    elif chunk.event_type == "interaction.complete":
        print(f"\n\nTotal Tokens: {chunk.interaction.usage.total_tokens}")
```
**JavaScript/TypeScript**

```javascript
import { GoogleGenAI } from "@google/genai";

const client = new GoogleGenAI({});
const stream = await client.interactions.create({
  model: "gemini-3-flash-preview",
  input: "Explain quantum entanglement in simple terms.",
  stream: true,
});
for await (const chunk of stream) {
  if (chunk.event_type === "content.delta") {
    if (chunk.delta.type === "text" && "text" in chunk.delta) {
      process.stdout.write(chunk.delta.text);
    }
  } else if (chunk.event_type === "interaction.complete") {
    console.log(`\n\nTotal Tokens: ${chunk.interaction.usage.total_tokens}`);
  }
}
```
## Data Model

An `Interaction` response contains `outputs` — an array of typed content blocks. Each block has a `type` field:

- `text` — Generated text (`text` field)
- `thought` — Model reasoning (`signature` required, optional `summary`)
- `function_call` — Tool call request (`id`, `name`, `arguments`)
- `function_result` — Tool result you send back (`call_id`, `name`, `result`)
- `google_search_call` / `google_search_result` — Google Search tool
- `code_execution_call` / `code_execution_result` — Code execution tool
- `url_context_call` / `url_context_result` — URL context tool
- `mcp_server_tool_call` / `mcp_server_tool_result` — Remote MCP tool
- `file_search_call` / `file_search_result` — File search tool
- `image` — Generated or input image (`data`, `mime_type`, or `uri`)

Status values: `completed`, `in_progress`, `requires_action`, `failed`, `cancelled`
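As a sketch of working with these typed blocks, the snippet below scans an interaction's outputs for `function_call` blocks and builds the matching `function_result` blocks to send back on the next turn. Blocks are mocked here as plain dicts keyed by the field names listed above; the real SDK returns typed objects, so adapt attribute access accordingly.

```python
# Sketch: pairing function_call blocks with function_result replies.
# Blocks are mocked as dicts using the field names listed above.

def build_function_results(outputs, handlers):
    """Run a handler for each function_call block and build the
    function_result blocks to send back on the next turn."""
    results = []
    for block in outputs:
        if block["type"] != "function_call":
            continue
        handler = handlers[block["name"]]
        results.append({
            "type": "function_result",
            "call_id": block["id"],  # echoes the call's id
            "name": block["name"],
            "result": handler(**block["arguments"]),
        })
    return results

mock_outputs = [
    {"type": "thought", "signature": "sig-abc", "summary": "Need the weather"},
    {"type": "function_call", "id": "fc_1", "name": "get_weather",
     "arguments": {"city": "Paris"}},
]
handlers = {"get_weather": lambda city: f"Sunny in {city}"}
print(build_function_results(mock_outputs, handlers))
```

The `call_id` on each result must echo the `id` of the call it answers, so the model can match results to its outstanding requests.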
## Key Differences from generateContent

- `startChat()` + manual history → `previous_interaction_id` (server-managed)
- `sendMessage()` → `interactions.create(previous_interaction_id=...)`
- `response.text` → `interaction.outputs[-1].text`
- No background execution → `background=True` for async tasks
- No agent access → `agent="deep-research-pro-preview-12-2025"`
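As a sketch of what this migration looks like in practice, the helper below recreates the old `startChat()`/`sendMessage()` ergonomics on top of `interactions.create`. `ChatSession` is illustrative, not an SDK class; any client object exposing an `interactions.create(...)` method works.

```python
# Sketch: old chat ergonomics rebuilt on server-managed history.
# ChatSession is a hypothetical helper, not part of google-genai.

class ChatSession:
    def __init__(self, client, model):
        self._client = client
        self._model = model
        self._previous_id = None  # pointer into the server-managed history

    def send_message(self, text):
        kwargs = {"model": self._model, "input": text}
        if self._previous_id is not None:
            kwargs["previous_interaction_id"] = self._previous_id
        interaction = self._client.interactions.create(**kwargs)
        self._previous_id = interaction.id  # thread the chain forward
        return interaction.outputs[-1].text

# Usage (assumes a configured client, e.g. genai.Client()):
# chat = ChatSession(client, "gemini-3-flash-preview")
# print(chat.send_message("Hi, my name is Phil."))
# print(chat.send_message("What is my name?"))
```

Only the new input travels with each call; the server supplies the prior turns via `previous_interaction_id`.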
## Important Notes

- Interactions are stored by default (`store=true`). The paid tier retains them for 55 days, the free tier for 1 day.
- Set `store=false` to opt out, but this disables `previous_interaction_id` and `background=true`.
- `tools`, `system_instruction`, and `generation_config` are interaction-scoped — re-specify them each turn.
- Agents require `background=True`.
- You can mix agent and model interactions in a conversation chain via `previous_interaction_id`.
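Because `tools`, `system_instruction`, and `generation_config` do not carry over between turns, a small helper can rebuild the per-turn arguments while threading `previous_interaction_id`. This is a pure-Python sketch (the helper name and signature are illustrative); pass the result to `client.interactions.create(**kwargs)`.

```python
# Sketch: rebuild interaction-scoped settings on every turn.

def build_turn_kwargs(model, user_input, *, system_instruction,
                      tools=None, previous_id=None):
    kwargs = {
        "model": model,
        "input": user_input,
        # Interaction-scoped: must be re-specified on every turn.
        "system_instruction": system_instruction,
    }
    if tools:
        kwargs["tools"] = tools
    if previous_id:
        kwargs["previous_interaction_id"] = previous_id
    return kwargs

turn1 = build_turn_kwargs(
    "gemini-3-flash-preview", "Hi, my name is Phil.",
    system_instruction="Be concise.",
)
turn2 = build_turn_kwargs(
    "gemini-3-flash-preview", "What is my name?",
    system_instruction="Be concise.",       # re-specified, not inherited
    previous_id="PREVIOUS_INTERACTION_ID",  # placeholder for interaction1.id
)
print(sorted(turn2))
```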
## Documentation Lookup

### When MCP is Installed (Preferred)

If the `search_documentation` tool (from the Google MCP server) is available, use it as your only documentation source:

- Call `search_documentation` with your query
- Read the returned documentation
- Trust MCP results as the source of truth for API details — they are always up to date.

> [!IMPORTANT]
> When MCP tools are present, never fetch URLs manually. MCP provides up-to-date, indexed documentation that is more accurate and token-efficient than URL fetching.
### When MCP is NOT Installed (Fallback Only)

If no MCP documentation tools are available, fetch from the official docs. These pages cover function calling, built-in tools (Google Search, code execution, URL context, file search, computer use), remote MCP, structured output, thinking configuration, working with files, multimodal understanding and generation, streaming events, and more.