Skills mistral-api
install
source · Clone the upstream repo
git clone https://github.com/TerminalSkills/skills
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/TerminalSkills/skills "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/mistral-api" ~/.claude/skills/terminalskills-skills-mistral-api && rm -rf "$T"
manifest:
skills/mistral-api/SKILL.md
safety · automated scan (medium risk)
This is a pattern-based risk scan, not a security review. Our crawler flagged:
- pip install
- references .env files
- references API keys
Always read a skill's source content before installing. Patterns alone don't mean the skill is malicious — but they warrant attention.
source content
Mistral AI API
Overview
Mistral AI is a French AI company providing high-quality, cost-efficient language models with EU data residency and GDPR compliance. Their models excel at code generation (Codestral), multilingual tasks, and reasoning. Mistral's API follows OpenAI conventions closely, making integration straightforward.
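To make the "follows OpenAI conventions" point concrete, here is a sketch of the raw wire format: the request body below has the same shape the OpenAI chat completions endpoint accepts, and you could POST it to Mistral's endpoint with any HTTP client using an `Authorization: Bearer $MISTRAL_API_KEY` header. (Only the payload construction runs here; no request is sent.)

```python
import json

# Mistral exposes an OpenAI-style chat completions endpoint; only the
# host and the API key differ from an OpenAI request.
MISTRAL_CHAT_URL = "https://api.mistral.ai/v1/chat/completions"

payload = {
    "model": "mistral-small-latest",
    "messages": [{"role": "user", "content": "Say hello."}],
}
body = json.dumps(payload)
print(MISTRAL_CHAT_URL)
print(body)
```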
Setup
```bash
# Python
pip install mistralai

# TypeScript/Node
npm install @mistralai/mistralai

export MISTRAL_API_KEY=...
```
Available Models
| Model | Context | Best For |
|---|---|---|
| mistral-large-latest | 128k | Most capable, complex reasoning |
| mistral-small-latest | 128k | Cost-efficient, everyday tasks |
| codestral-latest | 256k | Code generation & completion |
| mistral-embed | 8k | Text embeddings |
|  | 128k | Open-weight, edge deployment |
Instructions
Basic Chat Completion (Python)
```python
from mistralai import Mistral

client = Mistral(api_key="your_api_key")  # or reads MISTRAL_API_KEY

response = client.chat.complete(
    model="mistral-large-latest",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain the difference between async and sync programming."},
    ],
)
print(response.choices[0].message.content)
print(f"Prompt tokens: {response.usage.prompt_tokens}")
print(f"Completion tokens: {response.usage.completion_tokens}")
```
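Production calls should tolerate transient rate limits and server errors. A minimal retry-with-backoff sketch; the wrapped call is a placeholder, and a real version would catch the SDK's specific error classes rather than bare `Exception`:

```python
import random
import time

def with_retries(call, max_attempts=4, base_delay=1.0):
    """Retry `call` with exponential backoff plus jitter on any exception."""
    for attempt in range(max_attempts):
        try:
            return call()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts: propagate the last error
            # Sleep base, 2x base, 4x base, ... plus up to 0.5s of jitter.
            time.sleep(base_delay * 2 ** attempt + random.uniform(0, 0.5))

# Usage (hypothetical):
#   with_retries(lambda: client.chat.complete(model=..., messages=...))
```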
TypeScript/Node.js
```typescript
import { Mistral } from "@mistralai/mistralai";

const client = new Mistral({ apiKey: process.env.MISTRAL_API_KEY });

const response = await client.chat.complete({
  model: "mistral-large-latest",
  messages: [{ role: "user", content: "Hello from TypeScript!" }],
});
console.log(response.choices[0].message.content);
```
Streaming
```python
from mistralai import Mistral

client = Mistral()
stream = client.chat.stream(
    model="mistral-small-latest",
    messages=[{"role": "user", "content": "Write a haiku about programming."}],
)
for event in stream:
    chunk = event.data.choices[0].delta.content
    if chunk:
        print(chunk, end="", flush=True)
print()
```
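If you also need the full reply after streaming it, accumulate the deltas as they arrive. A sketch with plain strings standing in for the event objects; with a real stream you would pass each `event.data.choices[0].delta.content`:

```python
def accumulate(chunks):
    """Join streamed text deltas into the full reply, skipping empty ones."""
    parts = []
    for chunk in chunks:
        if chunk:  # delta content can be None on role/finish events
            parts.append(chunk)
    return "".join(parts)

print(accumulate(["Hel", None, "lo, ", "", "world"]))  # Hello, world
```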
Function Calling
```python
import json
from mistralai import Mistral

client = Mistral()

tools = [
    {
        "type": "function",
        "function": {
            "name": "search_products",
            "description": "Search for products in a catalog",
            "parameters": {
                "type": "object",
                "properties": {
                    "query": {"type": "string"},
                    "max_price": {"type": "number"},
                    "category": {"type": "string"},
                },
                "required": ["query"],
            },
        },
    }
]

messages = [{"role": "user", "content": "Find laptops under $1000"}]
response = client.chat.complete(
    model="mistral-large-latest",
    messages=messages,
    tools=tools,
    tool_choice="auto",
)

if response.choices[0].finish_reason == "tool_calls":
    tool_call = response.choices[0].message.tool_calls[0]
    args = json.loads(tool_call.function.arguments)
    print(f"Function: {tool_call.function.name}, Args: {args}")

    # Add tool result and continue
    messages.append(response.choices[0].message)
    messages.append({
        "role": "tool",
        "tool_call_id": tool_call.id,
        "content": json.dumps([{"name": "ThinkPad X1", "price": 899}]),
    })
    final = client.chat.complete(model="mistral-large-latest", messages=messages)
    print(final.choices[0].message.content)
```
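The tool-call branch generalizes to a dispatch table mapping function names to local handlers. A sketch with a stubbed catalog search; the handler and its data are hypothetical, but the name/arguments routing mirrors what the model returns in `tool_calls`:

```python
import json

def search_products(query, max_price=None, category=None):
    # Stub: a real handler would query your catalog service.
    catalog = [
        {"name": "ThinkPad X1", "price": 899},
        {"name": "MacBook Pro", "price": 1999},
    ]
    return [p for p in catalog if max_price is None or p["price"] <= max_price]

HANDLERS = {"search_products": search_products}

def dispatch(name, arguments_json):
    """Route a model tool call to the matching local function."""
    args = json.loads(arguments_json)  # model sends arguments as a JSON string
    return json.dumps(HANDLERS[name](**args))

result = dispatch("search_products", '{"query": "laptops", "max_price": 1000}')
print(result)  # [{"name": "ThinkPad X1", "price": 899}]
```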
JSON Mode
```python
import json
from mistralai import Mistral

client = Mistral()
response = client.chat.complete(
    model="mistral-small-latest",
    messages=[
        {
            "role": "user",
            "content": "Return a JSON object with fields: title, author, year for the book '1984'",
        }
    ],
    response_format={"type": "json_object"},
)
data = json.loads(response.choices[0].message.content)
print(data)  # {"title": "1984", "author": "George Orwell", "year": 1949}
```
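JSON mode guarantees syntactically valid JSON, not that the fields you asked for are present with the types you expect, so validate before use. A minimal check for the book example above:

```python
def validate_book(data):
    """Check that the model returned the requested fields with sane types."""
    required = {"title": str, "author": str, "year": int}
    for field, typ in required.items():
        if not isinstance(data.get(field), typ):
            raise ValueError(f"missing or malformed field: {field}")
    return data

print(validate_book({"title": "1984", "author": "George Orwell", "year": 1949}))
```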
Text Embeddings
```python
import numpy as np
from mistralai import Mistral

client = Mistral()
response = client.embeddings.create(
    model="mistral-embed",
    inputs=[
        "Machine learning is transforming industries.",
        "AI is the future of technology.",
    ],
)
embeddings = [item.embedding for item in response.data]
print(f"Embedding dimension: {len(embeddings[0])}")  # 1024

# Compute cosine similarity
def cosine_similarity(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

similarity = cosine_similarity(embeddings[0], embeddings[1])
print(f"Similarity: {similarity:.3f}")
```
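The same cosine similarity extends to a tiny semantic search: embed your documents once, embed the query, rank by similarity. A sketch over 2-dimensional stand-in vectors (real ones come from `mistral-embed` and have 1024 dimensions):

```python
import numpy as np

def top_k(query_vec, doc_vecs, k=2):
    """Return indices of the k documents most cosine-similar to the query."""
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    scores = d @ q  # cosine similarity of each row against the query
    return np.argsort(scores)[::-1][:k]

docs = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]])  # stand-in embeddings
print(top_k(np.array([1.0, 0.1]), docs, k=2))  # [0 2]
```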
Codestral for Code Completion
```python
from mistralai import Mistral

client = Mistral()

# Fill-in-the-middle (FIM), Codestral's signature feature
response = client.fim.complete(
    model="codestral-latest",
    prompt="def fibonacci(n):\n    if n <= 1:\n        return n\n    ",
    suffix="\n\nresult = fibonacci(10)\nprint(result)",
)
print(response.choices[0].message.content)
# Returns the middle code that connects prompt to suffix
```
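In an editor integration, the FIM `prompt`/`suffix` pair is just the buffer split at the cursor. A sketch of that split (the helper is illustrative, not part of the SDK):

```python
def fim_split(source, cursor):
    """Split a buffer at the cursor offset into (prompt, suffix) for FIM."""
    return source[:cursor], source[cursor:]

code = "def add(a, b):\n    \n"
# Place the cursor just after the indentation on the blank body line.
prompt, suffix = fim_split(code, code.index("    ") + 4)
print(repr(prompt))
print(repr(suffix))
```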
```python
# Standard code generation
response = client.chat.complete(
    model="codestral-latest",
    messages=[
        {
            "role": "user",
            "content": "Write a Python class for a rate limiter using token bucket algorithm.",
        }
    ],
)
print(response.choices[0].message.content)
```
GDPR Compliance Notes
- All API data processed in EU data centers by default.
- Mistral AI is headquartered in Paris, France — subject to EU/GDPR jurisdiction.
- For enterprise data residency guarantees, use Mistral's Azure or GCP deployments.
- No training on user data by default — check your plan's DPA for details.
Guidelines
- Use `mistral-large-latest` for complex tasks, `mistral-small-latest` for cost savings.
- Codestral is specialized for code and significantly outperforms general models on FIM tasks.
- The `mistral-embed` model produces 1024-dimensional vectors.
- Mistral models have strong multilingual performance, especially in French, Spanish, Italian, German, and Portuguese.
- Function calling requires `tool_choice` to be set; use `"auto"` for model-driven decisions.
- JSON mode requires the system or user prompt to explicitly mention JSON output.
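The model-selection guidance above can be condensed into a small routing helper. The task categories and mapping are illustrative defaults, not part of the API:

```python
def pick_model(task):
    """Map a coarse task category to a reasonable default Mistral model."""
    routes = {
        "code": "codestral-latest",        # FIM and code generation
        "embed": "mistral-embed",          # 1024-dim text embeddings
        "complex": "mistral-large-latest", # multi-step reasoning
    }
    return routes.get(task, "mistral-small-latest")  # cost-efficient default

print(pick_model("code"))     # codestral-latest
print(pick_model("summary"))  # mistral-small-latest
```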