Awesome-omni-skill cerebras-api
Cerebras API integration for building AI-powered applications with ultra-fast LLM inference. Use when working with Cerebras's Chat Completions API, Python SDK (cerebras_cloud_sdk), TypeScript SDK (@cerebras/cerebras_cloud_sdk), tool use/function calling, structured outputs with JSON schemas, reasoning models with thinking tokens, streaming responses, or any Cerebras API integration task. Triggers on mentions of Cerebras, Cerebras Inference, Llama on Cerebras, Qwen on Cerebras, GLM, or fast LLM inference needs.
git clone https://github.com/diegosouzapw/awesome-omni-skill
T=$(mktemp -d) && git clone --depth=1 https://github.com/diegosouzapw/awesome-omni-skill "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/development/cerebras-api" ~/.claude/skills/diegosouzapw-awesome-omni-skill-cerebras-api-541f3e && rm -rf "$T"
Cerebras API
Cerebras provides the world's fastest AI inference (2,000+ tokens/s). OpenAI-compatible API with Python and TypeScript SDKs.
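Because the API is OpenAI-compatible, existing OpenAI client code can usually be pointed at Cerebras by swapping the base URL and key. A minimal sketch, assuming the `https://api.cerebras.ai/v1` endpoint listed in the Quick Reference below and the official `openai` Python package:

```python
# Sketch: reusing the OpenAI Python client against the Cerebras endpoint.
# The base URL is an assumption taken from the Quick Reference table below.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.cerebras.ai/v1",
    api_key=os.environ["CEREBRAS_API_KEY"],
)

response = client.chat.completions.create(
    model="llama-3.3-70b",
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(response.choices[0].message.content)
```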
Quick Reference
| Resource | Location |
|---|---|
| API Base URL | https://api.cerebras.ai/v1 |
| Get API Key | https://cloud.cerebras.ai |
| Python SDK | `cerebras_cloud_sdk` (PyPI) |
| TypeScript SDK | `@cerebras/cerebras_cloud_sdk` (npm) |
Available Models
Deprecation Notice: `llama-3.3-70b` and `qwen-3-32b` are scheduled for deprecation on February 16, 2026.
Production Models
Fully supported for production use.
| Model | Model ID | Parameters | Speed |
|---|---|---|---|
| Llama 3.1 8B | `llama3.1-8b` | 8B | ~2200 tok/s |
| Llama 3.3 70B | `llama-3.3-70b` | 70B | ~2100 tok/s |
| OpenAI GPT OSS | `gpt-oss-120b` | 120B | ~3000 tok/s |
| Qwen 3 32B | `qwen-3-32b` | 32B | ~2600 tok/s |
Preview Models
For evaluation only - may be discontinued with short notice.
| Model | Model ID | Parameters | Speed |
|---|---|---|---|
| Qwen 3 235B Instruct | `qwen-3-235b-a22b-instruct-2507` | 235B | ~1400 tok/s |
| Z.ai GLM 4.7 | `zai-glm-4.7` | 355B | ~1000 tok/s |
Migrating to GLM? See GLM 4.7 Migration Guide.
Model Selection Guide
| Use Case | Recommended Model |
|---|---|
| Speed-critical (real-time chat) | |
| Balanced (chat, coding, math) | |
| Hybrid reasoning | |
| Multilingual, instruction following | |
| Science, math, complex reasoning | |
| Agents, superior tool use | |
Model Compression
All models are unpruned original versions. Precision varies:
| Model | Precision | Weights |
|---|---|---|
| | FP16 | HuggingFace |
| | FP16 | HuggingFace |
| | FP16/FP8 (weights only) | HuggingFace |
| | FP16 | HuggingFace |
| | FP16/FP8 (weights only) | HuggingFace |
| | FP16/FP8 (weights only) | HuggingFace |
Note: FP16/FP8 models use selective weight-only quantization for storage. Sensitive layers remain at full precision, with dequantization on-the-fly. Activations and KV cache remain unquantized.
Basic Usage
Python
```python
import os
from cerebras.cloud.sdk import Cerebras

client = Cerebras(api_key=os.environ.get("CEREBRAS_API_KEY"))

response = client.chat.completions.create(
    model="llama-3.3-70b",
    messages=[{"role": "user", "content": "Explain quantum computing"}]
)
print(response.choices[0].message.content)
```
TypeScript
```typescript
import Cerebras from '@cerebras/cerebras_cloud_sdk';

const client = new Cerebras({ apiKey: process.env.CEREBRAS_API_KEY });

const response = await client.chat.completions.create({
  model: 'llama-3.3-70b',
  messages: [{ role: 'user', content: 'Explain quantum computing' }]
});
console.log(response.choices[0].message.content);
```
Streaming
The Cerebras API supports streaming responses, allowing messages to be sent back in chunks and displayed incrementally as they are generated. Set `stream=True` to receive an iterable of chunks.
Python
```python
stream = client.chat.completions.create(
    model="llama-3.3-70b",
    messages=[{"role": "user", "content": "Why is fast inference important?"}],
    stream=True
)

for chunk in stream:
    print(chunk.choices[0].delta.content or "", end="")
```
TypeScript
```typescript
const stream = await client.chat.completions.create({
  model: 'llama-3.3-70b',
  messages: [{ role: 'user', content: 'Why is fast inference important?' }],
  stream: true
});

for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content || '');
}
```
Streaming Notes
- Each chunk contains a `delta` object with incremental content
- `usage` and `time_info` are only available in the final chunk (see the sketch after this list)
- Use `flush=True` in Python print for real-time display: `print(..., end="", flush=True)`
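A short sketch of collecting the final-chunk metadata while streaming; it assumes the SDK exposes `usage` and `time_info` as attributes that are only populated on the last chunk, as noted above, and guards for their absence:

```python
# Sketch: print streamed text and pick up usage/time_info from the final chunk.
stream = client.chat.completions.create(
    model="llama-3.3-70b",
    messages=[{"role": "user", "content": "Why is fast inference important?"}],
    stream=True,
)

final_usage = None
final_time_info = None
for chunk in stream:
    delta = chunk.choices[0].delta.content if chunk.choices else None
    if delta:
        print(delta, end="", flush=True)
    # Only the last chunk carries usage/time_info; earlier chunks leave them unset.
    if getattr(chunk, "usage", None):
        final_usage = chunk.usage
    if getattr(chunk, "time_info", None):
        final_time_info = chunk.time_info

print()
if final_usage:
    print("Total tokens:", final_usage.total_tokens)
if final_time_info:
    print("Time info:", final_time_info)
```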
Cancel Streaming (TypeScript)
```typescript
const stream = await client.chat.completions.create({
  model: 'llama-3.3-70b',
  messages: [{ role: 'user', content: 'Long response' }],
  stream: true
});

for await (const chunk of stream) {
  if (shouldStop) {
    stream.controller.abort();
    break;
  }
  process.stdout.write(chunk.choices[0]?.delta?.content || '');
}
```
Async Streaming (Python)
```python
import asyncio

from cerebras.cloud.sdk import AsyncCerebras

client = AsyncCerebras()

async def stream_response():
    stream = await client.chat.completions.create(
        model="llama-3.3-70b",
        messages=[{"role": "user", "content": "Tell me a joke"}],
        stream=True
    )
    async for chunk in stream:
        content = chunk.choices[0].delta.content
        if content:
            print(content, end="", flush=True)

asyncio.run(stream_response())
```
Tool Calling
Tool calling (also known as function calling) enables models to interact with external tools, APIs, or applications to perform actions and access real-time information.
Supported models: `gpt-oss-120b`, `qwen-3-32b`, `zai-glm-4.7`
How It Works
- Define tools - Provide name, description, and parameters for each tool
- Send request - Include tool definitions with your API call
- Model decides - Model analyzes if a tool can help answer the question
- Execute & respond - Your code executes the tool and returns results to the model
Define Tools
```python
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "strict": True,
            "description": "Get temperature for a given location.",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "City and country e.g. Toronto, Canada"
                    }
                },
                "required": ["location"],
                "additionalProperties": False
            }
        }
    }
]
```
Make API Call
```python
messages = [{"role": "user", "content": "What's the weather in Tokyo?"}]

response = client.chat.completions.create(
    model="zai-glm-4.7",
    messages=messages,
    tools=tools
)
```
Handle Tool Calls
```python
import json

choice = response.choices[0].message

if choice.tool_calls:
    # Add assistant message with tool_calls
    messages.append(choice)

    for tool_call in choice.tool_calls:
        # Execute your tool
        arguments = json.loads(tool_call.function.arguments)
        result = get_weather(arguments["location"])

        # Append tool result
        messages.append({
            "role": "tool",
            "content": json.dumps(result),
            "tool_call_id": tool_call.id
        })

    # Get final response
    final_response = client.chat.completions.create(
        model="zai-glm-4.7",
        messages=messages
    )
    print(final_response.choices[0].message.content)
```
Parallel Tool Calling
When a query requires multiple independent data points (e.g., comparing weather in different cities), the model can request multiple tools at once.
```python
response = client.chat.completions.create(
    model="zai-glm-4.7",
    messages=[{"role": "user", "content": "Is Toronto warmer than Montreal?"}],
    tools=tools,
    parallel_tool_calls=True  # Default: enabled
)

# Response may contain multiple tool_calls
for tool_call in response.choices[0].message.tool_calls:
    print(f"Tool: {tool_call.function.name}, Args: {tool_call.function.arguments}")
```
To disable parallel calling:
```python
response = client.chat.completions.create(
    model="zai-glm-4.7",
    messages=messages,
    tools=tools,
    parallel_tool_calls=False  # Force sequential execution
)
```
TypeScript Example
```typescript
const tools: Cerebras.Chat.ChatCompletionTool[] = [{
  type: 'function',
  function: {
    name: 'get_weather',
    strict: true,
    description: 'Get temperature for a given location.',
    parameters: {
      type: 'object',
      properties: {
        location: { type: 'string', description: 'City name' }
      },
      required: ['location'],
      additionalProperties: false
    }
  }
}];

const response = await client.chat.completions.create({
  model: 'zai-glm-4.7',
  messages: [{ role: 'user', content: 'Weather in Paris?' }],
  tools
});

if (response.choices[0].message.tool_calls) {
  for (const toolCall of response.choices[0].message.tool_calls) {
    const args = JSON.parse(toolCall.function.arguments);
    // Execute tool and continue conversation
  }
}
```
Best Practices
- Use `strict: true` for reliable JSON argument parsing
- Always set `additionalProperties: false` in parameter schemas
- Provide clear, descriptive tool descriptions
- Handle cases where the model doesn't call any tools (see the sketch after this list)
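A minimal sketch of that last point, reusing the `response` from the earlier example; `handle_tool_calls` is a hypothetical helper standing in for the tool-execution loop shown above:

```python
# Sketch: fall back gracefully when the model answers without calling a tool.
message = response.choices[0].message

if message.tool_calls:
    # Tool path: execute each call and return results (shown earlier).
    handle_tool_calls(message.tool_calls)  # hypothetical helper
else:
    # No tool call: the model answered directly from its own knowledge.
    print(message.content)
```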
Structured Outputs
Generate structured data with enforced JSON schema compliance. Key benefits:
- Reduced Variability - Consistent outputs adhering to predefined fields
- Type Safety - Enforces correct data types, preventing mismatches
- Easier Parsing - Direct use in applications without extra processing
Defining the Schema
Define a JSON schema specifying fields, types, and required properties.
For every `required` array, you must set `additionalProperties: false`.
Python (with Pydantic)
```python
import json

from pydantic import BaseModel

class Movie(BaseModel):
    title: str
    director: str
    year: int

response = client.chat.completions.create(
    model="llama-3.3-70b",
    messages=[
        {"role": "system", "content": "You are a helpful assistant"},
        {"role": "user", "content": "Suggest a sci-fi movie"}
    ],
    response_format={
        "type": "json_schema",
        "json_schema": {
            "name": "movie",
            "strict": True,
            "schema": Movie.model_json_schema()
        }
    }
)

movie = json.loads(response.choices[0].message.content)
```
Python (with raw schema)
```python
movie_schema = {
    "type": "object",
    "properties": {
        "title": {"type": "string"},
        "director": {"type": "string"},
        "year": {"type": "integer"}
    },
    "required": ["title", "director", "year"],
    "additionalProperties": False
}

response = client.chat.completions.create(
    model="llama-3.3-70b",
    messages=[{"role": "user", "content": "Suggest a sci-fi movie"}],
    response_format={
        "type": "json_schema",
        "json_schema": {
            "name": "movie",
            "strict": True,
            "schema": movie_schema
        }
    }
)
```
TypeScript (with Zod)
```typescript
import { z } from 'zod';
import { zodToJsonSchema } from 'zod-to-json-schema';

const MovieSchema = z.object({
  title: z.string(),
  director: z.string(),
  year: z.number().int()
});

const response = await client.chat.completions.create({
  model: 'llama-3.3-70b',
  messages: [{ role: 'user', content: 'Suggest a sci-fi movie' }],
  response_format: {
    type: 'json_schema',
    json_schema: {
      name: 'movie',
      strict: true,
      schema: zodToJsonSchema(MovieSchema)
    }
  }
});

const movie = MovieSchema.parse(JSON.parse(response.choices[0].message.content || '{}'));
```
Response Format Modes
| Mode | Valid JSON | Adheres to Schema | Extra Fields | Constrained Decoding |
|---|---|---|---|---|
| `json_schema` (strict: true) | Yes | Yes (guaranteed) | No | Yes |
| `json_schema` (strict: false) | Yes (best-effort) | Yes | Yes | No |
| `json_object` | Yes | No (flexible) | No | No |
Enabling each mode:
- Strict: `response_format: { type: "json_schema", json_schema: { strict: true, schema: ... } }`
- Non-strict: `response_format: { type: "json_schema", json_schema: { strict: false, schema: ... } }`
- JSON object: `response_format: { type: "json_object" }`
Schema Requirements
- Set `"additionalProperties": false` for all objects with required fields
- Max nesting: 5 levels
- Max schema length: 5,000 chars
- No recursive schemas

`response_format` and `tools` cannot be used in the same request.
Reasoning Models
Reasoning models generate intermediate thinking tokens before their final response, enabling better problem-solving and allowing inspection of the model's thought process.
Supported models: `qwen-3-32b`, `gpt-oss-120b`, `zai-glm-4.7`
Reasoning Format Options
| Format | Behavior | Use Case |
|---|---|---|
| `parsed` | Reasoning returned in a separate field; logprobs split into reasoning and content | When you need structured access to thinking |
| `raw` | Reasoning prepended to content with wrapper tokens (`<think>` for GLM/Qwen) | When you want full visibility |
| `hidden` | Reasoning text dropped from response (tokens still counted/billed) | When you want benefits without exposing thinking |
| | Uses model's default behavior | Default |
Default behaviors by model:
- Qwen3: `raw` (or `hidden` for JSON output)
- GLM: `text_parsed`
- GPT-OSS: `text_parsed`
Basic Usage
```python
response = client.chat.completions.create(
    model="qwen-3-32b",
    messages=[{"role": "user", "content": "Solve: 15% of 240"}],
    reasoning_format="parsed"
)

print("Thinking:", response.choices[0].message.reasoning)
print("Answer:", response.choices[0].message.content)
```
```typescript
const response = await client.chat.completions.create({
  model: 'qwen-3-32b',
  messages: [{ role: 'user', content: 'Solve: 15% of 240' }],
  reasoning_format: 'parsed'
});

console.log('Thinking:', response.choices[0].message.reasoning);
console.log('Answer:', response.choices[0].message.content);
```
GPT-OSS: Reasoning Effort
Control reasoning intensity with `reasoning_effort`:
```python
response = client.chat.completions.create(
    model="gpt-oss-120b",
    messages=[{"role": "user", "content": "Prove the Pythagorean theorem"}],
    reasoning_effort="high"  # "low", "medium" (default), "high"
)
```
GLM: Disable Reasoning
Toggle reasoning on/off for GLM:
```python
response = client.chat.completions.create(
    model="zai-glm-4.7",
    messages=[{"role": "user", "content": "Quick factual question"}],
    disable_reasoning=True  # Skip thinking for simple queries
)
```
Multi-Turn Reasoning Context
To retain reasoning awareness across conversation turns, include prior reasoning in assistant messages using the model's native format.
GPT-OSS (reasoning prepended directly):
```python
messages = [
    {"role": "user", "content": "What is 25 * 4?"},
    {"role": "assistant", "content": "Multiply 25 times 4 equals 100. The answer is 100."},
    {"role": "user", "content": "Now divide that by 2."}
]

response = client.chat.completions.create(model="gpt-oss-120b", messages=messages)
```
GLM/Qwen (reasoning in `<think>` tags):
```python
messages = [
    {"role": "user", "content": "What is 25 * 4?"},
    {"role": "assistant", "content": "<think>Multiply 25 times 4 equals 100.</think>The answer is 100."},
    {"role": "user", "content": "Now divide that by 2."}
]

response = client.chat.completions.create(model="zai-glm-4.7", messages=messages)
```
Predicted Outputs
Reduce latency by specifying parts of the response that are already known. (Public Preview)
Supported models: `gpt-oss-120b`, `llama3.1-8b`, `zai-glm-4.7`
Predicted Outputs speed up response generation when parts of the output are already known. This is most useful when regenerating text or code that requires only minor changes.
Python
code = """ html { margin: 0; padding: 0; box-sizing: border-box; color: #00FF00; } """ response = client.chat.completions.create( model="gpt-oss-120b", messages=[ {"role": "user", "content": "Change the color to blue. Respond only with code."}, {"role": "user", "content": code} ], prediction={"type": "content", "content": code} )
TypeScript
```typescript
const code = `
html {
  margin: 0;
  padding: 0;
  box-sizing: border-box;
  color: #00FF00;
}
`;

const response = await client.chat.completions.create({
  model: 'gpt-oss-120b',
  messages: [
    { role: 'user', content: 'Change the color to blue. Respond only with code.' },
    { role: 'user', content: code }
  ],
  prediction: { type: 'content', content: code }
});
```
Token-Reuse Metrics
The response includes usage metrics showing prediction efficiency:
{ "usage": { "completion_tokens": 224, "prompt_tokens": 204, "completion_tokens_details": { "accepted_prediction_tokens": 76, "rejected_prediction_tokens": 20 } } }
A high ratio of accepted to rejected tokens indicates efficient prediction reuse.
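A small sketch for tracking that ratio, assuming the SDK exposes the usage payload above as attributes on the `response` from the Predicted Outputs request:

```python
# Sketch: compute the share of predicted tokens the model actually accepted.
details = response.usage.completion_tokens_details
accepted = details.accepted_prediction_tokens
rejected = details.rejected_prediction_tokens

total_predicted = accepted + rejected
if total_predicted:
    print(f"Prediction acceptance rate: {accepted / total_predicted:.0%}")
```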
Best Practices
- Use when most output is known - The larger the known section, the greater the efficiency gain
- Set `temperature=0` - Reduces randomness and increases token acceptance
- Keep predictions accurate - Misaligned predictions increase rejected tokens
- Monitor metrics - Track accepted vs rejected tokens to evaluate effectiveness
Limitations
- Rejected tokens are billed at completion-token rates
- Not compatible with: `logprobs`, `n > 1`, `tools`
- Reasoning tokens may generate additional `rejected_prediction_tokens`
Prompt Caching
Store and reuse previously processed prompts to reduce latency. Designed to significantly reduce Time to First Token (TTFT) for long-context workloads like multi-turn conversations, RAG, and agentic workflows.
How It Works
Automatic - No code changes required. Works on all supported API requests.
- Prefix Matching - System analyzes the beginning of your prompt (system prompts, tool definitions, few-shot examples)
- Block-Based Caching - Prompts processed in blocks (100-600 tokens). Matching blocks reuse cached computation
- Cache Hit - Cached blocks skip processing, resulting in lower latency
- Cache Miss - Prompt processed normally, prefix stored for future matches
- Auto Expiration - TTL guaranteed 5 minutes, may persist up to 1 hour
The entire beginning of your prompt must match exactly with a cached prefix. Even a single character difference causes a cache miss.
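A brief sketch of that behavior: two requests sharing an identical system prompt should report cached tokens on the second call, while any edit to the prefix resets it. The `ask` helper and padded system prompt are illustrative only; the usage field follows the `cached_tokens` example below.

```python
# Sketch: identical prefixes hit the cache; a changed prefix misses it.
system_prompt = "You are a support agent for ACME Corp. " + "Policy details... " * 50

def ask(question: str):
    return client.chat.completions.create(
        model="llama-3.3-70b",
        messages=[
            {"role": "system", "content": system_prompt},  # stable prefix
            {"role": "user", "content": question},         # dynamic suffix
        ],
    )

first = ask("Where is my order?")
second = ask("Can I change my shipping address?")

print("First call cached tokens:",
      first.usage.prompt_tokens_details.cached_tokens)
print("Second call cached tokens:",
      second.usage.prompt_tokens_details.cached_tokens)
```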
Checking Cache Usage
Check the `usage.prompt_tokens_details.cached_tokens` field in your response:
```python
response = client.chat.completions.create(
    model="llama-3.3-70b",
    messages=messages
)

cached = response.usage.prompt_tokens_details.cached_tokens
print(f"Cached tokens: {cached}")
```
{ "usage": { "prompt_tokens": 1500, "prompt_tokens_details": { "cached_tokens": 1200 } } }
Best Practices for Cache Hits
- Keep prefixes consistent - System prompts, tool definitions, and few-shot examples should be identical across requests
- Order matters - Place stable content (system prompt, tools) before dynamic content (user messages)
- Multi-turn conversations - Cache naturally builds as conversation history grows
- RAG workflows - Place frequently-used context at the beginning
Example: Multi-Turn with Tools
```python
# System message and tools are cached across turns
messages = [
    {"role": "system", "content": "You are a shopping assistant."},
    {"role": "user", "content": "Where is my order ORD-123456?"}
]

# Turn 1 - creates cache for system + tools
response = client.chat.completions.create(
    model="qwen-3-32b",
    messages=messages,
    tools=tools
)
print(f"Turn 1 cached: {response.usage.prompt_tokens_details.cached_tokens}")

# Turn 2 - reuses cached system + tools
messages.append(response.choices[0].message)
messages.append({"role": "user", "content": "Please cancel it."})

response = client.chat.completions.create(
    model="qwen-3-32b",
    messages=messages,
    tools=tools
)
print(f"Turn 2 cached: {response.usage.prompt_tokens_details.cached_tokens}")
```
FAQ
- Pricing: No additional cost. Standard token rates apply
- Quality: Caching only affects input processing. Output generation unchanged
- Manual clear: Not available. System manages cache automatically
- TTL: Guaranteed 5 minutes, up to 1 hour depending on load
Async Usage (Python)
```python
import asyncio

from cerebras.cloud.sdk import AsyncCerebras

client = AsyncCerebras()

async def main():
    response = await client.chat.completions.create(
        model="llama-3.3-70b",
        messages=[{"role": "user", "content": "Hello"}]
    )
    print(response.choices[0].message.content)

asyncio.run(main())
```
Error Handling
All errors inherit from `cerebras.cloud.sdk.APIError`. Main categories:
- `APIConnectionError` - Unable to connect to the API
- `APIStatusError` - API returned a non-success status code (4xx or 5xx)
Error Codes
| Status | Exception | Description |
|---|---|---|
| 400 | `BadRequestError` | Invalid request parameters |
| 401 | `AuthenticationError` | Invalid or missing API key |
| 402 | | Payment required |
| 403 | `PermissionDeniedError` | Insufficient permissions |
| 404 | `NotFoundError` | Resource not found |
| 422 | `UnprocessableEntityError` | Validation error |
| 429 | `RateLimitError` | Too many requests |
| 500 | `InternalServerError` | Server error |
| 503 | | Service temporarily unavailable |
| N/A | `APIConnectionError` | Network/connection issue |
Python Example
```python
import cerebras.cloud.sdk
from cerebras.cloud.sdk import Cerebras

client = Cerebras()

try:
    response = client.chat.completions.create(
        model="llama-3.3-70b",
        messages=[{"role": "user", "content": "Hello"}]
    )
except cerebras.cloud.sdk.APIConnectionError as e:
    print("Server could not be reached")
    print(e.__cause__)
except cerebras.cloud.sdk.RateLimitError as e:
    print("Rate limited - implement backoff")
except cerebras.cloud.sdk.APIStatusError as e:
    print(f"Error {e.status_code}: {e.response}")
```
TypeScript Example
```typescript
try {
  const response = await client.chat.completions.create({
    model: 'llama-3.3-70b',
    messages: [{ role: 'user', content: 'Hello' }]
  });
} catch (err) {
  if (err instanceof Cerebras.APIError) {
    console.log(err.status);   // 400
    console.log(err.name);     // BadRequestError
    console.log(err.headers);  // Response headers
  } else {
    throw err;
  }
}
```
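The Python example above suggests implementing backoff on rate limits. Beyond the SDK's built-in retries (configured below), one hedged sketch of a manual retry loop, using only the `RateLimitError` class shown earlier; the `create_with_backoff` helper is illustrative, not part of the SDK:

```python
import random
import time

import cerebras.cloud.sdk
from cerebras.cloud.sdk import Cerebras

client = Cerebras()

def create_with_backoff(max_attempts: int = 5, **kwargs):
    """Retry on rate limits with exponential backoff and jitter."""
    for attempt in range(max_attempts):
        try:
            return client.chat.completions.create(**kwargs)
        except cerebras.cloud.sdk.RateLimitError:
            if attempt == max_attempts - 1:
                raise
            # Wait 1s, 2s, 4s, ... plus a little jitter before retrying.
            time.sleep(2 ** attempt + random.random())

response = create_with_backoff(
    model="llama-3.3-70b",
    messages=[{"role": "user", "content": "Hello"}],
)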
Retries & Timeouts
Automatic Retries
By default, these errors are retried 2 times with exponential backoff:
- Connection errors
- 408 Request Timeout
- 429 Rate Limit
- >= 500 Internal errors
```python
# Python - configure retries
client = Cerebras(max_retries=0)  # Disable retries

# Per-request override
client.with_options(max_retries=5).chat.completions.create(...)
```
```typescript
// TypeScript - configure retries
const client = new Cerebras({ maxRetries: 0 });

// Per-request override
await client.chat.completions.create(params, { maxRetries: 5 });
```
Timeouts
Default timeout is 60 seconds. On timeout, `APITimeoutError` is thrown.
```python
# Python - configure timeout
client = Cerebras(timeout=20.0)  # 20 seconds

# Granular control
import httpx
client = Cerebras(
    timeout=httpx.Timeout(60.0, read=5.0, write=10.0, connect=2.0)
)

# Per-request override
client.with_options(timeout=5.0).chat.completions.create(...)
```
```typescript
// TypeScript - configure timeout
const client = new Cerebras({ timeout: 20 * 1000 });

// Per-request override
await client.chat.completions.create(params, { timeout: 5 * 1000 });
```
TCP Warming
SDK sends warmup requests on init to reduce first-token latency. Disable if needed:
```python
client = Cerebras(warm_tcp_connection=False)
```

```typescript
const client = new Cerebras({ warmTCPConnection: false });
```
Key Parameters
| Parameter | Description |
|---|---|
| `model` | Model identifier (required) |
| `messages` | Conversation history (required) |
| `temperature` | Randomness 0-1.5 (default varies) |
| `max_completion_tokens` | Max output tokens |
| `stop` | Up to 4 stop sequences |
| `stream` | Enable streaming |
| `response_format` | `text`, `json_object`, or `json_schema` |
| `tools` | Function definitions for tool calling |
| `reasoning_format` | `parsed`, `raw`, `hidden` |
| `reasoning_effort` | `low`, `medium`, `high` (gpt-oss only) |
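A short sketch combining several of these parameters in one request; the names follow the table above, and `max_completion_tokens` in particular should be treated as an assumption if your SDK version differs:

```python
# Sketch: one request exercising several common parameters together.
response = client.chat.completions.create(
    model="llama-3.3-70b",
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "List three uses of fast inference."},
    ],
    temperature=0.2,             # low randomness
    max_completion_tokens=200,   # cap the output length (assumed parameter name)
    stop=["\n\n\n"],             # up to 4 stop sequences
    stream=False,
)
print(response.choices[0].message.content)
```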
References
For detailed SDK documentation:
- Python: See references/python.md
- TypeScript: See references/typescript.md