# claude-skill-registry: deepseek

DeepSeek AI large language model API via curl. Use this skill for chat completions, reasoning, and code generation with OpenAI-compatible endpoints.

```bash
git clone https://github.com/majiayu000/claude-skill-registry
```

```bash
T=$(mktemp -d) && git clone --depth=1 https://github.com/majiayu000/claude-skill-registry "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/data/deepseek" ~/.claude/skills/majiayu000-claude-skill-registry-deepseek && rm -rf "$T"
```

`skills/data/deepseek/SKILL.md`

# DeepSeek API
Use the DeepSeek API via direct `curl` calls to access powerful AI language models for chat, reasoning, and code generation.

Official docs: https://api-docs.deepseek.com/
## When to Use

Use this skill when you need:

- Chat completions with the DeepSeek-V3.2 model
- Deep reasoning tasks using the reasoning model
- Code generation and completion (FIM, Fill-in-the-Middle)
- An OpenAI-compatible API as a cost-effective alternative
## Prerequisites

- Sign up at the DeepSeek Platform and create an account
- Go to API Keys and generate a new API key
- Top up your balance (no free tier, but very affordable pricing)

```bash
export DEEPSEEK_API_KEY="your-api-key"
```
## Pricing (per 1M tokens)
| Type | Price |
|---|---|
| Input (cache hit) | $0.028 |
| Input (cache miss) | $0.28 |
| Output | $0.42 |
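As a sanity check on the table above, here is a small Python helper (illustrative, not part of the skill) that prices a request from its token counts, using the listed per-1M-token rates:

```python
# Per-1M-token prices from the table above (USD).
PRICE_PER_M = {
    "input_cache_hit": 0.028,
    "input_cache_miss": 0.28,
    "output": 0.42,
}

def estimate_cost(prompt_tokens: int, completion_tokens: int,
                  cache_hit_tokens: int = 0) -> float:
    """Return the request cost in USD.

    cache_hit_tokens is the portion of the prompt served from the prefix cache;
    the remainder is billed at the cache-miss rate.
    """
    miss_tokens = prompt_tokens - cache_hit_tokens
    return (
        cache_hit_tokens * PRICE_PER_M["input_cache_hit"]
        + miss_tokens * PRICE_PER_M["input_cache_miss"]
        + completion_tokens * PRICE_PER_M["output"]
    ) / 1_000_000

# 100K uncached input tokens + 10K output tokens:
print(round(estimate_cost(100_000, 10_000), 4))  # 0.0322
```

Note how a fully cached prompt cuts input cost by 10x, which is why the guidelines below recommend keeping repeated prompt prefixes identical.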
## Rate Limits
DeepSeek does not enforce strict rate limits. They will try to serve every request. During high traffic, connections are maintained with keep-alive signals.
**Important:** When using `$VAR` in a command that pipes to another command, wrap the command containing `$VAR` in `bash -c '...'`. Due to a Claude Code bug, environment variables are silently cleared when pipes are used directly.

```bash
bash -c 'curl -s "https://api.example.com" -H "Authorization: Bearer $API_KEY"'
```
## How to Use

All examples below assume you have `DEEPSEEK_API_KEY` set.

The base URL for the DeepSeek API is:

- `https://api.deepseek.com` (recommended)
- `https://api.deepseek.com/v1` (OpenAI-compatible)
### 1. Basic Chat Completion

Send a simple chat message.

Write to `/tmp/deepseek_request.json`:

```json
{
  "model": "deepseek-chat",
  "messages": [
    { "role": "system", "content": "You are a helpful assistant." },
    { "role": "user", "content": "Hello, who are you?" }
  ]
}
```

Then run:

```bash
bash -c 'curl -s "https://api.deepseek.com/chat/completions" -X POST -H "Content-Type: application/json" -H "Authorization: Bearer ${DEEPSEEK_API_KEY}" -d @/tmp/deepseek_request.json'
```
Available models:

- `deepseek-chat`: DeepSeek-V3.2 non-thinking mode (128K context, 8K max output)
- `deepseek-reasoner`: DeepSeek-V3.2 thinking mode (128K context, 64K max output)
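The response body follows the OpenAI-compatible schema. A minimal Python sketch of pulling out the reply and finish reason (the sample payload below is abbreviated and its values are illustrative):

```python
import json

# Abbreviated example of a /chat/completions response body (illustrative values).
raw = '''{
  "id": "chatcmpl-123",
  "object": "chat.completion",
  "model": "deepseek-chat",
  "choices": [
    {
      "index": 0,
      "message": {"role": "assistant", "content": "Hello! I am DeepSeek."},
      "finish_reason": "stop"
    }
  ],
  "usage": {"prompt_tokens": 16, "completion_tokens": 8, "total_tokens": 24}
}'''

resp = json.loads(raw)
reply = resp["choices"][0]["message"]["content"]  # the assistant's text
finish = resp["choices"][0]["finish_reason"]      # e.g. "stop", "length", "tool_calls"
print(reply, finish)
```

This is the same path the `jq -r '.choices[0].message.content'` filters in later examples extract from the shell.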
### 2. Chat with Temperature Control

Adjust creativity/randomness with the `temperature` parameter.

Write to `/tmp/deepseek_request.json`:

```json
{
  "model": "deepseek-chat",
  "messages": [
    { "role": "user", "content": "Write a short poem about coding." }
  ],
  "temperature": 0.7,
  "max_tokens": 200
}
```

Then run:

```bash
bash -c 'curl -s "https://api.deepseek.com/chat/completions" -X POST -H "Content-Type: application/json" -H "Authorization: Bearer ${DEEPSEEK_API_KEY}" -d @/tmp/deepseek_request.json' | jq -r '.choices[0].message.content'
```

Parameters:

- `temperature` (0-2, default 1): higher = more creative, lower = more deterministic
- `top_p` (0-1, default 1): nucleus sampling threshold
- `max_tokens`: maximum tokens to generate
### 3. Streaming Response

Get real-time token-by-token output.

Write to `/tmp/deepseek_request.json`:

```json
{
  "model": "deepseek-chat",
  "messages": [
    { "role": "user", "content": "Explain quantum computing in simple terms." }
  ],
  "stream": true
}
```

Then run:

```bash
bash -c 'curl -s "https://api.deepseek.com/chat/completions" -X POST -H "Content-Type: application/json" -H "Authorization: Bearer ${DEEPSEEK_API_KEY}" -d @/tmp/deepseek_request.json'
```

Streaming returns Server-Sent Events (SSE) with delta chunks, ending with `data: [DONE]`.
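A minimal sketch of assembling the streamed text from SSE lines in Python (the sample chunks are abbreviated and illustrative; real chunks carry more fields):

```python
import json

# Example SSE lines as curl would emit them (abbreviated, illustrative chunks).
sse_lines = [
    'data: {"choices":[{"delta":{"role":"assistant","content":""}}]}',
    'data: {"choices":[{"delta":{"content":"Quantum "}}]}',
    'data: {"choices":[{"delta":{"content":"computing uses qubits."}}]}',
    'data: [DONE]',
]

def assemble(lines):
    """Concatenate delta.content from each chunk, stopping at [DONE]."""
    parts = []
    for line in lines:
        if not line.startswith("data: "):
            continue  # skip blank lines / keep-alive comments
        payload = line[len("data: "):]
        if payload == "[DONE]":
            break
        chunk = json.loads(payload)
        delta = chunk["choices"][0]["delta"]
        parts.append(delta.get("content") or "")
    return "".join(parts)

print(assemble(sse_lines))  # Quantum computing uses qubits.
```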
### 4. Deep Reasoning (Thinking Mode)

Use the reasoner model for complex reasoning tasks.

Write to `/tmp/deepseek_request.json`:

```json
{
  "model": "deepseek-reasoner",
  "messages": [
    { "role": "user", "content": "What is 15 * 17? Show your work." }
  ]
}
```

Then run:

```bash
bash -c 'curl -s "https://api.deepseek.com/chat/completions" -X POST -H "Content-Type: application/json" -H "Authorization: Bearer ${DEEPSEEK_API_KEY}" -d @/tmp/deepseek_request.json' | jq -r '.choices[0].message.content'
```

The reasoner model excels at math, logic, and multi-step problems. It returns its chain of thought in a separate `message.reasoning_content` field, with the final answer in `message.content`.
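A Python sketch of splitting the reasoner's `reasoning_content` (chain of thought) from the final `content` (the mock response below is abbreviated and its values are illustrative):

```python
import json

# Mock deepseek-reasoner response (abbreviated; real responses have more fields).
raw = '''{
  "choices": [
    {
      "message": {
        "role": "assistant",
        "reasoning_content": "15 * 17 = 15 * (20 - 3) = 300 - 45 = 255.",
        "content": "15 * 17 = 255."
      },
      "finish_reason": "stop"
    }
  ]
}'''

msg = json.loads(raw)["choices"][0]["message"]
thinking = msg.get("reasoning_content", "")  # chain of thought; do not feed back into the next turn's messages
answer = msg["content"]                      # final answer
print(answer)
```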
### 5. JSON Output Mode

Force the model to return valid JSON.

Write to `/tmp/deepseek_request.json`:

```json
{
  "model": "deepseek-chat",
  "messages": [
    { "role": "system", "content": "You are a JSON generator. Always respond with valid JSON." },
    { "role": "user", "content": "List 3 programming languages with their main use cases." }
  ],
  "response_format": { "type": "json_object" }
}
```

Then run:

```bash
bash -c 'curl -s "https://api.deepseek.com/chat/completions" -X POST -H "Content-Type: application/json" -H "Authorization: Bearer ${DEEPSEEK_API_KEY}" -d @/tmp/deepseek_request.json' | jq -r '.choices[0].message.content'
```
### 6. Multi-turn Conversation

Continue a conversation with message history.

Write to `/tmp/deepseek_request.json`:

```json
{
  "model": "deepseek-chat",
  "messages": [
    { "role": "user", "content": "My name is Alice." },
    { "role": "assistant", "content": "Nice to meet you, Alice." },
    { "role": "user", "content": "What is my name?" }
  ]
}
```

Then run:

```bash
bash -c 'curl -s "https://api.deepseek.com/chat/completions" -X POST -H "Content-Type: application/json" -H "Authorization: Bearer ${DEEPSEEK_API_KEY}" -d @/tmp/deepseek_request.json' | jq -r '.choices[0].message.content'
```
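The history pattern can be sketched as a loop: append each assistant reply to the message list before sending the next user turn. A Python sketch with a stubbed `send` function standing in for the API call (the stub and its canned replies are assumptions for illustration):

```python
def send(messages):
    """Stub for POST /chat/completions; a real version would call the API."""
    last = messages[-1]["content"].lower()
    if "what is my name" in last:
        return "Your name is Alice."
    return "Nice to meet you, Alice."

messages = []  # full conversation history, sent in every request

def ask(user_text):
    """Append the user turn, get a reply, and keep it in the history."""
    messages.append({"role": "user", "content": user_text})
    reply = send(messages)
    messages.append({"role": "assistant", "content": reply})
    return reply

ask("My name is Alice.")
print(ask("What is my name?"))  # Your name is Alice.
```

The API is stateless: the model only "remembers" Alice because the earlier turns are resent in `messages` each time.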
### 7. Code Completion (FIM)

Use Fill-in-the-Middle for code completion (beta endpoint).

Write to `/tmp/deepseek_request.json`:

```json
{
  "model": "deepseek-chat",
  "prompt": "def add(a, b):\n    ",
  "max_tokens": 20
}
```

Then run:

```bash
bash -c 'curl -s "https://api.deepseek.com/beta/completions" -X POST -H "Content-Type: application/json" -H "Authorization: Bearer ${DEEPSEEK_API_KEY}" -d @/tmp/deepseek_request.json' | jq -r '.choices[0].text'
```

FIM is useful for:

- Code completion in editors
- Filling gaps in documents
- Context-aware text generation
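To actually fill a *middle*, the beta completions endpoint also accepts a `suffix` field holding the code after the gap. A small Python helper for building the payload (the helper name is mine, for illustration):

```python
import json

def fim_request(prefix: str, suffix: str = "", max_tokens: int = 64) -> str:
    """Build the JSON body for POST /beta/completions (FIM)."""
    body = {
        "model": "deepseek-chat",
        "prompt": prefix,        # code before the gap
        "max_tokens": max_tokens,
    }
    if suffix:
        body["suffix"] = suffix  # code after the gap; the model fills the middle
    return json.dumps(body)

payload = fim_request("def add(a, b):\n    return ", "\n\nprint(add(1, 2))")
print(payload)
```

Write the returned string to `/tmp/deepseek_request.json` and send it with the same `curl` command as above.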
### 8. Function Calling (Tools)

Define functions the model can call.

Write to `/tmp/deepseek_request.json`:

```json
{
  "model": "deepseek-chat",
  "messages": [
    { "role": "user", "content": "What is the weather in Tokyo?" }
  ],
  "tools": [
    {
      "type": "function",
      "function": {
        "name": "get_weather",
        "description": "Get the current weather for a location",
        "parameters": {
          "type": "object",
          "properties": {
            "location": { "type": "string", "description": "The city name" }
          },
          "required": ["location"]
        }
      }
    }
  ]
}
```

Then run:

```bash
bash -c 'curl -s "https://api.deepseek.com/chat/completions" -X POST -H "Content-Type: application/json" -H "Authorization: Bearer ${DEEPSEEK_API_KEY}" -d @/tmp/deepseek_request.json'
```

The model will return a `tool_calls` array when it wants to use a function.
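A sketch of the round trip: parse `tool_calls` from a (mock) assistant message, run the matching local function, and append a `tool` role message carrying the result. The mock message and the `get_weather` stub are illustrative:

```python
import json

# Mock assistant message containing a tool call (abbreviated, illustrative values).
assistant_msg = {
    "role": "assistant",
    "content": None,
    "tool_calls": [{
        "id": "call_1",
        "type": "function",
        "function": {"name": "get_weather", "arguments": '{"location": "Tokyo"}'},
    }],
}

def get_weather(location: str) -> str:
    """Stub; a real implementation would query a weather service."""
    return f"Sunny, 22C in {location}"

LOCAL_TOOLS = {"get_weather": get_weather}

messages = [
    {"role": "user", "content": "What is the weather in Tokyo?"},
    assistant_msg,
]

# Execute each requested call and feed the result back as a "tool" message,
# then send `messages` to the API again for the final natural-language answer.
for call in assistant_msg["tool_calls"]:
    fn = LOCAL_TOOLS[call["function"]["name"]]
    args = json.loads(call["function"]["arguments"])  # arguments arrive as a JSON string
    messages.append({
        "role": "tool",
        "tool_call_id": call["id"],
        "content": fn(**args),
    })

print(messages[-1]["content"])  # Sunny, 22C in Tokyo
```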
### 9. Check Token Usage

Extract usage information from the response.

Write to `/tmp/deepseek_request.json`:

```json
{
  "model": "deepseek-chat",
  "messages": [
    { "role": "user", "content": "Hello" }
  ]
}
```

Then run:

```bash
bash -c 'curl -s "https://api.deepseek.com/chat/completions" -X POST -H "Content-Type: application/json" -H "Authorization: Bearer ${DEEPSEEK_API_KEY}" -d @/tmp/deepseek_request.json' | jq '.usage'
```

The response includes:

- `prompt_tokens`: input token count
- `completion_tokens`: output token count
- `total_tokens`: sum of both
## OpenAI SDK Compatibility

DeepSeek is fully compatible with OpenAI SDKs. Just change the base URL.

Python:

```python
from openai import OpenAI

client = OpenAI(api_key="your-deepseek-key", base_url="https://api.deepseek.com")
```

Node.js:

```javascript
import OpenAI from 'openai';

const client = new OpenAI({ apiKey: 'your-deepseek-key', baseURL: 'https://api.deepseek.com' });
```
## Tips: Complex JSON Payloads

For complex requests with nested JSON (like function calling), use a temp file to avoid shell escaping issues.

Write to `/tmp/deepseek_request.json`:

```json
{
  "model": "deepseek-chat",
  "messages": [{ "role": "user", "content": "What is the weather in Tokyo?" }],
  "tools": [
    {
      "type": "function",
      "function": {
        "name": "get_weather",
        "description": "Get current weather",
        "parameters": {
          "type": "object",
          "properties": { "location": { "type": "string" } },
          "required": ["location"]
        }
      }
    }
  ]
}
```

Then run:

```bash
bash -c 'curl -s "https://api.deepseek.com/chat/completions" -X POST -H "Content-Type: application/json" -H "Authorization: Bearer ${DEEPSEEK_API_KEY}" -d @/tmp/deepseek_request.json'
```
## Guidelines

- **Choose the right model**: use `deepseek-chat` for general tasks, `deepseek-reasoner` for complex reasoning
- **Use caching**: repeated prompts with the same prefix benefit from cache pricing ($0.028 vs $0.28 per 1M input tokens)
- **Set max_tokens**: prevent runaway generation by setting appropriate limits
- **Use streaming for long responses**: better UX for real-time applications
- **JSON mode requires a system prompt**: when using `response_format`, include JSON instructions in the system message
- **FIM uses the beta endpoint**: the code completion endpoint is at `api.deepseek.com/beta`
- **Complex JSON**: use temp files with `-d @filename` to avoid shell quoting issues