Skills azure-aigateway

Configure Azure API Management as an AI Gateway for AI models, MCP tools, and agents. WHEN: semantic caching, token limit, content safety, load balancing, AI model governance, MCP rate limiting, jailbreak detection, add Azure OpenAI backend, add AI Foundry model, test AI gateway, LLM policies, configure AI backend, token metrics, AI cost control, convert API to MCP, import OpenAPI to gateway.

install
source · Clone the upstream repo
git clone https://github.com/microsoft/skills
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/microsoft/skills "$T" && mkdir -p ~/.claude/skills && cp -r "$T/.github/plugins/azure-skills/skills/azure-aigateway" ~/.claude/skills/microsoft-skills-azure-aigateway && rm -rf "$T"
manifest: .github/plugins/azure-skills/skills/azure-aigateway/SKILL.md
source content

Azure AI Gateway

Configure Azure API Management (APIM) as an AI Gateway for governing AI models, MCP tools, and agents.

To deploy APIM, use the azure-prepare skill. See APIM deployment guide.

When to Use This Skill

| Category | Triggers |
|---|---|
| Model Governance | "semantic caching", "token limits", "load balance AI", "track token usage" |
| Tool Governance | "rate limit MCP", "protect my tools", "configure my tool", "convert API to MCP" |
| Agent Governance | "content safety", "jailbreak detection", "filter harmful content" |
| Configuration | "add Azure OpenAI backend", "configure my model", "add AI Foundry model" |
| Testing | "test AI gateway", "call OpenAI through gateway" |

Quick Reference

| Policy | Purpose | Details |
|---|---|---|
| `azure-openai-token-limit` | Cost control | Model Policies |
| `azure-openai-semantic-cache-lookup/store` | 60-80% cost savings | Model Policies |
| `azure-openai-emit-token-metric` | Observability | Model Policies |
| `llm-content-safety` | Safety & compliance | Agent Policies |
| `rate-limit-by-key` | MCP/tool protection | Tool Policies |

Get Gateway Details

# Get gateway URL
az apim show --name <apim-name> --resource-group <rg> --query "gatewayUrl" -o tsv

# List backends (AI models)
az apim backend list --service-name <apim-name> --resource-group <rg> \
  --query "[].{id:name, url:url}" -o table

# Get subscription key
az apim subscription keys list \
  --service-name <apim-name> --resource-group <rg> --subscription-id <sub-id>

Test AI Endpoint

GATEWAY_URL=$(az apim show --name <apim-name> --resource-group <rg> --query "gatewayUrl" -o tsv)

curl -X POST "${GATEWAY_URL}/openai/deployments/<deployment>/chat/completions?api-version=2024-02-01" \
  -H "Content-Type: application/json" \
  -H "Ocp-Apim-Subscription-Key: <key>" \
  -d '{"messages": [{"role": "user", "content": "Hello"}], "max_tokens": 100}'
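The request path above follows the Azure OpenAI deployment-scoped route that the gateway forwards. A small helper can assemble it from its parts (a sketch; the gateway URL and deployment name are illustrative placeholders):

```shell
#!/bin/sh
# Build the chat-completions URL the gateway expects.
# Args: gateway URL, deployment name, api-version.
build_chat_url() {
  gateway="${1%/}"   # strip a trailing slash if present
  deployment="$2"
  api_version="$3"
  printf '%s/openai/deployments/%s/chat/completions?api-version=%s\n' \
    "$gateway" "$deployment" "$api_version"
}

build_chat_url "https://my-apim.azure-api.net/" "gpt-4o" "2024-02-01"
```

Pair the resulting URL with the `Ocp-Apim-Subscription-Key` header as shown in the curl call above.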

Common Tasks

Add AI Backend

See references/patterns.md for full steps.

# Discover AI resources
az cognitiveservices account list --query "[?kind=='OpenAI']" -o table

# Create backend
az apim backend create --service-name <apim> --resource-group <rg> \
  --backend-id openai-backend --protocol http --url "https://<aoai>.openai.azure.com/openai"

# Grant access (managed identity)
az role assignment create --assignee <apim-principal-id> \
  --role "Cognitive Services User" --scope <aoai-resource-id>
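Once the backend exists and APIM's managed identity has access, an API policy routes traffic to it. A minimal sketch (the `backend-id` matches the `--backend-id` created above):

```xml
<policies>
  <inbound>
    <base />
    <!-- Authenticate to Azure OpenAI with APIM's managed identity -->
    <authentication-managed-identity resource="https://cognitiveservices.azure.com" />
    <!-- Route requests to the registered backend -->
    <set-backend-service backend-id="openai-backend" />
  </inbound>
</policies>
```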

Apply AI Governance Policy

Recommended policy order in `<inbound>`:

  1. Authentication - Managed identity to backend
  2. Semantic Cache Lookup - Check cache before calling AI
  3. Token Limits - Cost control
  4. Content Safety - Filter harmful content
  5. Backend Selection - Load balancing
  6. Metrics - Token usage tracking

See references/policies.md for complete example.


Troubleshooting

| Issue | Solution |
|---|---|
| Token limit 429 | Increase `tokens-per-minute` or add load balancing |
| No cache hits | Lower `score-threshold` to 0.7 |
| Content false positives | Increase category thresholds (5-6) |
| Backend auth 401 | Grant APIM the "Cognitive Services User" role |

See references/troubleshooting.md for details.


References

SDK Quick References