Rei-skills azure-ai-contentsafety-py
install
source · Clone the upstream repo
```bash
git clone https://github.com/rootcastleco/rei-skills
```
Claude Code · Install into ~/.claude/skills/
```bash
T=$(mktemp -d) && git clone --depth=1 https://github.com/rootcastleco/rei-skills "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/azure-ai-contentsafety-py" ~/.claude/skills/rootcastleco-rei-skills-azure-ai-contentsafety-py && rm -rf "$T"
```
manifest:
skills/azure-ai-contentsafety-py/SKILL.md
⚠️ AUTHORIZED USE ONLY — This skill is intended for authorized security professionals only. Use only against systems you own or have explicit written permission to test. Unauthorized use may violate applicable laws.
Azure AI Content Safety SDK for Python
Detect harmful user-generated and AI-generated content in applications.
Installation
```bash
pip install azure-ai-contentsafety
```
Environment Variables
```bash
CONTENT_SAFETY_ENDPOINT=https://<resource>.cognitiveservices.azure.com
CONTENT_SAFETY_KEY=<your-api-key>
```
Authentication
API Key
```python
from azure.ai.contentsafety import ContentSafetyClient
from azure.core.credentials import AzureKeyCredential
import os

client = ContentSafetyClient(
    endpoint=os.environ["CONTENT_SAFETY_ENDPOINT"],
    credential=AzureKeyCredential(os.environ["CONTENT_SAFETY_KEY"])
)
```
Entra ID
```python
import os

from azure.ai.contentsafety import ContentSafetyClient
from azure.identity import DefaultAzureCredential

client = ContentSafetyClient(
    endpoint=os.environ["CONTENT_SAFETY_ENDPOINT"],
    credential=DefaultAzureCredential()
)
```
Requires the `azure-identity` package (`pip install azure-identity`).
Analyze Text
```python
from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions, TextCategory
from azure.core.credentials import AzureKeyCredential

client = ContentSafetyClient(endpoint, AzureKeyCredential(key))

request = AnalyzeTextOptions(text="Your text content to analyze")
response = client.analyze_text(request)

# Check each category
for category in [TextCategory.HATE, TextCategory.SELF_HARM, TextCategory.SEXUAL, TextCategory.VIOLENCE]:
    result = next((r for r in response.categories_analysis if r.category == category), None)
    if result:
        print(f"{category}: severity {result.severity}")
```
Analyze Image
```python
from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeImageOptions, ImageData
from azure.core.credentials import AzureKeyCredential

client = ContentSafetyClient(endpoint, AzureKeyCredential(key))

# From file: pass the raw bytes; the SDK base64-encodes them for the request
with open("image.jpg", "rb") as f:
    request = AnalyzeImageOptions(image=ImageData(content=f.read()))

response = client.analyze_image(request)

for result in response.categories_analysis:
    print(f"{result.category}: severity {result.severity}")
```
Image from URL
```python
from azure.ai.contentsafety.models import AnalyzeImageOptions, ImageData

request = AnalyzeImageOptions(
    image=ImageData(blob_url="https://example.com/image.jpg")
)
response = client.analyze_image(request)
```
Text Blocklist Management
Create Blocklist
```python
from azure.ai.contentsafety import BlocklistClient
from azure.ai.contentsafety.models import TextBlocklist
from azure.core.credentials import AzureKeyCredential

blocklist_client = BlocklistClient(endpoint, AzureKeyCredential(key))

blocklist = TextBlocklist(
    blocklist_name="my-blocklist",
    description="Custom terms to block"
)
result = blocklist_client.create_or_update_text_blocklist(
    blocklist_name="my-blocklist",
    options=blocklist
)
```
Add Block Items
```python
from azure.ai.contentsafety.models import AddOrUpdateTextBlocklistItemsOptions, TextBlocklistItem

items = AddOrUpdateTextBlocklistItemsOptions(
    blocklist_items=[
        TextBlocklistItem(text="blocked-term-1"),
        TextBlocklistItem(text="blocked-term-2")
    ]
)
result = blocklist_client.add_or_update_blocklist_items(
    blocklist_name="my-blocklist",
    options=items
)
```
Analyze with Blocklist
```python
from azure.ai.contentsafety.models import AnalyzeTextOptions

request = AnalyzeTextOptions(
    text="Text containing blocked-term-1",
    blocklist_names=["my-blocklist"],
    halt_on_blocklist_hit=True
)
response = client.analyze_text(request)

if response.blocklists_match:
    for match in response.blocklists_match:
        print(f"Blocked: {match.blocklist_item_text}")
```
Severity Levels
Text analysis returns 4 severity levels (0, 2, 4, 6) by default. For 8 levels (0-7):
```python
from azure.ai.contentsafety.models import AnalyzeTextOptions, AnalyzeTextOutputType

request = AnalyzeTextOptions(
    text="Your text",
    output_type=AnalyzeTextOutputType.EIGHT_SEVERITY_LEVELS
)
```
Harm Categories
| Category | Description |
|---|---|
| Hate | Attacks based on identity (race, religion, gender, etc.) |
| Sexual | Sexual content, relationships, anatomy |
| Violence | Physical harm, weapons, injury |
| SelfHarm | Self-injury, suicide, eating disorders |
Severity Scale
| Level | Text Range | Image Range | Meaning |
|---|---|---|---|
| 0 | Safe | Safe | No harmful content |
| 2 | Low | Low | Mild references |
| 4 | Medium | Medium | Moderate content |
| 6 | High | High | Severe content |
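To make the table concrete, here is a tiny sketch that maps a returned severity score back to these band names. The `severity_label` helper is hypothetical, not part of the SDK; collapsing eight-level scores (0-7) down to the four bands by rounding is an assumption based on how the two scales align.

```python
# Band names from the table above; keys are the default four-level scores.
SEVERITY_LABELS = {0: "Safe", 2: "Low", 4: "Medium", 6: "High"}

def severity_label(severity: int) -> str:
    """Assumed mapping: eight-level scores (0-7) round down to the
    nearest four-level band (e.g. 5 -> Medium, 7 -> High)."""
    return SEVERITY_LABELS[(severity // 2) * 2]

print(severity_label(5))  # Medium
```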
Client Types
| Client | Purpose |
|---|---|
| `ContentSafetyClient` | Analyze text and images |
| `BlocklistClient` | Manage custom blocklists |
Best Practices
- Use blocklists for domain-specific terms
- Set severity thresholds appropriate for your use case (see the sketch after this list)
- Handle multiple categories — content can be harmful in multiple ways
- Use `halt_on_blocklist_hit` for immediate rejection
- Log analysis results for audit and improvement
- Consider 8-severity mode for finer-grained control
- Pre-moderate AI outputs before showing to users
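The sketch below ties several of these practices together: blocklist rejection, per-category thresholds, and audit logging. It is a minimal example under stated assumptions: the `moderate` helper and its threshold are hypothetical, it assumes the default four-level output and the `my-blocklist` blocklist created earlier, and the logging setup is illustrative rather than part of the SDK.

```python
import logging
import os

from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions
from azure.core.credentials import AzureKeyCredential
from azure.core.exceptions import HttpResponseError

logger = logging.getLogger("moderation")

client = ContentSafetyClient(
    endpoint=os.environ["CONTENT_SAFETY_ENDPOINT"],
    credential=AzureKeyCredential(os.environ["CONTENT_SAFETY_KEY"]),
)

def moderate(text: str, threshold: int = 4) -> bool:
    """Return True if the text is safe to show; log every decision for audit."""
    request = AnalyzeTextOptions(
        text=text,
        blocklist_names=["my-blocklist"],  # assumes the blocklist created earlier
        halt_on_blocklist_hit=True,        # reject immediately on a blocklist hit
    )
    try:
        response = client.analyze_text(request)
    except HttpResponseError as e:
        logger.error("analysis failed: %s", e)
        raise
    if response.blocklists_match:
        logger.info("blocked by blocklist: %s",
                    [m.blocklist_item_text for m in response.blocklists_match])
        return False
    # Check every category -- content can be harmful in more than one way.
    for result in response.categories_analysis:
        logger.info("%s: severity %s", result.category, result.severity)
        if (result.severity or 0) >= threshold:
            return False
    return True

# Pre-moderate an AI output before showing it to the user.
if moderate("model-generated reply"):
    print("safe to display")
```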
When to Use
Use this skill when an application needs to detect and moderate harmful user-generated or AI-generated text and images with the Azure AI Content Safety service, as described above.
🏰 Rei Skills — Curated by Rootcastle Engineering & Innovation | Batuhan Ayrıbaş
Engineering Beyond Boundaries | admin@rootcastle.com