Awesome-omni-skill AI Nervous System - Document Intelligence
Vector search and AI-powered document processing skills for OpenClaw integration
```sh
git clone https://github.com/diegosouzapw/awesome-omni-skill

T=$(mktemp -d) && git clone --depth=1 https://github.com/diegosouzapw/awesome-omni-skill "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/backend/ai-nervous-system-document-intelligence" ~/.claude/skills/diegosouzapw-awesome-omni-skill-ai-nervous-system-document-intelligence && rm -rf "$T"

T=$(mktemp -d) && git clone --depth=1 https://github.com/diegosouzapw/awesome-omni-skill "$T" && mkdir -p ~/.openclaw/skills && cp -r "$T/skills/backend/ai-nervous-system-document-intelligence" ~/.openclaw/skills/diegosouzapw-awesome-omni-skill-ai-nervous-system-document-intelligence && rm -rf "$T"
```
skills/backend/ai-nervous-system-document-intelligence/SKILL.md

Document Intelligence Skills
These skills enable OpenClaw (Moltbot) to interact with the AI Nervous System's document processing pipeline via Telegram, Discord, or other interfaces.
Skills
document_upload
Purpose: Upload a document for AI processing (summarization + vector embedding)
Trigger Phrases:
- "upload document"
- "process file"
- "index document"
- "add to knowledge base"
Parameters:
| Parameter | Type | Required | Description |
|---|---|---|---|
| file_path | string | Yes | Path to the document file |
| file_url | string | No | URL to download document from |
API Endpoint:
```
POST http://localhost:8000/upload
Content-Type: multipart/form-data

file: <binary file data>
```
Response Schema:
```json
{
  "message": "Document uploaded successfully",
  "document": {
    "id": 1,
    "filename": "example.md",
    "status": "processing",
    "summary": null,
    "created_at": "2024-01-01T00:00:00Z"
  }
}
```
Instructions:
- Accept file from user (attachment or path)
- POST to /upload endpoint with multipart form data
- Return document ID and initial status
- Optionally poll /status/{doc_id} for completion
- Return AI summary when processing completes
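The upload-and-poll flow above can be sketched with only the Python standard library. The base URL and the `file` form field come from the endpoint description in this document; the helper names (`build_multipart`, `upload_document`, `wait_for_summary`) are illustrative, not part of the skill API.

```python
import json
import time
import uuid
import urllib.request

BASE_URL = "http://localhost:8000"  # backend address from the endpoint description above


def build_multipart(filename: str, data: bytes) -> tuple[bytes, str]:
    """Encode a single file as multipart/form-data under the 'file' field."""
    boundary = uuid.uuid4().hex
    body = (
        f"--{boundary}\r\n"
        f'Content-Disposition: form-data; name="file"; filename="{filename}"\r\n'
        f"Content-Type: application/octet-stream\r\n\r\n"
    ).encode() + data + f"\r\n--{boundary}--\r\n".encode()
    return body, f"multipart/form-data; boundary={boundary}"


def upload_document(path: str) -> dict:
    """POST a local file to /upload and return the created document record."""
    with open(path, "rb") as fh:
        body, content_type = build_multipart(path.rsplit("/", 1)[-1], fh.read())
    req = urllib.request.Request(
        f"{BASE_URL}/upload", data=body,
        headers={"Content-Type": content_type}, method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["document"]


def wait_for_summary(doc_id: int, timeout: float = 120.0, interval: float = 2.0) -> dict:
    """Poll /status/{doc_id} until the document reaches a terminal state."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        with urllib.request.urlopen(f"{BASE_URL}/status/{doc_id}") as resp:
            doc = json.load(resp)
        if doc["status"] in ("completed", "error"):
            return doc
        time.sleep(interval)
    raise TimeoutError(f"document {doc_id} still processing after {timeout}s")
```

A production bot would likely use `requests` or `httpx` for the multipart encoding; the manual boundary construction here just keeps the sketch dependency-free.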
Example Interaction:
```
User: Upload this research paper
Bot: Uploading document... Created document #42
Bot: Processing with Ollama (llama3.2:3b)...
Bot: Complete! Summary:
• Key finding 1
• Key finding 2
• Key finding 3
```
document_search
Purpose: Semantic vector search across indexed documents
Trigger Phrases:
- "search documents for"
- "find files about"
- "what documents mention"
- "search knowledge base"
Parameters:
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| query | string | Yes | - | Natural language search query |
| limit | integer | No | 5 | Max results to return (1-20) |
API Endpoint:
```
GET http://localhost:8000/search?q={query}&limit={limit}
```
Response Schema:
```json
{
  "query": "machine learning optimization",
  "results": [
    {
      "id": 1,
      "filename": "ml_paper.md",
      "summary": "This document covers...",
      "similarity": 0.89
    }
  ],
  "count": 1
}
```
Instructions:
- Parse user's search intent into query string
- GET /search with URL-encoded query
- Format results with similarity scores
- Highlight high-relevance (>0.7) matches
- Include document summaries in response
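The search steps above can be sketched as a small client: URL-encode the query, clamp the limit to the documented 1-20 range, and flag matches above the 0.7 relevance threshold. The base URL comes from the endpoint description; the function names and the ⭐ marker are illustrative choices, not part of the skill API.

```python
import json
import urllib.parse
import urllib.request

BASE_URL = "http://localhost:8000"  # backend address from the endpoint description above


def search_documents(query: str, limit: int = 5) -> dict:
    """GET /search with a URL-encoded query and a limit clamped to 1-20."""
    params = urllib.parse.urlencode({"q": query, "limit": max(1, min(limit, 20))})
    with urllib.request.urlopen(f"{BASE_URL}/search?{params}") as resp:
        return json.load(resp)


def format_results(results: list[dict]) -> str:
    """Render results as chat lines, starring high-relevance (>0.7) matches."""
    lines = []
    for r in results:
        flag = " ⭐" if r["similarity"] > 0.7 else ""
        lines.append(f"📄 {r['filename']} ({r['similarity']:.0%} match){flag}")
        lines.append(f"Summary: {r['summary']}")
    return "\n".join(lines)
```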
Example Interaction:
```
User: Search for documents about Python async patterns
Bot: Found 3 relevant documents:

📄 CLAUDE.md (89% match)
Summary: Project uses async/await patterns with FastAPI...

📄 api_design.txt (72% match)
Summary: Guidelines for async endpoint design...

📄 notes.md (58% match)
Summary: Meeting notes discussing concurrency...
```
document_status
Purpose: Check processing status of a specific document
Trigger Phrases:
- "status of document"
- "is document ready"
- "check processing"
Parameters:
| Parameter | Type | Required | Description |
|---|---|---|---|
| doc_id | integer | Yes | Document ID to check |
API Endpoint:
```
GET http://localhost:8000/status/{doc_id}
```
Response Schema:
```json
{
  "id": 1,
  "filename": "example.md",
  "status": "completed",
  "summary": "AI-generated summary...",
  "created_at": "2024-01-01T00:00:00Z",
  "updated_at": "2024-01-01T00:01:00Z"
}
```
Status Values:
- pending: Queued for processing
- processing: Ollama is analyzing
- completed: Ready with summary and embedding
- error: Processing failed
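A status check can map those four values onto user-facing messages. The field names match the response schema above; `get_status` and `describe` are illustrative helper names, and the output wording is one possible rendering.

```python
import json
import urllib.request

BASE_URL = "http://localhost:8000"  # backend address from the endpoint description above

# Messages for the four documented status values.
STATUS_MESSAGES = {
    "pending": "Queued for processing",
    "processing": "Ollama is analyzing",
    "completed": "Ready with summary and embedding",
    "error": "Processing failed",
}


def get_status(doc_id: int) -> dict:
    """Fetch the document record from GET /status/{doc_id}."""
    with urllib.request.urlopen(f"{BASE_URL}/status/{doc_id}") as resp:
        return json.load(resp)


def describe(doc: dict) -> str:
    """One-line, user-facing summary of a document's processing state."""
    note = STATUS_MESSAGES.get(doc["status"], "Unknown status")
    return f"#{doc['id']} {doc['filename']}: {doc['status']} ({note})"
```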
Autonomous Behaviors
Proactive File Watcher
The Document Intelligence module includes an autonomous file watcher that monitors:
- ~/Downloads
- ~/Documents/ToProcess

When new .txt, .md, or .pdf files appear, they are automatically:
- Uploaded to the FastAPI backend
- Processed with Ollama for summarization
- Embedded with nomic-embed-text for vector search
- Added to the knowledge base
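The watcher's detect-then-upload loop can be sketched as a simple polling scan over the two directories named above; a real implementation might use inotify or the watchdog library instead. The directory list and extensions come from this document, while `scan_new_files`, `watch`, and the 5-second interval are illustrative.

```python
import time
from pathlib import Path

# Directories and extensions monitored per the description above.
WATCH_DIRS = [Path.home() / "Downloads", Path.home() / "Documents" / "ToProcess"]
EXTENSIONS = {".txt", ".md", ".pdf"}


def scan_new_files(seen: set[Path]) -> list[Path]:
    """Return files with watched extensions not seen on a prior pass."""
    new = []
    for d in WATCH_DIRS:
        if not d.is_dir():
            continue
        for p in d.iterdir():
            if p.suffix.lower() in EXTENSIONS and p not in seen:
                seen.add(p)
                new.append(p)
    return new


def watch(interval: float = 5.0) -> None:
    """Poll forever, announcing and handing off each newly arrived document."""
    seen: set[Path] = set()
    scan_new_files(seen)  # prime with existing files so only new arrivals trigger
    while True:
        for path in scan_new_files(seen):
            print(f"🔔 New document detected: {path.name}")
            # Hand off to the /upload flow here (e.g. an upload_document helper).
        time.sleep(interval)
```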
Notification Pattern:
```
Bot: 🔔 New document detected: research_paper.pdf
Bot: Processing...
Bot: ✅ Indexed! Summary: [3 bullet points]
```
Heartbeat Monitor
Every 30 minutes, the system audits the document pipeline:
- Checks for stuck documents (processing > 10 min)
- Retries failed Ollama tasks
- Prepares daily intelligence briefing
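The stuck-document check in that audit can be sketched as a pure function over document records: anything still in `processing` more than 10 minutes after its last update is flagged for retry. The 10-minute threshold and the `updated_at`/`status` fields come from this document; `find_stuck` is an illustrative name.

```python
from datetime import datetime, timedelta, timezone

STUCK_AFTER = timedelta(minutes=10)  # "processing > 10 min" threshold from above


def find_stuck(documents: list[dict], now: datetime) -> list[dict]:
    """Return documents still 'processing' more than STUCK_AFTER past their last update."""
    stuck = []
    for doc in documents:
        # updated_at uses the ISO 8601 'Z' suffix shown in the response schema.
        updated = datetime.fromisoformat(doc["updated_at"].replace("Z", "+00:00"))
        if doc["status"] == "processing" and now - updated > STUCK_AFTER:
            stuck.append(doc)
    return stuck
```

The heartbeat would run this every 30 minutes and re-queue whatever it returns.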
Integration Notes
For Telegram Bot:
```python
@bot.message_handler(content_types=['document'])
async def handle_document(message):
    file_info = await bot.get_file(message.document.file_id)
    # Download and POST to /upload
```
For Discord Bot:
```python
@bot.event
async def on_message(message):
    if message.attachments:
        for attachment in message.attachments:
            # Download and POST to /upload
            ...
```
Cold War Jazz Aesthetic
All responses should maintain the tactical intelligence aesthetic:
- Use [INTEL], [CLASSIFIED], and [BRIEFING] prefixes
- Monospace formatting for data
- Teal (#4a9c94) for success, Amber (#d4a56a) for processing