Trending-skills metatron-pentest-assistant
AI-powered penetration testing assistant using local LLM (metatron-qwen via Ollama) on Parrot OS Linux
```shell
git clone https://github.com/Aradotso/trending-skills

# Or copy just this skill into your local skills directory:
T=$(mktemp -d) && git clone --depth=1 https://github.com/Aradotso/trending-skills "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/metatron-pentest-assistant" ~/.claude/skills/aradotso-trending-skills-metatron-pentest-assistant && rm -rf "$T"
```
skills/metatron-pentest-assistant/SKILL.md

METATRON Penetration Testing Assistant
Skill by ara.so — Daily 2026 Skills collection.
METATRON is a CLI-based AI penetration testing assistant that runs entirely locally — no cloud, no API keys. It orchestrates recon tools (nmap, whois, whatweb, curl, dig, nikto), feeds results to a locally running fine-tuned LLM (metatron-qwen via Ollama), and stores all findings in MariaDB with full scan history, vulnerability tracking, and PDF/HTML export.
Architecture Overview
```
metatron.py  ← CLI entry point, main menu, scan orchestration
db.py        ← MariaDB CRUD (history, vulns, fixes, exploits, summary)
tools.py     ← Recon tool runners (nmap, whois, whatweb, curl, dig, nikto)
llm.py       ← Ollama interface, agentic loop, AI tool dispatch
search.py    ← DuckDuckGo search + CVE lookup (no API key)
Modelfile    ← Custom metatron-qwen model config
```
Database spine: every scan creates a row in `history` keyed by `sl_no`; all other tables link back via `sl_no`.
Installation
1. Clone and set up Python environment
```shell
git clone https://github.com/sooryathejas/METATRON.git
cd METATRON
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
```
2. Install system recon tools
```shell
sudo apt install nmap whois whatweb curl dnsutils nikto
```
3. Install Ollama and pull base model
```shell
curl -fsSL https://ollama.com/install.sh | sh

# 8GB+ RAM:
ollama pull huihui_ai/qwen3.5-abliterated:9b

# <8GB RAM — use 4b and edit the Modelfile FROM line accordingly:
ollama pull huihui_ai/qwen3.5-abliterated:4b
```
4. Build the custom metatron-qwen model
```shell
ollama create metatron-qwen -f Modelfile
ollama list   # verify metatron-qwen appears
```
Modelfile (the repo ships this — key parameters):
```
FROM huihui_ai/qwen3.5-abliterated:9b
PARAMETER num_ctx 16384
PARAMETER temperature 0.7
PARAMETER top_k 10
PARAMETER top_p 0.9
```
To use 4b instead, edit the Modelfile:

```
FROM huihui_ai/qwen3.5-abliterated:4b
```
Then rebuild:
```shell
ollama create metatron-qwen -f Modelfile
```
5. Set up MariaDB
```shell
sudo systemctl start mariadb
sudo systemctl enable mariadb
mysql -u root
```

```sql
CREATE DATABASE metatron;
CREATE USER 'metatron'@'localhost' IDENTIFIED BY '123';
GRANT ALL PRIVILEGES ON metatron.* TO 'metatron'@'localhost';
FLUSH PRIVILEGES;
EXIT;
```
Create all tables:
```shell
mysql -u metatron -p123 metatron < schema.sql
```
Or create them manually (paste from the README schema block). The 5 tables:

- `history` — one row per scan session (spine)
- `vulnerabilities` — findings per session
- `fixes` — remediation per vulnerability
- `exploits_attempted` — exploit attempts per session
- `summary` — raw scan + full AI analysis dump
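The repo's schema.sql isn't reproduced in this document. As a rough guide, here is a sketch of what the tables could look like, with columns inferred from the INSERT statements in the db.py examples later in this document; types, key constraints, and the `exploits_attempted` columns are assumptions, so treat schema.sql as authoritative.

```sql
-- Sketch only: columns inferred from the db.py INSERT statements below;
-- the repo's schema.sql is authoritative.
CREATE TABLE history (
    sl_no INT AUTO_INCREMENT PRIMARY KEY,   -- the "spine" every table links to
    target VARCHAR(255),
    scan_date DATETIME,
    status VARCHAR(32)
);

CREATE TABLE vulnerabilities (
    id INT AUTO_INCREMENT PRIMARY KEY,
    sl_no INT,
    vuln_name VARCHAR(255),
    severity VARCHAR(16),
    port VARCHAR(16),
    service VARCHAR(64),
    description TEXT,
    FOREIGN KEY (sl_no) REFERENCES history(sl_no)
);

CREATE TABLE fixes (
    id INT AUTO_INCREMENT PRIMARY KEY,
    sl_no INT,
    vuln_id INT,
    fix_text TEXT,
    source VARCHAR(32),
    FOREIGN KEY (sl_no) REFERENCES history(sl_no)
);

CREATE TABLE summary (
    id INT AUTO_INCREMENT PRIMARY KEY,
    sl_no INT,
    raw_scan LONGTEXT,
    ai_analysis LONGTEXT,
    risk_level VARCHAR(16),
    generated_at DATETIME,
    FOREIGN KEY (sl_no) REFERENCES history(sl_no)
);

-- exploits_attempted: columns not shown in this document; see schema.sql.
```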
Running METATRON
METATRON requires two terminals:
Terminal 1 — Load model into memory:
```shell
ollama run metatron-qwen
# Wait for the >>> prompt before proceeding
```
Terminal 2 — Launch the assistant:
```shell
cd ~/METATRON
source venv/bin/activate
python metatron.py
```
Main Menu Flow
```
[1] New Scan     → enter target IP/domain → select tools → AI analyzes → saved to DB
[2] View History → browse past scans → view/edit/delete/export
[3] Exit
```
New Scan — Tool Selection
```
[1] nmap
[2] whois
[3] whatweb
[4] curl headers
[5] dig DNS
[6] nikto
[a] Run all (except nikto)
[n] Run all + nikto (slow, thorough)
```
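The menu above can be mapped to tool lists with a small dispatch table. The helper below is a sketch, not code from the repo; the function name `resolve_selection` is an assumption, and it targets the `run_selected_tools()` tool names shown later in this document.

```python
# Hypothetical helper (not from the repo): maps a tool-selection menu choice
# to the list of tool names that run_selected_tools() expects.
ALL_TOOLS = ['nmap', 'whois', 'whatweb', 'curl', 'dig']


def resolve_selection(choice: str) -> list:
    """Return the tool list for a menu choice; empty list if unrecognized."""
    menu = {
        '1': ['nmap'], '2': ['whois'], '3': ['whatweb'],
        '4': ['curl'], '5': ['dig'], '6': ['nikto'],
        'a': list(ALL_TOOLS),                # everything except nikto
        'n': list(ALL_TOOLS) + ['nikto'],    # everything, slow but thorough
    }
    return menu.get(choice.strip().lower(), [])
```

A dict lookup keeps the mapping in one place, so registering a new tool means one new entry rather than another `if` branch.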
Exporting Reports
From View History → select a scan → export:

- PDF — professional vulnerability report
- HTML — browser-viewable report
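The exporter's internals aren't shown in this document. A minimal stdlib-only sketch of what a browser-viewable HTML report could look like follows; the function name `export_html_report` and the layout are assumptions, not the repo's implementation.

```python
import html


def export_html_report(history: dict, vulns: list) -> str:
    """Sketch of a browser-viewable report (hypothetical, may differ from
    the repo). All DB values are escaped before embedding in markup."""
    rows = "".join(
        "<tr><td>{}</td><td>{}</td><td>{}</td><td>{}</td></tr>".format(
            html.escape(str(v.get("vuln_name", ""))),
            html.escape(str(v.get("severity", ""))),
            html.escape(str(v.get("port", ""))),
            html.escape(str(v.get("service", ""))),
        )
        for v in vulns
    )
    target = html.escape(str(history.get("target", "")))
    return (
        "<html><body>"
        f"<h1>METATRON Report: {target}</h1>"
        "<table><tr><th>Vulnerability</th><th>Severity</th>"
        "<th>Port</th><th>Service</th></tr>"
        f"{rows}</table></body></html>"
    )
```

Escaping matters here: scan output and AI analysis can contain `<`/`>` characters (or attacker-controlled banner text), so embedding it unescaped would make the report itself an HTML-injection vector.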
Code Examples
Programmatically run a scan and save to the DB (db.py patterns)

```python
# db.py
import mysql.connector
from datetime import datetime


def get_db_connection():
    return mysql.connector.connect(
        host="localhost",
        user="metatron",
        password="123",
        database="metatron",
    )


def create_scan_session(target: str) -> int:
    """Create a new history entry, return sl_no."""
    conn = get_db_connection()
    cursor = conn.cursor()
    cursor.execute(
        "INSERT INTO history (target, scan_date, status) VALUES (%s, %s, %s)",
        (target, datetime.now(), "active"),
    )
    conn.commit()
    sl_no = cursor.lastrowid
    cursor.close()
    conn.close()
    return sl_no


def save_vulnerability(sl_no: int, vuln_name: str, severity: str,
                       port: str, service: str, description: str) -> int:
    """Save a vulnerability finding, return vuln id."""
    conn = get_db_connection()
    cursor = conn.cursor()
    cursor.execute(
        """INSERT INTO vulnerabilities
           (sl_no, vuln_name, severity, port, service, description)
           VALUES (%s, %s, %s, %s, %s, %s)""",
        (sl_no, vuln_name, severity, port, service, description),
    )
    conn.commit()
    vuln_id = cursor.lastrowid
    cursor.close()
    conn.close()
    return vuln_id


def save_fix(sl_no: int, vuln_id: int, fix_text: str, source: str = "AI"):
    conn = get_db_connection()
    cursor = conn.cursor()
    cursor.execute(
        "INSERT INTO fixes (sl_no, vuln_id, fix_text, source) VALUES (%s, %s, %s, %s)",
        (sl_no, vuln_id, fix_text, source),
    )
    conn.commit()
    cursor.close()
    conn.close()


def save_summary(sl_no: int, raw_scan: str, ai_analysis: str, risk_level: str):
    conn = get_db_connection()
    cursor = conn.cursor()
    cursor.execute(
        """INSERT INTO summary
           (sl_no, raw_scan, ai_analysis, risk_level, generated_at)
           VALUES (%s, %s, %s, %s, %s)""",
        (sl_no, raw_scan, ai_analysis, risk_level, datetime.now()),
    )
    conn.commit()
    cursor.close()
    conn.close()


def get_scan_history():
    """Retrieve all scan sessions."""
    conn = get_db_connection()
    cursor = conn.cursor(dictionary=True)
    cursor.execute("SELECT * FROM history ORDER BY scan_date DESC")
    rows = cursor.fetchall()
    cursor.close()
    conn.close()
    return rows


def get_vulnerabilities_for_scan(sl_no: int):
    conn = get_db_connection()
    cursor = conn.cursor(dictionary=True)
    cursor.execute("SELECT * FROM vulnerabilities WHERE sl_no = %s", (sl_no,))
    rows = cursor.fetchall()
    cursor.close()
    conn.close()
    return rows
```
Running recon tools (tools.py patterns)

```python
# tools.py
import subprocess


def run_nmap(target: str) -> str:
    """Run nmap service/version scan."""
    result = subprocess.run(
        ["nmap", "-sV", "-sC", "-T4", target],
        capture_output=True, text=True, timeout=120,
    )
    return result.stdout + result.stderr


def run_whois(target: str) -> str:
    result = subprocess.run(
        ["whois", target],
        capture_output=True, text=True, timeout=30,
    )
    return result.stdout


def run_whatweb(target: str) -> str:
    result = subprocess.run(
        ["whatweb", "-a", "3", target],
        capture_output=True, text=True, timeout=60,
    )
    return result.stdout


def run_curl_headers(target: str) -> str:
    result = subprocess.run(
        ["curl", "-I", "-L", "--max-time", "15", target],
        capture_output=True, text=True, timeout=20,
    )
    return result.stdout


def run_dig(target: str) -> str:
    result = subprocess.run(
        ["dig", target, "ANY"],
        capture_output=True, text=True, timeout=15,
    )
    return result.stdout


def run_nikto(target: str) -> str:
    """Slow but thorough web scanner."""
    result = subprocess.run(
        ["nikto", "-h", target],
        capture_output=True, text=True, timeout=300,
    )
    return result.stdout


def run_selected_tools(target: str, selections: list) -> dict:
    """
    selections: list of tool names, e.g. ['nmap', 'whois', 'dig']
    Returns dict of {tool_name: output}
    """
    tool_map = {
        'nmap': run_nmap,
        'whois': run_whois,
        'whatweb': run_whatweb,
        'curl': run_curl_headers,
        'dig': run_dig,
        'nikto': run_nikto,
    }
    results = {}
    for tool in selections:
        if tool in tool_map:
            print(f"[*] Running {tool} on {target}...")
            try:
                results[tool] = tool_map[tool](target)
            except subprocess.TimeoutExpired:
                results[tool] = f"[TIMEOUT] {tool} timed out"
            except Exception as e:
                results[tool] = f"[ERROR] {tool}: {e}"
    return results
```
Querying the Ollama LLM (llm.py patterns)

```python
# llm.py
import json

import requests

OLLAMA_URL = "http://localhost:11434/api/generate"
MODEL_NAME = "metatron-qwen"


def query_llm(prompt: str, stream: bool = True) -> str:
    """Send prompt to metatron-qwen, return full response."""
    payload = {"model": MODEL_NAME, "prompt": prompt, "stream": stream}
    response = requests.post(OLLAMA_URL, json=payload, stream=stream)
    if not stream:
        return response.json().get("response", "")
    full_response = ""
    for line in response.iter_lines():
        if line:
            chunk = json.loads(line)
            token = chunk.get("response", "")
            print(token, end="", flush=True)
            full_response += token
            if chunk.get("done"):
                break
    print()
    return full_response


def build_pentest_prompt(target: str, scan_results: dict) -> str:
    """Build the analysis prompt from scan results."""
    combined = "\n\n".join(
        f"=== {tool.upper()} ===\n{output}"
        for tool, output in scan_results.items()
    )
    return f"""You are an expert penetration tester analyzing scan results for: {target}

SCAN RESULTS:
{combined}

Provide a structured analysis covering:
1. VULNERABILITIES FOUND — name, severity (Critical/High/Medium/Low), port, service, description
2. EXPLOIT SUGGESTIONS — specific tools or techniques for each vulnerability
3. RECOMMENDED FIXES — actionable remediation steps
4. OVERALL RISK LEVEL — Critical / High / Medium / Low

Format vulnerabilities as:
VULN: <name> | SEVERITY: <level> | PORT: <port> | SERVICE: <service>
DESC: <description>
FIX: <remediation>
"""


def analyze_target(target: str, scan_results: dict) -> str:
    prompt = build_pentest_prompt(target, scan_results)
    print("\n[🤖] metatron-qwen analyzing...\n")
    return query_llm(prompt)
```
DuckDuckGo search and CVE lookup (search.py patterns)

```python
# search.py
from duckduckgo_search import DDGS


def search_exploits(query: str, max_results: int = 5) -> list:
    """Search DuckDuckGo for exploit info — no API key needed."""
    with DDGS() as ddgs:
        results = list(ddgs.text(query, max_results=max_results))
    return results


def lookup_cve(cve_id: str) -> list:
    """Look up a CVE identifier."""
    query = f"{cve_id} vulnerability exploit details"
    return search_exploits(query)


def search_service_vulns(service: str, version: str) -> list:
    query = f"{service} {version} known vulnerabilities CVE exploit"
    return search_exploits(query)


# Usage example:
# results = lookup_cve("CVE-2021-44228")
# results = search_service_vulns("Apache", "2.4.49")
```
Full scan pipeline (end-to-end)
```python
from tools import run_selected_tools
from llm import analyze_target
from db import create_scan_session, save_vulnerability, save_fix, save_summary


def run_full_scan(target: str, tools: list = None):
    if tools is None:
        tools = ['nmap', 'whois', 'whatweb', 'curl', 'dig']

    # 1. Create DB session
    sl_no = create_scan_session(target)
    print(f"[+] Scan session #{sl_no} created for {target}")

    # 2. Run recon
    scan_results = run_selected_tools(target, tools)
    raw_scan = "\n\n".join(f"{k}:\n{v}" for k, v in scan_results.items())

    # 3. AI analysis
    ai_output = analyze_target(target, scan_results)

    # 4. Parse and save (simplified — real parser in llm.py)
    save_summary(sl_no, raw_scan, ai_output, risk_level="High")

    print(f"\n[✓] Results saved to database (sl_no={sl_no})")
    return sl_no, ai_output


# Run it:
# sl_no, analysis = run_full_scan("192.168.1.1", ['nmap', 'whois'])
```
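The pipeline notes that the real parser lives in llm.py but doesn't show it. A minimal sketch of parsing the `VULN: ... | SEVERITY: ...` lines that the analysis prompt asks the model to emit follows; the function name `parse_vulns` and the field names are assumptions, and the repo's actual parser may differ.

```python
import re


def parse_vulns(ai_output: str) -> list:
    """Sketch of a parser for the VULN:/DESC:/FIX: format requested by the
    analysis prompt (hypothetical; the repo's parser may differ)."""
    pattern = re.compile(
        r"VULN:\s*(?P<name>.+?)\s*\|\s*SEVERITY:\s*(?P<severity>.+?)\s*\|"
        r"\s*PORT:\s*(?P<port>.+?)\s*\|\s*SERVICE:\s*(?P<service>.+)"
    )
    vulns = []
    current = None
    for line in ai_output.splitlines():
        line = line.strip()
        m = pattern.match(line)
        if m:
            # New vulnerability record; DESC/FIX lines attach to it below.
            current = {**m.groupdict(), "description": "", "fix": ""}
            vulns.append(current)
        elif current and line.startswith("DESC:"):
            current["description"] = line[len("DESC:"):].strip()
        elif current and line.startswith("FIX:"):
            current["fix"] = line[len("FIX:"):].strip()
    return vulns
```

Each parsed dict maps directly onto the `save_vulnerability()` and `save_fix()` arguments, so the pipeline's step 4 can loop over the result and persist it.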
Common Patterns
Check if Ollama model is running before scan
```python
import requests


def check_ollama_ready(model: str = "metatron-qwen") -> bool:
    try:
        resp = requests.get("http://localhost:11434/api/tags", timeout=5)
        models = [m["name"] for m in resp.json().get("models", [])]
        return any(model in m for m in models)
    except Exception:
        return False


if not check_ollama_ready():
    print("[!] metatron-qwen not found. Run: ollama run metatron-qwen")
    exit(1)
```
Query scan history
```python
from db import get_db_connection


def get_full_scan_report(sl_no: int) -> dict:
    conn = get_db_connection()
    cursor = conn.cursor(dictionary=True)
    cursor.execute("SELECT * FROM history WHERE sl_no = %s", (sl_no,))
    history = cursor.fetchone()
    cursor.execute("SELECT * FROM vulnerabilities WHERE sl_no = %s", (sl_no,))
    vulns = cursor.fetchall()
    cursor.execute("SELECT * FROM summary WHERE sl_no = %s", (sl_no,))
    summary = cursor.fetchone()
    cursor.close()
    conn.close()
    return {"history": history, "vulnerabilities": vulns, "summary": summary}
```
Add a custom recon tool
```python
# In tools.py — add your tool function:
def run_gobuster(target: str,
                 wordlist: str = "/usr/share/wordlists/dirb/common.txt") -> str:
    result = subprocess.run(
        ["gobuster", "dir", "-u", f"http://{target}", "-w", wordlist],
        capture_output=True, text=True, timeout=180,
    )
    return result.stdout


# Register it in the tool_map in run_selected_tools():
tool_map['gobuster'] = run_gobuster
```
Troubleshooting
metatron-qwen not found / connection refused

```shell
# Terminal 1: ensure the model is loaded
ollama run metatron-qwen   # should show the >>> prompt

# Verify the Ollama API is reachable
curl http://localhost:11434/api/tags
```
Out of memory when running 9b model
```shell
# Switch to 4b: edit the Modelfile first line:
# FROM huihui_ai/qwen3.5-abliterated:4b
ollama create metatron-qwen -f Modelfile
```
MariaDB connection error
```shell
sudo systemctl status mariadb
sudo systemctl start mariadb

# Verify credentials work:
mysql -u metatron -p123 metatron -e "SHOW TABLES;"
```
mysql.connector not found

```shell
source venv/bin/activate
pip install mysql-connector-python
```
nmap requires root for SYN scan
```shell
sudo nmap -sV -sC -T4 <target>

# Or use TCP connect scan (no root needed):
nmap -sT -sV <target>
```
Nikto timeout
Nikto is slow by design. Either use [a] (run all without nikto) or increase the subprocess timeout in tools.py:

```python
result = subprocess.run(
    ["nikto", "-h", target],
    capture_output=True, text=True,
    timeout=600,  # 10 minutes
)
```
Slow AI responses
The 9b model needs time to load: the first query is slow while the model loads into memory, and subsequent queries are faster. Ensure no other GPU- or memory-heavy processes are running.
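One mitigation: Ollama's generate API accepts a `keep_alive` field that controls how long the model stays resident after a request, avoiding a reload delay between queries. A sketch of adding it to the request payload used in llm.py follows; the `build_payload` helper name and the 30-minute value are assumptions, not from the repo.

```python
def build_payload(prompt: str, keep_alive: str = "30m") -> dict:
    """Sketch (not from the repo): Ollama's /api/generate accepts a
    keep_alive field that keeps the model loaded between requests
    (e.g. "30m", or -1 for indefinitely)."""
    return {
        "model": "metatron-qwen",
        "prompt": prompt,
        "stream": True,
        "keep_alive": keep_alive,
    }
```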
Configuration Reference
| Setting | Location | Default | Notes |
|---|---|---|---|
| DB host | `db.py` | `localhost` | Change for a remote DB |
| DB user | `db.py` | `metatron` | Match the SQL user you created |
| DB password | `db.py` | `123` | Change in production |
| DB name | `db.py` | `metatron` | |
| Ollama URL | `llm.py` | `http://localhost:11434/api/generate` | |
| Model name | `llm.py` | `metatron-qwen` | Must match the name built with `ollama create` |
| Context window | `Modelfile` | `16384` | Increase for large scans |
| Temperature | `Modelfile` | `0.7` | Lower = more deterministic |
Security note: For production use, replace the hardcoded DB password with an environment variable:

```python
import os

password = os.environ.get("METATRON_DB_PASSWORD", "123")
```
Legal Disclaimer
METATRON is for educational purposes and authorized penetration testing only. Only scan systems you own or have explicit written permission to test. Unauthorized scanning is illegal.