Pro-workflow llm-gate
LLM-powered quality verification using prompt hooks. Validates commit messages, code patterns, and conventions using AI before allowing operations. Use to set up intelligent guardrails.
install
source · Clone the upstream repo
git clone https://github.com/rohitg00/pro-workflow
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/rohitg00/pro-workflow "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/llm-gate" ~/.claude/skills/rohitg00-pro-workflow-llm-gate && rm -rf "$T"
manifest:
skills/llm-gate/SKILL.md
LLM Gate
Use Claude Code's type: "prompt" hooks to create intelligent quality gates that use AI to verify operations.
Trigger
Use when:
- Setting up commit message validation
- Enforcing code conventions beyond what linters catch
- Creating smart guardrails for specific operations
How Prompt Hooks Work
Claude Code supports hooks with type: "prompt" that run a small LLM (Haiku by default) to verify conditions:
{ "PreToolUse": [{ "matcher": "Bash", "hooks": [{ "type": "prompt", "if": "Bash(git commit*)", "prompt": "Check if this git commit follows conventional commit format (<type>(<scope>): <summary>). The commit command is: $ARGUMENTS. Return {\"ok\": true} if valid, {\"ok\": false, \"reason\": \"...\"} if not.", "model": "haiku", "timeout": 15 }] }] }
The hook:
- Substitutes $ARGUMENTS with the JSON hook input
- Sends the prompt to Haiku (fast, cheap)
- Expects {"ok": true} or {"ok": false, "reason": "..."}
- If not ok → blocks the tool call with the reason
Example Gates
Conventional Commit Validator
{ "type": "prompt", "if": "Bash(git commit*)", "prompt": "Verify this git commit follows conventional commits: type(scope): summary. Types: feat,fix,refactor,test,docs,chore,perf,ci. Summary under 72 chars. Input: $ARGUMENTS", "model": "haiku" }
Destructive Command Guard
{ "type": "prompt", "if": "Bash(rm *)", "prompt": "Check if this rm command is safe. Flag if it uses -rf on important directories (src/, node_modules/, .git/). Input: $ARGUMENTS", "model": "haiku" }
API Key Leak Prevention
{ "type": "prompt", "matcher": "Write", "prompt": "Check if this file write contains hardcoded API keys, secrets, passwords, or tokens. Input: $ARGUMENTS. Return ok:false if secrets found.", "model": "haiku" }
Agent Hooks
For complex verification, use type: "agent" (runs a full agent):
{ "type": "agent", "if": "Bash(git push*)", "prompt": "Review all staged changes for security issues before pushing. Check for: hardcoded secrets, SQL injection, XSS vulnerabilities, exposed internal URLs.", "model": "haiku", "timeout": 60 }
Setup Guide
- Choose which operations to gate
- Write the prompt (keep it focused, under 100 words)
- Pick the model (haiku for speed, sonnet for accuracy)
- Set timeout (15s for prompts, 60s for agents)
- Add to hooks.json under the appropriate event
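Putting the steps together, a hooks.json fragment combining a prompt gate and an agent gate might look like this (structure inferred from the examples in this skill; prompts shortened for illustration):

```json
{
  "PreToolUse": [{
    "matcher": "Bash",
    "hooks": [
      {
        "type": "prompt",
        "if": "Bash(git commit*)",
        "prompt": "Verify this commit follows conventional commits. Input: $ARGUMENTS",
        "model": "haiku",
        "timeout": 15
      },
      {
        "type": "agent",
        "if": "Bash(git push*)",
        "prompt": "Review staged changes for hardcoded secrets before pushing.",
        "model": "haiku",
        "timeout": 60
      }
    ]
  }]
}
```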
Rules
- Use Haiku for simple checks (fast, cheap)
- Use Sonnet only for complex analysis
- Keep prompts under 100 words for reliability
- Always include an if condition to avoid running on every tool call
- Set reasonable timeouts (15s prompt, 60s agent)
- Test hooks before deploying to avoid blocking workflows