vibeguard

An anti-hallucination specification skill for AI-assisted development. Covers a seven-layer defense architecture, quantitative indicators, execution templates, and practical case studies. Intended for code review, task startup checks, and weekly reviews.

Install
source · Clone the upstream repo
git clone https://github.com/majiayu000/vibeguard
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/majiayu000/vibeguard "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/vibeguard" ~/.claude/skills/majiayu000-vibeguard-vibeguard && rm -rf "$T"
manifest: skills/vibeguard/SKILL.md
source content

# VibeGuard — Anti-hallucination Specification Skill

Overview

VibeGuard is an anti-hallucination framework for AI-assisted development that systematically blocks common failure modes in LLM code generation through a seven-layer defense architecture.

Canonical contract sources for this skill:

  • README.md
    — product entry and current Core vs Workflow boundary
  • docs/rule-reference.md
    — public rule/guard summary
  • schemas/install-modules.json
    — install/runtime contract

docs/internal/history/spec.md
remains a historical design snapshot and should not be treated as the authoritative implementation contract.

Invocation

Calling /vibeguard lets you:

  • View the complete anti-hallucination specifications
  • Get the task startup checklist
  • View the risk-assessment scoring matrix
  • Get the weekly review template

Trigger conditions

Triggered when the user mentions:

  • "Check anti-hallucination specifications", "vibeguard"
  • "task startup check", "task contract"
  • "Weekly review", "review template"
  • "risk assessment", "risk scoring"
  • "code quality guard", "guard rules"

Quick review of seven-layer defense architecture

| Layer | Name | Key tools / rules |
|-------|------|-------------------|
| L1 | Anti-duplication system | check_duplicates.py / search first, then write |
| L2 | Naming constraints | check_naming_convention.py / snake_case |
| L3 | Pre-commit hooks | ruff / gitleaks / shellcheck |
| L4 | Architecture guard tests | test_code_quality_guards.py (five rules) |
| L5 | Skill / workflow | plan-flow / fixflow / optflow |
| L6 | Prompt-embedded rules | CLAUDE.md mandatory rules |
| L7 | Weekly review | review-template.md |
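The L1 idea of "search first, then write" can be sketched as a scan for an existing definition before a new one is added. This is illustrative only, not the repo's actual check_duplicates.py; the function name and behavior here are assumptions.

```python
# Illustrative sketch of an L1 duplicate guard: before writing a new function or
# class, search the codebase for an existing definition with the same name.
# Not the repository's check_duplicates.py.
import re
from pathlib import Path

def find_existing_definitions(root: str, name: str):
    """Return (file, line_number) pairs where `name` is already defined."""
    pattern = re.compile(rf"^\s*(def|class)\s+{re.escape(name)}\b")
    hits = []
    for path in Path(root).rglob("*.py"):
        for lineno, line in enumerate(path.read_text().splitlines(), start=1):
            if pattern.match(line):
                hits.append((str(path), lineno))
    return hits
```

A non-empty result means the guard should block the write and point the author at the existing definition instead.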

Quick use

Task startup check

Refer to references/task-contract.yaml and confirm that:
1. Goals are clear and verifiable
2. Data sources have been identified
3. Acceptance criteria are testable
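The three checks above can be mechanized. The sketch below is a minimal illustration; the real references/task-contract.yaml schema is not reproduced here, so the field names (goal, data_source, acceptance) are assumptions.

```python
# Hypothetical task-contract startup check; field names are assumed, not taken
# from the actual references/task-contract.yaml schema.
from dataclasses import dataclass, field

@dataclass
class TaskContract:
    goal: str                      # what the task should achieve, stated verifiably
    data_source: str               # where the input data comes from
    acceptance: list = field(default_factory=list)  # testable acceptance criteria

def startup_check(contract: TaskContract) -> list:
    """Return a list of problems; an empty list means the contract passes."""
    problems = []
    if not contract.goal.strip():
        problems.append("goal is empty or not verifiable")
    if not contract.data_source.strip():
        problems.append("data source has not been identified")
    if not contract.acceptance:
        problems.append("no testable acceptance criteria")
    return problems
```

Running the check before any code is written turns the checklist into a gate rather than a suggestion.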

Risk assessment

Refer to references/scoring-matrix.md to score each finding:
- impact: 1-5
- effort: 1-5
- risk: 1-5
- confidence: 1-5
Formula: priority = (impact × confidence) - (effort + risk)
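The priority formula above can be written directly as code; the example pair below simply illustrates how the formula ranks findings.

```python
# Minimal sketch of the priority formula: each score is on a 1-5 scale.
def priority(impact: int, effort: int, risk: int, confidence: int) -> int:
    for score in (impact, effort, risk, confidence):
        assert 1 <= score <= 5, "each score must be between 1 and 5"
    return impact * confidence - (effort + risk)

# A high-impact, high-confidence finding that is cheap and safe to fix
# outranks a costly, risky one:
print(priority(impact=5, effort=1, risk=1, confidence=4))  # 18
print(priority(impact=2, effort=4, risk=4, confidence=3))  # -2
```

Note that because impact multiplies confidence while effort and risk only subtract, a confident high-impact finding dominates even when it carries moderate cost.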

Weekly review

Refer to references/review-template.md and record:
1. Regression events this week
2. Guard interception statistics
3. Indicator trends
4. Priorities for next week
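Item 2, guard interception statistics, can be tallied mechanically. VibeGuard's actual log format is not specified here, so this sketch assumes one "guard_name: message" line per interception, purely for illustration.

```python
# Hypothetical weekly tally of guard interceptions; the log format is assumed.
from collections import Counter

def interception_stats(log_lines):
    """Count interceptions per guard for the weekly review."""
    counts = Counter()
    for line in log_lines:
        guard, _, _ = line.partition(":")
        if guard.strip():
            counts[guard.strip()] += 1
    return dict(counts)

log = [
    "check_duplicates: rejected duplicate helper parse_date()",
    "check_naming_convention: CamelCase function renamed",
    "check_duplicates: rejected re-implemented retry loop",
]
print(interception_stats(log))
# {'check_duplicates': 2, 'check_naming_convention': 1}
```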

Reference documentation

  • references/task-contract.yaml
— task startup checklist (machine-verifiable format)
  • references/review-template.md
    — weekly review template
  • references/scoring-matrix.md
    — risk-impact scoring matrix
  • README.md
    (repository root directory) — current product entrypoint
  • docs/rule-reference.md
    (repository root directory) — current rule/guard summary
  • schemas/install-modules.json
    (repository root directory) — current install/runtime contract

Execution rules

  • Go through the task contract before starting each development task
  • Conduct a review every Friday using the review template
  • When a regression is discovered, first locate the defense layer that failed, then strengthen its rules
  • Every new rule must have a corresponding automatic detection mechanism (guard / hook / test)