Awesome-omni-skills vibe-code-auditor

Vibe Code Auditor workflow skill. Use this skill when the user needs to audit rapidly generated or AI-produced code for structural flaws, fragility, and production risks, and the operator should preserve the upstream workflow, copied support files, and provenance before merging or handing off.

Install

Source · Clone the upstream repo:

git clone https://github.com/diegosouzapw/awesome-omni-skills

Claude Code · Install into ~/.claude/skills/:

T=$(mktemp -d) && git clone --depth=1 https://github.com/diegosouzapw/awesome-omni-skills "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/vibe-code-auditor" ~/.claude/skills/diegosouzapw-awesome-omni-skills-vibe-code-auditor && rm -rf "$T"

Manifest: skills/vibe-code-auditor/SKILL.md

Source content

Vibe Code Auditor

Overview

This public intake copy packages `plugins/antigravity-awesome-skills-claude/skills/vibe-code-auditor` from https://github.com/sickn33/antigravity-awesome-skills into the native Omni Skills editorial shape without hiding its origin.

Use it when the operator needs the upstream workflow, support files, and repository context to stay intact while the public validator and private enhancer continue their normal downstream flow.

This intake keeps the copied upstream files intact and uses `metadata.json` plus `ORIGIN.md` as the provenance anchor for review.


Imported source sections that did not map cleanly to the public headings are still preserved below or in the support files. Notable imported sections: Identity, Purpose, Pre-Audit Checklist, Audit Dimensions, Output Format, Task-Specific Inputs.

When to Use This Skill

Use this section as the trigger filter. It should make the activation boundary explicit before the operator loads files, runs commands, or opens a pull request.

  • Code was generated or heavily assisted by AI tools
  • The system evolved without a deliberate architecture
  • A prototype needs to be productionized
  • Code works but feels fragile or inconsistent
  • You suspect hidden technical debt
  • Preparing a project for long-term maintenance or team handoff

Operating Table

| Situation | Start here | Why it matters |
| --- | --- | --- |
| First-time use | `metadata.json` | Confirms repository, branch, commit, and imported path before touching the copied workflow |
| Provenance review | `ORIGIN.md` | Gives reviewers a plain-language audit trail for the imported source |
| Workflow execution | `SKILL.md` | Starts with the smallest copied file that materially changes execution |
| Supporting context | `SKILL.md` | Adds the next most relevant copied source file without loading the entire package |
| Handoff decision | `## Related Skills` | Helps the operator switch to a stronger native skill when the task drifts |

Workflow

This workflow is intentionally editorial and operational at the same time. It keeps the imported source useful to the operator while still satisfying the public intake standards that feed the downstream enhancer flow.

  1. Confirm the user goal, the scope of the imported workflow, and whether this skill is still the right router for the task.
  2. Read the overview and provenance files before loading any copied upstream support files.
  3. Load only the references, examples, prompts, or scripts that materially change the outcome for the current request.
  4. Execute the upstream workflow while keeping provenance and source boundaries explicit in the working notes.
  5. Validate the result against the upstream expectations and the evidence you can point to in the copied files.
  6. Escalate or hand off to a related skill when the work moves out of this imported workflow's center of gravity.
  7. Before merge or closure, record what was used, what changed, and what the reviewer still needs to verify.

Imported Workflow Notes

Imported: Identity

You are a senior software architect specializing in evaluating prototype-quality and AI-generated code. Your role is to determine whether code that "works" is actually robust, maintainable, and production-ready.

You do not rewrite code to demonstrate skill. You do not raise alarms over cosmetic issues. You identify real risks, explain why they matter, and recommend the minimum changes required to address them.

Examples

Example 1: Ask for the upstream workflow directly

Use @vibe-code-auditor to handle <task>. Start from the copied upstream workflow, load only the files that change the outcome, and keep provenance visible in the answer.

Explanation: This is the safest starting point when the operator needs the imported workflow, but not the entire repository.

Example 2: Ask for a provenance-grounded review

Review @vibe-code-auditor against metadata.json and ORIGIN.md, then explain which copied upstream files you would load first and why.

Explanation: Use this before review or troubleshooting when you need a precise, auditable explanation of origin and file selection.

Example 3: Narrow the copied support files before execution

Use @vibe-code-auditor for <task>. Load only the copied references, examples, or scripts that change the outcome, and name the files explicitly before proceeding.

Explanation: This keeps the skill aligned with progressive disclosure instead of loading the whole copied package by default.

Example 4: Build a reviewer packet

Review @vibe-code-auditor using the copied upstream files plus provenance, then summarize any gaps before merge.

Explanation: This is useful when the PR is waiting for human review and you want a repeatable audit packet.

Best Practices

Treat the generated public skill as a reviewable packaging layer around the upstream repository. The goal is to keep provenance explicit and load only the copied source material that materially improves execution.

  • Ground every finding in the actual code provided. Do not speculate about code you have not seen.
  • Report the location (file and line) of each finding whenever the information is available. If the input is a snippet without line numbers, describe the location structurally (e.g., "inside the process_payment function").
  • Do not flag style preferences (indentation, naming conventions, etc.) unless they directly impair readability or create ambiguity that could cause bugs.
  • Do not recommend architectural rewrites unless the current structure makes the system impossible to extend or maintain safely.
  • If the code is too small or too abstract to evaluate a dimension meaningfully, say so explicitly rather than generating generic advice.
  • If you detect a potential security issue but cannot confirm it from the code alone (e.g., depends on framework configuration not shown), flag it as "unconfirmed — verify" rather than omitting or overstating it.
  • Scan for critical patterns first (security, data loss, crashes) before deeper analysis.

Imported Operating Notes

Imported: Behavior Rules

  • Ground every finding in the actual code provided. Do not speculate about code you have not seen.
  • Report the location (file and line) of each finding whenever the information is available. If the input is a snippet without line numbers, describe the location structurally (e.g., "inside the `process_payment` function").
  • Do not flag style preferences (indentation, naming conventions, etc.) unless they directly impair readability or create ambiguity that could cause bugs.
  • Do not recommend architectural rewrites unless the current structure makes the system impossible to extend or maintain safely.
  • If the code is too small or too abstract to evaluate a dimension meaningfully, say so explicitly rather than generating generic advice.
  • If you detect a potential security issue but cannot confirm it from the code alone (e.g., depends on framework configuration not shown), flag it as "unconfirmed — verify" rather than omitting or overstating it.

Efficiency Rules:

  • Scan for critical patterns first (security, data loss, crashes) before deeper analysis
  • Group similar issues by pattern rather than listing each occurrence separately
  • Provide exact code fixes for critical/high issues when the solution is straightforward
  • Skip dimensions that are not applicable to the code size or type (state "Not applicable: [reason]")
  • Focus on issues that would cause production incidents, not theoretical concerns

Calibration:

  • For snippets (<100 lines): Focus on security, robustness, and obvious bugs only
  • For single files (100-500 lines): Add architecture and maintainability checks
  • For multi-file systems (500+ lines): Full audit across all 7 dimensions
  • For production code: Emphasize security, observability, and failure modes
  • For prototypes: Emphasize scalability limits and technical debt

Troubleshooting

Problem: The operator skipped the imported context and answered too generically

Symptoms: The result ignores the upstream workflow in `plugins/antigravity-awesome-skills-claude/skills/vibe-code-auditor`, fails to mention provenance, or does not use any copied source files at all.

Solution: Re-open `metadata.json`, `ORIGIN.md`, and the most relevant copied upstream files. Load only the files that materially change the answer, then restate the provenance before continuing.

Problem: The imported workflow feels incomplete during review

Symptoms: Reviewers can see the generated `SKILL.md`, but they cannot quickly tell which references, examples, or scripts matter for the current task.

Solution: Point at the exact copied references, examples, scripts, or assets that justify the path you took. If the gap is still real, record it in the PR instead of hiding it.

Problem: The task drifted into a different specialization

Symptoms: The imported skill starts in the right place, but the work turns into debugging, architecture, design, security, or release orchestration that a native skill handles better. Solution: Use the related skills section to hand off deliberately. Keep the imported provenance visible so the next skill inherits the right context instead of starting blind.

Related Skills

Each of the following is a handoff target; use it when the work is better handled by that native specialization after this imported skill establishes context.

  • @trpc-fullstack
  • @trust-calibrator
  • @turborepo-caching
  • @tutorial-engineer

Additional Resources

Use this support matrix and the linked files below as the operator packet for this imported skill. They should reflect real copied source material, not generic scaffolding.

| Resource family | What it gives the reviewer | Example path |
| --- | --- | --- |
| references | copied reference notes, guides, or background material from upstream | references/n/a |
| examples | worked examples or reusable prompts copied from upstream | examples/n/a |
| scripts | upstream helper scripts that change execution or validation | scripts/n/a |
| agents | routing or delegation notes that are genuinely part of the imported package | agents/n/a |
| assets | supporting assets or schemas copied from the source package | assets/n/a |

Imported Reference Notes

Imported: Purpose

This skill analyzes code produced through rapid iteration, vibe coding, or AI assistance and surfaces hidden technical risks, architectural weaknesses, and maintainability problems that are invisible during casual review.

Imported: Pre-Audit Checklist

Before beginning the audit, confirm the following. If any item is missing, state what is absent and proceed with the available information — do not halt.

  • Input received: Source code or files are present in the conversation.
  • Scope defined: Identify whether the input is a snippet, single file, or multi-file system.
  • Context noted: If no context was provided, state the assumptions made (e.g., "Assuming a web API backend with no specified scale requirements").

Quick Scan (first 60 seconds):

  • Count files and lines of code
  • Identify language(s) and framework(s)
  • Spot obvious red flags: hardcoded secrets, bare excepts, TODOs, commented-out code
  • Note the entry point(s) and data flow direction
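
The first two quick-scan bullets can be scripted; below is a minimal sketch, assuming a Python codebase and using only the standard library. The red-flag markers checked are a small illustrative subset.

```python
from collections import Counter
from pathlib import Path

def quick_scan(root: str = ".") -> None:
    """Count files and lines of code, and tally a few obvious red flags."""
    files = list(Path(root).rglob("*.py"))
    loc = 0
    red_flags: Counter[str] = Counter()
    for path in files:
        for line in path.read_text(errors="ignore").splitlines():
            loc += 1
            if "TODO" in line:
                red_flags["TODO"] += 1
            if line.lstrip().startswith("except:"):
                red_flags["bare except"] += 1
    print(f"{len(files)} files, {loc} lines of code")
    for flag, count in red_flags.most_common():
        print(f"red flag: {flag} x{count}")

quick_scan()
```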

Imported: Audit Dimensions

Evaluate the code across all seven dimensions below. For each finding, record: the dimension, a short title, the exact location (file and line number if available), the severity, a clear explanation, and a concrete recommendation.

Do not invent findings. Do not report issues you cannot substantiate from the code provided.

Pattern Recognition Shortcuts: Use these heuristics to accelerate detection:

| Pattern | Likely Issue | Quick Check |
| --- | --- | --- |
| `eval()`, `exec()`, `os.system()` | Security critical | Search for these strings |
| `except:` or `except Exception:` | Silent failures | Grep for bare excepts |
| `password`, `secret`, `key`, `token` in code | Hardcoded credentials | Search + check if literal string |
| `if DEBUG`, `debug=True` | Insecure defaults | Check config blocks |
| Functions >50 lines | Maintainability risk | Count lines per function |
| Nested `if` >3 levels | Complexity hotspot | Visual scan or cyclomatic check |
| No tests in repo | Quality gap | Look for `test_` files |
| Direct SQL string concat | SQL injection | Search for `f"SELECT` or `+ "SELECT` |
| `requests.get` without timeout | Production risk | Check HTTP client calls |
| `while True` without break | Unbounded loop | Search for infinite loops |
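
As an illustration of how these shortcuts can be automated, the sketch below scans a source tree for a few of the patterns from the table using Python's standard library. The regexes and the `.py` extension filter are simplifications chosen for this example.

```python
import re
from pathlib import Path

# Illustrative subset of the table above; tune patterns to the codebase.
CRITICAL_PATTERNS = {
    "security critical": re.compile(r"\b(eval|exec|os\.system)\s*\("),
    "silent failure": re.compile(r"\bexcept(\s+Exception)?\s*:"),
    "hardcoded credential": re.compile(r"(password|secret|api_key|token)\s*=\s*['\"]"),
    "insecure default": re.compile(r"debug\s*=\s*true", re.IGNORECASE),
    "missing timeout": re.compile(r"requests\.get\((?![^)]*timeout)"),
}

def quick_pattern_scan(root: str) -> list[tuple[str, int, str]]:
    """Return (path, line number, likely issue) for every pattern hit."""
    hits = []
    for path in Path(root).rglob("*.py"):
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            for issue, pattern in CRITICAL_PATTERNS.items():
                if pattern.search(line):
                    hits.append((str(path), lineno, issue))
    return hits

for path, lineno, issue in quick_pattern_scan("."):
    print(f"[CHECK] {path}:{lineno}: {issue}")
```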

1. Architecture & Design

Quick checks:

  • Can you identify the entry point in 10 seconds?

  • Are there clear boundaries between layers (API, business logic, data)?

  • Does any single file exceed 300 lines?

  • Separation of concerns violations (e.g., business logic inside route handlers or UI components)

  • God objects or monolithic modules with more than one clear responsibility

  • Tight coupling between components with no abstraction boundary

  • Missing or blurred system boundaries (e.g., database queries scattered across layers)

  • Circular dependencies or import cycles

  • No clear data flow or state management strategy
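
To make the separation-of-concerns point concrete, here is a hypothetical before/after sketch; the handler, the pricing rule, and the `db` handle are all invented for illustration.

```python
# Before: pricing logic and persistence live inside the route handler,
# so neither can be tested or reused without faking a request.
def create_order_handler(request, db):
    total = sum(item["price"] * item["qty"] for item in request["items"])
    db.execute("INSERT INTO orders (total) VALUES (?)", (total,))
    return {"total": total}

# After: the handler only translates the request; business logic and
# data access sit behind their own boundaries.
def calculate_total(items):
    return sum(item["price"] * item["qty"] for item in items)

def save_order(db, total):
    db.execute("INSERT INTO orders (total) VALUES (?)", (total,))

def create_order_handler(request, db):
    total = calculate_total(request["items"])
    save_order(db, total)
    return {"total": total}
```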

2. Consistency & Maintainability

Quick checks:

  • Are similar operations named consistently? (search for `get`, `fetch`, `load` variations)

  • Do functions have single, clear purposes based on their names?

  • Is duplicated logic visible? (search for repeated code blocks)

  • Naming inconsistencies (e.g., `get_user` vs `fetchUser` vs `retrieveUserData` for the same operation)

  • Mixed paradigms without justification (e.g., OOP and procedural code interleaved arbitrarily)

  • Copy-paste logic that should be extracted into a shared function (3+ repetitions = extract)

  • Abstractions that obscure rather than clarify intent

  • Inconsistent error handling patterns across modules

  • Magic numbers or strings without constants or configuration
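
A hypothetical sketch of the naming finding; the three variants below are assumed duplicates of one operation.

```python
# Before: three names for the same lookup, scattered across modules.
def get_user(user_id): ...
def fetchUser(user_id): ...         # duplicate, different naming convention
def retrieveUserData(user_id): ...  # duplicate, different naming convention

# After: one canonical name; the duplicates are removed, or temporarily
# aliased while callers migrate.
def get_user(user_id):
    """Single entry point for user lookup."""
    ...

fetchUser = retrieveUserData = get_user  # deprecation aliases, remove later
```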

3. Robustness & Error Handling

Quick checks:

  • Does every external call (API, DB, file) have error handling?

  • Are there any bare `except:` blocks?

  • What happens if inputs are empty, null, or malformed?

  • Missing input validation on entry points (HTTP handlers, CLI args, file reads)

  • Bare `except` or catch-all error handlers that swallow failures silently

  • Unhandled edge cases (empty collections, null/None returns, zero values)

  • Code that assumes external services always succeed without fallback logic

  • No retry logic for transient failures (network, rate limits)

  • Missing timeouts on blocking operations (HTTP, DB, I/O)

  • No validation of data from external sources before use
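
A before/after sketch of the bare-except, missing-timeout, and missing-retry findings above, assuming the third-party `requests` library; the endpoint and retry budget are placeholders.

```python
import time
import requests

URL = "https://api.example.com/orders"  # placeholder endpoint

# Before: hides every failure and can block forever on a slow server.
def fetch_orders_bad():
    try:
        return requests.get(URL).json()
    except:
        return None

# After: bounded timeout, specific exceptions, retry with backoff.
def fetch_orders(retries: int = 3):
    for attempt in range(retries):
        try:
            response = requests.get(URL, timeout=5)
            response.raise_for_status()
            return response.json()
        except (requests.Timeout, requests.ConnectionError):
            if attempt == retries - 1:
                raise  # surface the failure once the retry budget is spent
            time.sleep(2 ** attempt)
```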

4. Production Risks

Quick checks:

  • Search for hardcoded URLs, IPs, or paths

  • Check for logging statements (or lack thereof)

  • Look for database queries in loops

  • Hardcoded configuration values (URLs, credentials, timeouts, thresholds)

  • Missing structured logging or observability hooks

  • Unbounded loops, missing pagination, or N+1 query patterns

  • Blocking I/O in async contexts or thread-unsafe shared state

  • No graceful shutdown or cleanup on process exit

  • Missing health checks or readiness endpoints

  • No rate limiting or backpressure mechanisms

  • Synchronous operations in event-driven or async contexts
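
A minimal sketch of the hardcoded-configuration finding using only the standard library; the variable names are assumptions for this example.

```python
import os

# Before: environment-specific values frozen into the source.
DATABASE_URL = "postgres://admin:admin@10.0.0.5/prod"
REQUEST_TIMEOUT = 5

# After: read from the environment, with a safe default only where one
# exists; required values fail fast at startup instead of mid-request.
DATABASE_URL = os.environ["DATABASE_URL"]                        # required
REQUEST_TIMEOUT = float(os.environ.get("REQUEST_TIMEOUT", "5"))  # optional
```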

5. Security & Safety

Quick checks:

  • Search for: `eval`, `exec`, `os.system`, `subprocess`

  • Look for: `password`, `secret`, `api_key`, `token` as string literals

  • Check for: `SELECT * FROM` + string concatenation

  • Verify: input sanitization before DB, shell, or file operations

  • Unsanitized user input passed to databases, shells, file paths, or `eval`
  • Credentials, API keys, or tokens present in source code or logs

  • Insecure defaults (e.g., `DEBUG=True`, permissive CORS, no rate limiting)

  • Trust boundary violations (e.g., treating external data as internal without validation)

  • SQL injection vulnerabilities (string concatenation in queries)

  • Path traversal risks (user input in file paths without validation)

  • Missing authentication or authorization checks on sensitive operations

  • Insecure deserialization (pickle, yaml.load without SafeLoader)
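
A before/after sketch of the SQL string-concatenation finding, using the standard-library `sqlite3` driver; the schema and the hostile input are invented for the example.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
user_input = "1 OR 1=1"  # simulated hostile input

# Before: the input is spliced into the query and can rewrite it.
# conn.execute(f"SELECT * FROM users WHERE id = {user_input}")  # injectable

# After: a parameterized query treats the input strictly as data.
rows = conn.execute("SELECT * FROM users WHERE id = ?", (user_input,)).fetchall()
print(rows)  # [] because the hostile string matches no row
```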

6. Dead or Hallucinated Code

Quick checks:

  • Search for function/class definitions, then check for callers

  • Look for imports that seem unused

  • Check if referenced libraries match requirements.txt or package.json

  • Functions, classes, or modules that are defined but never called

  • Imports that do not exist in the declared dependencies

  • References to APIs, methods, or fields that do not exist in the used library version

  • Type annotations that contradict actual usage

  • Comments that describe behavior inconsistent with the code

  • Unreachable code blocks (after `return`, `raise`, or `break` in all paths)

  • Feature flags or conditionals that are always true/false
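
The defined-but-never-called check can be roughed out with the standard-library `ast` module. This sketch only sees direct by-name calls inside a single file, so treat hits as leads to verify, not proof of dead code.

```python
import ast

def find_uncalled_functions(source: str) -> set[str]:
    """Names of functions defined in `source` but never called by name."""
    tree = ast.parse(source)
    defined = {node.name for node in ast.walk(tree)
               if isinstance(node, ast.FunctionDef)}
    called = {node.func.id for node in ast.walk(tree)
              if isinstance(node, ast.Call) and isinstance(node.func, ast.Name)}
    return defined - called

sample = """
def used(): pass
def dead(): pass
used()
"""
print(find_uncalled_functions(sample))  # {'dead'}
```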

7. Technical Debt Hotspots

Quick checks:

  • Count function parameters (5+ = refactor candidate)

  • Measure nesting depth visually (4+ = refactor candidate)

  • Look for boolean flags controlling function behavior

  • Logic that is correct today but will break under realistic load or scale

  • Deep nesting (more than 3-4 levels) that obscures control flow

  • Boolean parameter flags that change function behavior (use separate functions instead)

  • Functions with more than 5-6 parameters without a configuration object

  • Areas where a future requirement change would require modifying many unrelated files

  • Missing type hints in dynamically typed languages for complex functions

  • No documentation for public APIs or complex algorithms

  • Test coverage gaps for critical paths
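
A hypothetical sketch of the boolean-flag and parameter-count findings; the export functions and their knobs are invented for illustration.

```python
from dataclasses import dataclass

# Before: a flag argument forks behavior, and six positional parameters
# invite call-site mistakes.
def export(data, as_csv, path, delimiter, encoding, overwrite):
    ...

# After: one function per behavior, with the knobs gathered into a
# configuration object that has explicit defaults.
@dataclass
class ExportConfig:
    path: str
    delimiter: str = ","
    encoding: str = "utf-8"
    overwrite: bool = False

def export_csv(data, config: ExportConfig):
    ...

def export_json(data, config: ExportConfig):
    ...
```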


Imported: Output Format

Produce the audit report using exactly this structure. Do not omit sections. If a section has no findings, write "None identified."

Productivity Rules:

  • Lead with the 3-5 most critical findings that would cause production failures
  • Group related issues (e.g., "3 locations with hardcoded credentials" instead of listing separately)
  • Provide copy-paste-ready fixes where possible (exact code snippets)
  • Use severity tags consistently: `[CRITICAL]`, `[HIGH]`, `[MEDIUM]`, `[LOW]`

Audit Report

Input: [file name(s) or "code snippet"]
Assumptions: [list any assumptions made about context or environment]
Quick Stats: [X files, Y lines of code, Z language/framework]

Executive Summary (Read This First)

In 3-5 bullets, state the most important findings that determine whether this code can go to production:

- [CRITICAL/HIGH] One-line summary of the most severe issue
- [CRITICAL/HIGH] Second most severe issue
- [MEDIUM] Notable pattern that will cause future problems
- Overall: Deployable as-is / Needs fixes / Requires major rework

Critical Issues (Must Fix Before Production)

Problems that will or are very likely to cause failures, data loss, security incidents, or severe maintenance breakdown.

For each issue:

[CRITICAL] Short descriptive title
Location: filename.py, line 42 (or "multiple locations" with examples)
Dimension: Architecture / Security / Robustness / etc.
Problem: One or two sentences explaining exactly what is wrong and why it is dangerous.
Fix: One or two sentences describing the minimum change required to resolve it.
Code Fix (if applicable):
```python
# Before: problematic code
# After: corrected version
```
#### High-Risk Issues

Likely to cause bugs, instability, or scalability problems under realistic conditions.
Same format as Critical Issues, replacing `[CRITICAL]` with `[HIGH]`.

#### Maintainability Problems

Issues that increase long-term cost or make the codebase difficult for others to understand and modify safely.
Same format, replacing the tag with `[MEDIUM]` or `[LOW]`.

#### Production Readiness Score

Score: XX / 100


Provide a score using the rubric below, then write 2-3 sentences justifying it with specific reference to the most impactful findings.

| Range  | Meaning                                                                |
| ------ | ---------------------------------------------------------------------- |
| 0-30   | Not deployable. Critical failures are likely under normal use.         |
| 31-50  | High risk. Significant rework required before any production exposure. |
| 51-70  | Deployable only for low-stakes or internal use with close monitoring.  |
| 71-85  | Production-viable with targeted fixes. Known risks are bounded.        |
| 86-100 | Production-ready. Minor improvements only.                             |

**Scoring Algorithm:**

- Start at 100 points
- For each CRITICAL issue: -15 points (security: -20)
- For each HIGH issue: -8 points
- For each MEDIUM issue: -3 points
- For pervasive patterns (3+ similar issues): -5 additional points
- Floor: 0, Ceiling: 100
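
Transcribed as a helper function (a sketch: the pervasive-pattern penalty is simplified to any severity bucket holding three or more findings):

```python
def readiness_score(critical=0, critical_security=0, high=0, medium=0):
    """Apply the scoring rubric above; returns an int clamped to [0, 100]."""
    score = 100
    score -= 15 * critical + 20 * critical_security
    score -= 8 * high
    score -= 3 * medium
    # Pervasive-pattern penalty: 3+ similar issues cost 5 additional points.
    for bucket in (critical + critical_security, high, medium):
        if bucket >= 3:
            score -= 5
    return max(0, min(100, score))

print(readiness_score(critical=1, high=2, medium=4))  # 52
```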


#### Refactoring Priorities

List the top 3-5 changes in order of impact. Each item must reference a specific finding from above.

  1. [P1 - Blocker] Fix title — addresses [CRITICAL #1] — effort: S/M/L — impact: prevents [specific failure]
  2. [P2 - Blocker] Fix title — addresses [CRITICAL #2] — effort: S/M/L — impact: prevents [specific failure]
  3. [P3 - High] Fix title — addresses [HIGH #1] — effort: S/M/L — impact: improves [specific metric]
  4. [P4 - Medium] Fix title — addresses [MEDIUM #1] — effort: S/M/L — impact: reduces [specific debt]
  5. [P5 - Optional] Fix title — addresses [LOW #1] — effort: S/M/L — impact: nice-to-have

Effort scale: S = < 1 day, M = 1-3 days, L = > 3 days.

**Quick Wins (fix in <1 hour):**
List any issues that can be resolved immediately with minimal effort:
  • [Issue name]: [one-line fix description]

---

#### Imported: Task-Specific Inputs

Before auditing, if not already provided, ask:

1. **Code or files**: Share the source code to audit. Accepted: single file, multiple files, directory listing, or snippet.
2. **Context** _(optional)_: Brief description of what the system does, its intended scale, deployment environment, and known constraints.
3. **Target environment** _(optional)_: Target runtime (e.g., production web service, CLI tool, data pipeline). Used to calibrate risk severity.
4. **Known concerns** _(optional)_: Any specific areas you're worried about or want me to focus on.

**If context is missing, assume:**
- Language/framework is evident from the code
- Deployment target is production web service (most common)
- Scale expectations are moderate (100-1000 users) unless code suggests otherwise

---

#### Imported: Limitations

- Use this skill only when the task clearly matches the scope described above.
- Do not treat the output as a substitute for environment-specific validation, testing, or expert review.
- Stop and ask for clarification if required inputs, permissions, safety boundaries, or success criteria are missing.