# Antigravity-awesome-skills: project-skill-audit

Audit a project and recommend the highest-value skills to add or update.

Clone the source repository:

```bash
git clone https://github.com/sickn33/antigravity-awesome-skills
```

Or install just this skill into `~/.claude/skills`:

```bash
T=$(mktemp -d) && git clone --depth=1 https://github.com/sickn33/antigravity-awesome-skills "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/project-skill-audit" ~/.claude/skills/sickn33-antigravity-awesome-skills-project-skill-audit-cc8b76 && rm -rf "$T"
```

Skill file: `skills/project-skill-audit/SKILL.md`

# Project Skill Audit
## Overview
Audit the project's real recurring workflows before recommending skills. Prefer evidence from memory, rollout summaries, existing skill folders, and current repo conventions over generic brainstorming.
Recommend updates before new skills when an existing project skill is already close to the needed behavior.
## When to Use
- When the user asks what skills a project needs or which existing skills should be updated.
- When recommendations should be grounded in project history, memory files, and local conventions.
## Workflow

- Map the current project surface. Identify the repo root and read the most relevant project guidance first, such as `AGENTS.md`, `README.md`, roadmap/ledger files, and local docs that define workflows or validation expectations.
- Build the memory/session path first. Resolve the memory base as `$CODEX_HOME` when set, otherwise default to `~/.codex`. Use these locations:
  - memory index: `$CODEX_HOME/memories/MEMORY.md` or `~/.codex/memories/MEMORY.md`
  - rollout summaries: `$CODEX_HOME/memories/rollout_summaries/`
  - raw sessions: `$CODEX_HOME/sessions/` or `~/.codex/sessions/`
- Read the project's past sessions in this order. If the runtime prompt already includes a memory summary, start there. Then search `MEMORY.md` for:
  - repo name
  - repo basename
  - current `cwd`
  - important module or file names

  Open only the 1-3 most relevant rollout summaries first. Fall back to raw session JSONL only when the summaries are missing the exact evidence you need.
- Scan existing project-local skills before suggesting anything new. Check these locations relative to the current repo root: `.agents/skills`, `.codex/skills`, and `skills`. Read both `SKILL.md` and `agents/openai.yaml` when present.
- Compare project-local skills against recurring work. Look for repeated patterns in past sessions:
  - repeated validation sequences
  - repeated failure shields
  - recurring ownership boundaries
  - repeated root-cause categories
  - workflows that repeatedly require the same repo-specific context

  If the pattern appears repeatedly and is not already well captured, it is a candidate skill.
- Separate *new skill* from *update existing skill*. Recommend an update when an existing skill is already the right bucket but has stale triggers, missing guardrails, outdated paths, weak validation instructions, or incomplete scope. Recommend a new skill only when the workflow is distinct enough that stretching an existing skill would make it vague or confusing.
- Check for overlap with global skills only after reviewing project-local skills. Use `$CODEX_HOME/skills` and `$CODEX_HOME/skills/public` to avoid proposing project-local skills for workflows already solved well by a generic shared skill. Do not reject a project-local skill just because a global skill exists; project-specific guardrails can still justify a local specialization.
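The memory/session path resolution from the steps above can be sketched as a small shell fragment. This is a sketch only; it assumes the `$CODEX_HOME`/`~/.codex` layout described in this document:

```shell
# Resolve the memory base: $CODEX_HOME when set, ~/.codex otherwise.
MEM_BASE="${CODEX_HOME:-$HOME/.codex}"

# Locations used by the audit.
MEMORY_INDEX="$MEM_BASE/memories/MEMORY.md"
ROLLOUT_DIR="$MEM_BASE/memories/rollout_summaries"
SESSIONS_DIR="$MEM_BASE/sessions"

echo "memory index:      $MEMORY_INDEX"
echo "rollout summaries: $ROLLOUT_DIR"
echo "raw sessions:      $SESSIONS_DIR"
```

The `${VAR:-default}` expansion keeps the two documented locations in one place, so every later search can reference `$MEM_BASE` instead of repeating the fallback logic.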
## Session Analysis

1. Search the memory index first
   - Search `MEMORY.md` with `rg` using the repo name, basename, and `cwd`.
   - Prefer entries that already cite rollout summaries with the same repo path.
   - Capture:
     - repeated workflows
     - validation commands
     - failure shields
     - ownership boundaries
     - milestone or roadmap coupling
2. Open targeted rollout summaries
   - Open the most relevant summary files under `memories/rollout_summaries/`.
   - Prefer summaries whose filenames, `cwd`, or keywords match the current project.
   - Extract:
     - what the user asked for repeatedly
     - what steps kept recurring
     - what broke repeatedly
     - what commands proved correctness
     - what project-specific context had to be rediscovered
3. Use raw sessions only as a fallback
   - Only search `sessions/` JSONL files if rollout summaries are missing a concrete detail.
   - Search by:
     - exact `cwd`
     - repo basename
     - thread ID from a rollout summary
     - specific file paths or commands
   - Use raw sessions to recover exact prompts, command sequences, diffs, or failure text, not to replace the summary pass.
4. Turn session evidence into skill candidates
   - A candidate *new skill* should correspond to a repeated workflow, not just a repeated topic.
   - A candidate *skill update* should correspond to a workflow already covered by a local skill whose triggers, guardrails, or validation instructions no longer match the recorded sessions.
   - Prefer concrete evidence such as:
     - "this validation sequence appeared in 4 sessions"
     - "this ownership confusion repeated across extractor and runtime fixes"
     - "the same local script and telemetry probes had to be rediscovered repeatedly"
## Recommendation Rules

- Recommend a new skill when:
  - the same repo-specific workflow or failure mode appears multiple times across sessions
  - success depends on project-specific paths, scripts, ownership rules, or validation steps
  - the workflow benefits from strong defaults or failure shields
- Recommend an update when:
  - an existing project-local skill already covers most of the need
  - `SKILL.md` and `agents/openai.yaml` drift from each other
  - paths, scripts, validation commands, or milestone references are stale
  - the skill body is too generic to reflect how the project is actually worked on
- Do not recommend a skill when:
  - the pattern is a one-off bug rather than a reusable workflow
  - a generic global skill already fits with no meaningful project-specific additions
  - the workflow has not recurred enough to justify the maintenance cost
## What To Scan

- Past sessions and memory:
  - memory summary already in context, if any
  - `$CODEX_HOME/memories/MEMORY.md` or `~/.codex/memories/MEMORY.md`
  - the 1-3 most relevant rollout summaries for the current repo
  - raw `$CODEX_HOME/sessions` or `~/.codex/sessions` JSONL files only if summaries are insufficient
- Project-local skill surface:
  - `./.agents/skills/*/SKILL.md`
  - `./.agents/skills/*/agents/openai.yaml`
  - `./.codex/skills/*/SKILL.md`
  - `./skills/*/SKILL.md`
- Project conventions:
  - `AGENTS.md`
  - `README.md`
  - roadmap, ledger, architecture, or validation docs
  - current worktree or recently touched areas if needed for context
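The scan surface above can be enumerated with a quick shell sketch. It assumes it is run from the repo root; missing directories and unmatched globs are simply skipped:

```shell
# Project-local skill surface, matching the glob patterns listed above.
for pattern in \
  ./.agents/skills/*/SKILL.md \
  ./.agents/skills/*/agents/openai.yaml \
  ./.codex/skills/*/SKILL.md \
  ./skills/*/SKILL.md; do
  # Unmatched globs stay literal in POSIX sh; the existence test skips them.
  if [ -e "$pattern" ]; then echo "$pattern"; fi
done

# Global skill surface, for the overlap check in the Workflow section.
MEM_BASE="${CODEX_HOME:-$HOME/.codex}"
for dir in "$MEM_BASE/skills" "$MEM_BASE/skills/public"; do
  if [ -d "$dir" ]; then find "$dir" -maxdepth 2 -name 'SKILL.md'; fi
done
```

Keeping the patterns in one loop makes it easy to diff the project-local surface against the global one before recommending anything new.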
## Output Expectations

Return a compact audit with:

- **Existing skills**: list the project-local skills found and the main workflow each one covers.
- **Suggested updates**: for each update candidate, include:
  - skill name
  - why it is incomplete or stale
  - the highest-value change to make
- **Suggested new skills**: for each new skill, include:
  - recommended skill name
  - why it should exist
  - what would trigger it
  - the core workflow it should encode
- **Priority order**: rank the top recommendations by expected value.
## Naming Guidance
- Prefer short hyphen-case names.
- Use project prefixes for project-local skills when that improves clarity.
- Prefer verb-led or action-oriented names over vague nouns.
## Failure Shields
- Do not invent recurring patterns without session or repo evidence.
- Do not recommend duplicate skills when an update to an existing skill would suffice.
- Do not rely on a single memory note if the current repo clearly evolved since then.
- Do not bulk-load all rollout summaries; stay targeted.
- Do not skip rollout summaries and jump straight to raw sessions unless the summaries are insufficient.
- Do not recommend skills from themes alone; recommendations should come from repeated procedures, repeated validation flows, or repeated failure modes.
- Do not confuse a project's current implementation tasks with its reusable skill needs.
## Follow-up

If the user asks to actually create or update one of the recommended skills, switch to `$skill-creator` and implement the chosen skill rather than continuing the audit.
## Limitations
- Use this skill only when the task clearly matches the scope described above.
- Do not treat the output as a substitute for environment-specific validation, testing, or expert review.
- Stop and ask for clarification if required inputs, permissions, safety boundaries, or success criteria are missing.