Awesome-omni-skill bootstrap
Generate AI tool configuration for an existing project. Explores the codebase and produces context files, path-scoped pattern rules, landmine rules, and agents for Claude Code, VS Code Copilot, and Cursor. Run once per project.
git clone https://github.com/diegosouzapw/awesome-omni-skill
T=$(mktemp -d) && git clone --depth=1 https://github.com/diegosouzapw/awesome-omni-skill "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/data-ai/bootstrap" ~/.claude/skills/diegosouzapw-awesome-omni-skill-bootstrap-ac774a && rm -rf "$T"
skills/data-ai/bootstrap/SKILL.md

Project Bootstrap — AI Tool Configuration Generator
Purpose
Explore an existing project and generate native configuration for the AI tools in use so they work effectively from the first task. Runs once, produces artifacts that each tool auto-loads and auto-enforces.
Outputs per tool:
| Artifact | Claude Code | VS Code Copilot | Cursor |
|---|---|---|---|
| Project context | ✓ | ✓ | — |
| Pattern rules | ✓ | ✓ | ✓ |
| Landmine rules | ✓ | ✓ | ✓ |
| Agents | ✓ | ✓ | — |
| Module context | ✓ (in module dir) | — | — |
| Universal `AGENTS.md` | ✓ | ✓ | ✓ |
Pattern and landmine rules use the same content — only the frontmatter format differs per tool.
Only artifacts for enabled targets are generated. Check `.north-starr.json` at the project root to see which tools are enabled. If the file is missing, generate for all tools (backward compatible).
Tool Target Preferences
Before generating any output, check for `.north-starr.json` in the project root:

    { "version": 1, "targets": ["claude", "copilot", "cursor"] }
- If the file exists, only generate artifacts for the listed targets
- If the file is missing, ask the user which tools they use, save their answer to `.north-starr.json`, then generate only for those targets
- `AGENTS.md` is always generated regardless of preferences (it's universal)
- Valid targets: `"claude"`, `"copilot"`, `"cursor"`
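The target-selection logic above can be sketched in shell. This is a minimal sketch that assumes the one-line JSON shape from the example; a real implementation would use a JSON parser such as `jq`. The function name `enabled_targets` is illustrative.

```shell
# Sketch: resolve enabled targets from .north-starr.json, falling back to all tools.
# Assumes the one-line JSON shape shown above; use a real JSON parser (e.g. jq) in practice.
enabled_targets() {
  if [ -f .north-starr.json ]; then
    # Pull the contents of the "targets" array and strip quotes/commas
    sed -n 's/.*"targets":[[:space:]]*\[\(.*\)\].*/\1/p' .north-starr.json |
      tr -d '" ' | tr ',' ' '
  else
    echo "claude copilot cursor"  # no file: generate for all tools
  fi
}
```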
When to Use
- First time working on an existing project with any AI coding tool
- When project-specific AI configuration is empty or missing
- When onboarding to a new codebase
Prerequisites
- The project root must be accessible
- Git history is helpful but not required (used for churn analysis)
Content Depth
Generated rules must carry enough depth to be genuinely useful. Use two content structures from the project's knowledge base:
- Pattern structure (`skills/_references/patterns/_TEMPLATE.md`) — for conventions and reusable approaches. Each pattern rule follows the full template: When to Use, Problem It Solves, Core Approach with step-by-step code examples, Best Practices, Common Mistakes with wrong/fix code, Variations, Related patterns and landmines.
- Landmine structure (`skills/_references/landmines/_TEMPLATE.md`) — for danger zones and known traps. Each landmine rule follows the full template: Severity, Symptoms, Root Cause, The Trap (why devs fall in), Safe Approach (Don't/Do with code), Validation, Prevention, Related patterns and landmines.
Line limits:
- Context files (CLAUDE.md, AGENTS.md, copilot-instructions.md): MUST stay under 100 lines (max 125 if critical context would be lost). Split into multiple scoped files rather than exceeding.
- Pattern and landmine rules: Should be as detailed as needed to follow the full template — typically 50-150 lines. Depth and working code examples matter more than brevity. These are the project's knowledge base.
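The context-file budget can be checked mechanically. A small sketch (the file list matches the context files named above; the function name is illustrative):

```shell
# Sketch: flag context files that blow the 100-line budget (125 is the hard ceiling).
check_line_budget() {
  for f in CLAUDE.md AGENTS.md .github/copilot-instructions.md; do
    [ -f "$f" ] || continue
    lines=$(wc -l < "$f")
    # Report anything over the soft limit so it can be split into scoped files
    [ "$lines" -gt 100 ] && echo "over budget ($lines lines): $f"
  done
  return 0
}
```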
Workflow
Step 1: Explore & Detect
Goal: Understand the shape and stack of the project before reading code.
Actions:
- Identify the technology stack from config files at the root (package managers, build tools, language configs, CI/CD)
- Map the top-level directory structure — modules, packages, feature areas
- Identify entry points (main files, app delegates, index files, server entry points)
- Check for existing documentation (README, docs/, inline doc comments)
- Note the build system, test runner, and deployment mechanism
Output: Mental model of project structure. No files written yet.
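The stack-detection action above can be sketched as a root-level marker-file scan. The marker list below is a common-convention sample, not an exhaustive catalogue:

```shell
# Sketch: detect the stack from root-level marker files.
detect_stack() {
  for marker in package.json Cargo.toml go.mod pom.xml build.gradle \
                pyproject.toml requirements.txt Gemfile Package.swift Makefile; do
    [ -f "$marker" ] && echo "found: $marker"
  done
  return 0
}
```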
Step 1.5: Validate Declared Config (if present)
Goal: If `/architect` was run previously, validate declarations against reality rather than starting from scratch.
Actions:
- Check root context files (`CLAUDE.md`, `AGENTS.md`, `.github/copilot-instructions.md`) for `[DECLARED]` tags
- Check `.claude/rules/`, `.github/instructions/`, `.cursor/rules/` for `<!-- Generated by /architect — [DECLARED] -->` markers
- If `[DECLARED]` tags are found, enter validation mode for each declared section:
Validation outcomes per section:
- CONFIRMED — the code matches the declaration → remove the `[DECLARED]` tag, keep the content
- DIVERGED — the code differs from the declaration → present both versions, ask the user which to keep
- NOT YET — no code exists yet for this declaration → keep the `[DECLARED]` tag as-is
Actions in validation mode:
- For each `[DECLARED]` section in context files:
  - Compare the declared architecture, grain, module map, and conventions against what the code actually shows
  - Mark each section CONFIRMED, DIVERGED, or NOT YET
- For each `[DECLARED]` rule:
  - Check if the convention is followed in actual code
  - CONFIRMED rules get their `[DECLARED]` tag removed
  - DIVERGED rules get flagged for user review
  - NOT YET rules (no matching code exists) keep their tag
- Present a validation summary before making any changes
- Continue to Step 2 for any areas not covered by declarations (new modules, undeclared patterns)
If no `[DECLARED]` tags are found, skip this step and proceed normally.
Step 2: Identify Architecture & Grain
Goal: Understand the architectural pattern and which direction changes flow easily.
Actions:
- Determine the high-level architecture:
- Pattern: MVC, MVVM, Clean Architecture, Hexagonal, etc.
- Topology: monolith, microservices, modular monolith, serverless, etc.
- Communication: client-server, event-driven, message-based, etc.
- Map layers and their responsibilities (presentation, domain, data, infrastructure)
- Identify the grain — which changes are easy vs. hard:
- Adding a new feature: what files must change?
- Adding a new data model: what layers are affected?
- Changes that go against the grain are friction sources
- Note framework conventions that shape the architecture:
- Dependency injection approach
- State management strategy
- Navigation / routing pattern
- Error handling conventions
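One way to probe the grain empirically is commit co-change: files that repeatedly change in the same commits as a given entry point reveal what "adding a feature" really touches. A rough sketch, assuming git history is available (the function name is illustrative):

```shell
# Sketch: files most often committed together with $1 — a rough coupling/grain signal.
cochanged() {
  git log --format='%H' -- "$1" |
    while read -r c; do
      # List the files touched by each commit that also touched $1
      git show --name-only --pretty=format: "$c"
    done |
    grep -v -e "^$1\$" -e '^$' | sort | uniq -c | sort -rn | head -10
}
```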
Step 3: Discover Patterns
Goal: Build a comprehensive catalogue of "how things are done here" so new code follows conventions and the knowledge survives across sessions.
Scope: Analyze ALL modules, not a sample. Walk the entire codebase systematically. A 3-5 module sample misses cross-cutting patterns, infrastructure conventions, and operational practices that only surface when looking broadly.
Actions:
- Map every module — list all top-level directories/packages. Group them by role (feature modules, shared libraries, infrastructure, configuration, tests, scripts, deployment).
- Scan each group for recurring patterns. Look for conventions in these areas — not all will apply to every project, focus on what the codebase actually uses:
- Structure — how features, modules, or components are organized and laid out
- Data flow — how data enters, moves through, and exits the system
- Dependencies — how components get what they need (injection, imports, configuration)
- Error handling — how errors are caught, surfaced, and recovered from
- State — how state is managed, shared, and synchronized
- External boundaries — how the system communicates with anything outside itself
- Testing — how tests are organized, what's mocked, what's tested end-to-end
- Build & deploy — how the project is built, packaged, and shipped
- Naming — file names, types, functions, variables, constants
- Look for shared utilities, base classes, protocols, or helpers reused across modules — these often encode implicit patterns worth documenting explicitly.
- Cross-reference patterns — note which patterns work together and which are alternatives to each other.
- For each discovered pattern, capture using the full pattern structure from `skills/_references/patterns/_TEMPLATE.md`:
  - When to Use / Not Good For — specific situations
  - Problem It Solves — what goes wrong without it, what improves with it
  - Core Approach — step-by-step with code examples
  - Best Practices — do this, why
  - Common Mistakes — wrong approach with code, fix with code
  - Variations — alternative forms of the pattern found in the codebase
  - Related — links to other patterns and landmines
Aim for completeness. A thorough bootstrap should discover 15-40 patterns depending on project complexity. If you find fewer than 10, you likely stopped too early — revisit areas beyond the core feature code (build, deploy, testing, configuration, shared infrastructure).
Step 4: Detect Danger Zones
Goal: Build a comprehensive map of every area where developers can get burned — from code-level traps to operational pitfalls.
Scope: Look everywhere, not just code hotspots. Danger zones exist in configuration, deployment, infrastructure, third-party integrations, and operational procedures — not only in source code.
Actions:
- Complexity hotspots — large files, deeply nested logic, functions with many parameters, types with many responsibilities. These areas break easily and are hard to modify safely.
- Misleading abstractions — code that doesn't do what its name suggests, dead code paths, unused parameters that look required. These trap developers into wrong assumptions.
- Silent failures — swallowed errors, empty catch blocks, default fallbacks that hide problems. These make debugging nearly impossible.
- Developer warnings — search for `TODO`, `FIXME`, `HACK`, `XXX`, `WORKAROUND`, `TEMPORARY` comments. Each one is a documented landmine left by a previous developer.
- Git churn (if git history available): `git log --since="6 months ago" --pretty=format: --name-only | sort | uniq -c | sort -rn | head -20` — files changed most frequently often contain instability or poorly understood behavior.
- External boundaries — anywhere the system communicates with something outside itself (APIs, services, SDKs, hardware, file systems). These are where assumptions break and failures cascade.
- Configuration sensitivity — settings, credentials, feature flags, or environment-specific behavior where a wrong value causes silent or catastrophic failure.
- Resource management — anything the system allocates, opens, or acquires that must be released, closed, or returned. Leaks here cause gradual degradation.
- Test gaps — modules or features with no test coverage. Untested code is a landmine waiting to detonate.
- For each danger zone, capture using the full landmine structure from `skills/_references/landmines/_TEMPLATE.md`:
  - Severity — CRITICAL / HIGH / MEDIUM / LOW based on real-world impact
  - Symptoms — observable signs you've hit this
  - Root Cause — technical explanation of why this happens
  - The Trap — why it seems correct, what makes it non-obvious
  - Safe Approach — Don't (dangerous code with explanation) / Do (safe code with explanation)
  - Validation — how to verify you're safe, detection in existing code
  - Prevention — habits, code review checks, and validation steps
  - Related — safe patterns that avoid this, other related landmines
Aim for completeness. A thorough bootstrap should discover 5-15 landmines depending on project maturity. If you find fewer than 3, you likely stopped too early — revisit areas beyond the core feature code.
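The developer-warning sweep from this step can be sketched with a single grep (assumes a grep supporting `-E` and `--exclude-dir`, as GNU and BSD grep both do):

```shell
# Sketch: surface developer-warning comments across the tree, skipping VCS metadata.
dev_warnings() {
  grep -rnE --exclude-dir=.git 'TODO|FIXME|HACK|XXX|WORKAROUND|TEMPORARY' "${1:-.}"
}
```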
Step 5: Generate Configuration
Goal: Produce configuration files for the AI tools in use. Generate all sections below. The content is the same — only the file locations differ per tool.
A. Project Context (root-level)
Write the project context to enabled target locations:
- `CLAUDE.md` — Claude Code (auto-loaded) — generate if `claude` target is enabled
- `AGENTS.md` — Universal (works with any AI tool) — always generated
- `.github/copilot-instructions.md` — VS Code Copilot (auto-loaded) — generate if `copilot` target is enabled
All context files get the same content:

    # [Project Name]

    [One-sentence project description]

    ## Tech Stack
    [List languages with versions, frameworks, key dependencies, build tools, package manager, test runner, CI/CD — be specific, not generic]

    ## Architecture
    [Name the pattern (MVVM, Clean, etc.), topology (monolith, modular, etc.). List each layer with its responsibility and dependency direction. Include DI approach and state management strategy.]

    ## Grain
    [What changes easily (e.g. adding a new feature screen) vs. what is hard (e.g. changing navigation pattern). State what to avoid going against and why.]

    ## Module Map
    [List each top-level module with one-line purpose. Show key dependencies between modules. Note shared infrastructure.]

    ## Key Patterns
    [Summarize all discovered patterns from Step 3. For each: name, one-line description, which modules use it. Reference the corresponding pattern rule file for full details.]

    ## Known Landmines
    [Summarize all danger zones from Step 4. For each: name, severity (CRITICAL/HIGH/MEDIUM/LOW), one-line description. Reference the corresponding landmine rule file for full details.]

    ## Vocabulary
    [Terms with project-specific meanings. Format: "Term — definition". Only genuinely ambiguous terms.]

    <!-- [NORTH-STARR:how-to-approach-tasks] -->
    ## How to Approach Tasks

    **Before starting any task, assess its complexity:**

    | Complexity | Signals | Action |
    |------------|---------|--------|
    | **Low** | Single file, straightforward change, well-understood area | Implement directly |
    | **Medium** | 3+ files, unfamiliar area, non-trivial logic, or integration points | Run `/invert` then `/plan` automatically |
    | **High** | Wide blast radius, irreversible changes, unfamiliar domain, or critical system | Run `/invert` then `/plan` automatically |

    **For Medium and High complexity tasks — do this automatically, don't wait for the user to ask:**
    1. Run `/invert` — identify risks, edge cases, and failure modes
    2. Run `/plan` — break the work into tracked tasks, using invert's risks as constraints
    3. Execute the plan with progress tracking and session notes

    **Resuming work:** Always check `.plans/` for active implementation plans before starting new work. If an active plan exists, resume it instead of starting fresh.
    <!-- [/NORTH-STARR:how-to-approach-tasks] -->

    <!-- [NORTH-STARR:auto-learn] -->
    ## When to Learn Automatically

    **Run `/learn` automatically — do not wait for the user to ask — when any of these signals occur during a session:**

    | Signal | Example | What to Capture |
    |--------|---------|-----------------|
    | **User corrects your approach** | "No, don't do it that way — use X instead" | **Pattern** — the correct approach so it's followed next time |
    | **Same fix requested twice** | User asks you to fix the same issue or area more than once in a session | **Landmine** — the fragile area and why it keeps breaking |
    | **Your change breaks something** | Tests fail, build breaks, or existing behavior regresses after your edit | **Landmine** — what broke and why, so it's avoided next time |
    | **User rejects generated code** | "That's wrong", "revert that", or user manually undoes your change | **Pattern or Landmine** — capture what was wrong and what's correct |
    | **You discover an undocumented convention** | Code follows a pattern not captured in any rule or context file | **Pattern** — document it before it's forgotten |
    | **You hit a trap not in any landmine rule** | Something looked safe but caused unexpected problems | **Landmine** — document the trap for future sessions |

    **How auto-learn works:**
    1. Detect the signal during normal work
    2. Finish the immediate fix or correction first
    3. Then run `/learn` to capture the insight as a pattern or landmine rule
    4. If a pattern or landmine already exists for this area, update it — do not create duplicates. Prompt the user when the update contradicts existing content.
    <!-- [/NORTH-STARR:auto-learn] -->
If any of these files already exist with project-specific content, merge rather than overwrite.
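A minimal existence check can decide create vs. merge up front (the merge itself stays a judgment call; the function name is illustrative):

```shell
# Sketch: choose create vs. merge per context file.
write_mode() {
  # Non-empty existing file means the agent should merge, not overwrite
  if [ -s "$1" ]; then echo "merge"; else echo "create"; fi
}
```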
B. Module-Level Context Files
For each danger zone or complex module found in Step 4, write context in that directory:
`CLAUDE.md` — Claude Code (auto-loaded when working in that directory):

    # [Module Name]

    [What this module does, how it fits in the architecture]

    ## Caution
    [Specific warnings: race conditions, fragile logic, missing tests, known bugs]

    ## Patterns
    [How this module does things, if different from the project defaults]
C. Pattern Rules & Landmine Rules
Generate pattern and landmine rules for each enabled target. The content is the same — only the file location and frontmatter format differ per tool.
File formats per tool (generate only for enabled targets):
Claude Code — `.claude/rules/*.md` (if `claude` target enabled):

    ---
    paths: ["glob/pattern/**"]
    ---
    [Rule content]

VS Code Copilot — `.github/instructions/*.instructions.md` (if `copilot` target enabled):

    ---
    applyTo: "glob/pattern/**"
    ---
    [Same rule content]

Cursor — `.cursor/rules/*.mdc` (if `cursor` target enabled):

    ---
    globs: glob/pattern/**
    ---
    [Same rule content]
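Since only the frontmatter differs, a rule written once for Claude Code can be mechanically re-targeted. A sketch assuming the simple single-glob frontmatter shown above (function names are illustrative; rules with richer frontmatter need a YAML-aware tool):

```shell
# Sketch: rewrite Claude-style frontmatter for the other targets (stdin -> stdout).
to_copilot() { sed 's/^paths: \["\(.*\)"\]$/applyTo: "\1"/'; }
to_cursor()  { sed 's/^paths: \["\(.*\)"\]$/globs: \1/'; }
```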
Pattern Rules — one rule file per pattern discovered in Step 3.
Follow the full pattern template from `skills/_references/patterns/_TEMPLATE.md`. Each pattern rule file must include:
- Category and Language/Framework
- `## When to Use` — Good For / Not Good For
- `## Problem It Solves` — what goes wrong without it, what improves with it
- `## The Pattern` — core idea, step-by-step with code examples, complete working example
- `## Best Practices` — do this, why
- `## Common Mistakes` — wrong code with explanation, fix code with explanation
- `## Variations` — alternative forms found in the codebase
- `## Testing This Pattern` — how to verify correct application
- `## Performance Considerations`
- `## Related` — links to related pattern and landmine rule files
File naming: `[descriptive-name]-pattern.md` (e.g. `caching-pattern.md`, `repository-pattern.md`)
Landmine Rules — one rule file per danger zone discovered in Step 4.
Follow the full landmine template from `skills/_references/landmines/_TEMPLATE.md`. Each landmine rule file must include:
- Severity (CRITICAL / HIGH / MEDIUM / LOW) and Category
- `## Quick Summary` — one-line description
- `## Symptoms` — observable signs you've hit this
- `## Root Cause` — technical explanation of why this happens
- `## The Trap` — why developers fall in, what makes it non-obvious
- `## Safe Approach` — Don't (dangerous code with explanation) / Do (safe code with explanation)
- `## Validation` — how to verify you're safe, detection patterns in existing code
- `## Real-World Impact` — what actually happens when this goes wrong
- `## Prevention` — habits, code review checks, validation steps
- `## Related` — safe pattern rules that avoid this, other related landmine rules
File naming: `[descriptive-name].md` (e.g. `broken-exists-method.md`, `silent-auth-failure.md`)
What to generate rules for:
Create one rule file per pattern or landmine discovered in Steps 3 and 4. Patterns become pattern rules, danger zones become landmine rules. The specific concerns depend on the project — generate rules only for what was actually found in the codebase.
Guidelines:
- Generate only rules that reflect real patterns or dangers found in the codebase — never invent conventions
- Use specific path globs — broad rules waste context on irrelevant files
- Keep each rule file focused on one concern
- Include code examples in every rule — abstract descriptions without code are not actionable
- Pattern and landmine rules should be as detailed as the templates require — typically 50-150 lines. Depth matters.
- The content is the same across tools — only the frontmatter format differs
- Include a `_TEMPLATE.md` in the rules directory of each tool for future contributions via `/learn`
D. Agents
Generate agents for each enabled target that supports them:
Claude Code — `.claude/agents/*.md` (if `claude` target enabled):

    ---
    name: [project]-explorer
    description: Deep exploration and analysis of the [project] codebase
    model: sonnet
    tools: Read, Glob, Grep
    memory: project
    ---
VS Code Copilot — `.github/agents/*.agent.md` (if `copilot` target enabled):

    ---
    name: [project]-explorer
    description: Deep exploration and analysis of the [project] codebase
    tools: codebase
    ---
The agent prompt should include:
- The discovered architecture and grain
- Key modules and their relationships
- Known danger zones to watch for
Generate additional agents only if the project clearly warrants them. Default to one.
Post-Bootstrap Checklist
- `.north-starr.json` exists (created if missing during this run)
- `AGENTS.md` at root (always)
- `CLAUDE.md` at root (if `claude` target enabled)
- `.github/copilot-instructions.md` (if `copilot` target enabled)
- Module-level `CLAUDE.md` for each identified danger zone (if `claude` target enabled)
- Pattern rules in enabled tool formats — aim for 15-40 depending on project complexity
- Landmine rules in enabled tool formats — aim for 5-15 depending on project maturity
- `_TEMPLATE.md` in each enabled tool's rules directory for future contributions
- At least one project-tuned agent per enabled tool that supports agents
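The checklist can be spot-checked mechanically. A sketch, assuming `claude` and `copilot` are the enabled targets (adjust the artifact list per `.north-starr.json`):

```shell
# Sketch: verify expected bootstrap artifacts exist (claude + copilot assumed enabled).
verify_bootstrap() {
  missing=0
  for f in .north-starr.json AGENTS.md CLAUDE.md \
           .github/copilot-instructions.md .claude/rules .github/instructions; do
    [ -e "$f" ] || { echo "missing: $f"; missing=1; }
  done
  return "$missing"
}
```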
Output Summary
After completing all steps, present:

    ## Bootstrap Complete

    **Project:** [name]
    **Tech Stack:** [languages, frameworks, tools]
    **Architecture:** [pattern, layers]
    **Grain:** [easy changes vs. hard changes]
    **Enabled Tools:** [list from .north-starr.json]

    **Files Generated:**

    Universal:
    - AGENTS.md — [sections included]

    [Include only sections for enabled targets:]

    Claude Code: ← if claude target enabled
    - CLAUDE.md — [sections included]
    - [N] .claude/rules/ files — [N] patterns, [N] landmines — [list names]
    - [N] .claude/agents/ files — [list names]

    VS Code Copilot: ← if copilot target enabled
    - .github/copilot-instructions.md
    - [N] .github/instructions/ files — [N] patterns, [N] landmines — [list names]
    - [N] .github/agents/ files — [list names]

    Cursor: ← if cursor target enabled
    - [N] .cursor/rules/ files — [N] patterns, [N] landmines — [list names]

    Module-level: ← if claude target enabled
    - [N] CLAUDE.md files — [list directories]

    **Recommended First Read:** [2-3 files a newcomer should read first]
    **Key Danger Zones:** [areas to approach with caution]
Notes
- This skill is language-agnostic — it detects the project's stack and generates appropriate configuration
- This skill respects tool target preferences — check `.north-starr.json` for enabled targets, or ask and save if missing. Only generate artifacts for enabled tools.
- Be thorough — analyze the entire codebase, not a sample. Shallow bootstraps produce shallow configuration that misses real patterns and dangers
- Balance breadth and depth: understand the whole project broadly, then go deep on patterns and landmines with full code examples and operational detail
- Generate only rules for patterns that actually exist — never invent conventions
- If the project already has configuration, build on what exists rather than overwriting
- The generated configuration is a starting point — it improves through subsequent `/learn` invocations