Awesome-omni-skill generate-knowledge-base
Generate a product knowledge base from a codebase. Analyzes source code to create an Obsidian vault with architecture docs, API references, domain logic, data models, and infrastructure documentation. Use when the user asks to document a codebase, create a knowledge base, or generate product docs.
git clone https://github.com/diegosouzapw/awesome-omni-skill
T=$(mktemp -d) && git clone --depth=1 https://github.com/diegosouzapw/awesome-omni-skill "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/development/generate-knowledge-base" ~/.claude/skills/diegosouzapw-awesome-omni-skill-generate-knowledge-base && rm -rf "$T"
skills/development/generate-knowledge-base/SKILL.md

Generate Product Knowledge Base
You are generating a comprehensive product knowledge base from source code analysis. The output is an Obsidian vault with interconnected documents covering architecture, data models, APIs, business domains, and infrastructure.
Before You Start
Read these reference files to understand the expected output format and quality criteria:
- `references/document-formats.md` — the 4-part document structure with examples
- `references/category-patterns.md` — where to find information for each tech stack
- `references/quality-checklist.md` — self-review criteria for every document
Workflow
Execute these steps in order. Do not skip steps. Wait for user approval at Step 2 before generating documents.
Step 1 — Setup & Discovery
Gather project information:
- Product name: Ask the user for the product/project name. Use it in all generated doc titles and references.
- Codebase path: Use `$ARGUMENTS` if provided, otherwise ask the user. Resolve to an absolute path. Verify the directory exists.
- Output directory: Ask where to write the vault. Default: a sibling directory named `<product>-knowledge/` next to the codebase.
Detect the tech stack:
- Glob for marker files at the codebase root and one level deep:
  - `package.json`, `tsconfig.json` → JavaScript/TypeScript
  - `requirements.txt`, `pyproject.toml`, `setup.py`, `Pipfile` → Python
  - `pom.xml`, `build.gradle`, `build.gradle.kts` → Java/Kotlin
  - `go.mod` → Go
  - `Cargo.toml` → Rust
  - `Gemfile` → Ruby
  - `composer.json` → PHP
  - `mix.exs` → Elixir
  - `*.sln`, `*.csproj` → C#/.NET
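The marker-file scan above can be sketched as follows. This is a minimal illustration, not part of the skill itself; the function name `detect_stacks` and the `MARKERS` mapping are hypothetical, and the skill performs the same globbing with its own tools.

```python
from pathlib import Path

# Marker file → stack, mirroring the list above (illustrative only).
MARKERS = {
    "package.json": "JavaScript/TypeScript",
    "tsconfig.json": "JavaScript/TypeScript",
    "requirements.txt": "Python",
    "pyproject.toml": "Python",
    "setup.py": "Python",
    "Pipfile": "Python",
    "pom.xml": "Java/Kotlin",
    "build.gradle": "Java/Kotlin",
    "build.gradle.kts": "Java/Kotlin",
    "go.mod": "Go",
    "Cargo.toml": "Rust",
    "Gemfile": "Ruby",
    "composer.json": "PHP",
    "mix.exs": "Elixir",
}

def detect_stacks(root: str) -> set[str]:
    """Check the codebase root and one level deep for marker files."""
    found = set()
    for pattern in ("*", "*/*"):
        for p in Path(root).glob(pattern):
            if not p.is_file():
                continue
            if p.name in MARKERS:
                found.add(MARKERS[p.name])
            # Wildcard markers (*.sln, *.csproj) indicate C#/.NET.
            if p.suffix in (".sln", ".csproj"):
                found.add("C#/.NET")
    return found
```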
- Read each detected marker file to identify specific frameworks:
  - `package.json` → check `dependencies` for `next`, `express`, `nestjs`, `react`, etc.
  - `requirements.txt` / `pyproject.toml` → check for `django`, `fastapi`, `flask`, etc.
  - `build.gradle.kts` → check for `ktor`, `spring-boot`, etc.
  - `go.mod` → check for `gin`, `echo`, `fiber`, etc.
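For the JavaScript case, the dependency check amounts to reading the manifest and intersecting it with a known-framework list. A hypothetical sketch (names `detect_js_frameworks` and `KNOWN_FRAMEWORKS` are illustrative):

```python
import json
from pathlib import Path

# Framework package names mirroring the examples above (illustrative).
KNOWN_FRAMEWORKS = ("next", "express", "nestjs", "react")

def detect_js_frameworks(codebase_root: str) -> list[str]:
    """Read package.json and report which known frameworks it depends on."""
    pkg_file = Path(codebase_root) / "package.json"
    if not pkg_file.exists():
        return []
    pkg = json.loads(pkg_file.read_text())
    # Check both runtime and dev dependencies.
    deps = {**pkg.get("dependencies", {}), **pkg.get("devDependencies", {})}
    return [name for name in KNOWN_FRAMEWORKS if name in deps]
```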
- Map the directory structure:
  - Find top-level directories: `src/`, `app/`, `cmd/`, `internal/`, `lib/`, `pkg/`, `server/`, `services/`, `api/`, `routes/`, `controllers/`, `models/`, `views/`, `templates/`, `static/`, `public/`, `frontend/`, `backend/`, `infra/`, `terraform/`, `deploy/`, `migrations/`, `.github/`, `.circleci/`
  - Identify monorepo patterns: multiple `package.json` files, workspace configs, `services/` directories with independent modules
  - Find test directories: `test/`, `tests/`, `__tests__/`, `spec/`
  - Find SDK/client directories: `sdk/`, `client/`, `packages/`
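The monorepo check can be reduced to a simple heuristic: more than one manifest outside of dependency directories. A hypothetical sketch, assuming a JavaScript codebase (`looks_like_monorepo` is an illustrative name):

```python
from pathlib import Path

def looks_like_monorepo(root: str) -> bool:
    """Multiple package.json files outside node_modules suggest a workspace/monorepo."""
    manifests = [
        p for p in Path(root).rglob("package.json")
        if "node_modules" not in p.parts
    ]
    return len(manifests) > 1
```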
- Report findings to the user:

  Detected: [Language] with [Framework]
  Services: [list of services/modules found]
  Database: [type if detected from configs]
  Infrastructure: [CI/CD, cloud provider if found]
Step 2 — Plan the Vault
Based on detected tech stack, determine which categories to generate:
Always include:
- `architecture/` — system overview, tech stack, data flows
- `api/` — endpoint documentation (if HTTP routes found)
- `domains/` — business logic by domain
Include if relevant sources found:
- `data-model/` — if migration files, ORM models, or schema definitions found
- `infrastructure/` — if Terraform, CloudFormation, Docker, or CI configs found
- `sdks/` — if SDK or client library code found
- `services/` — if multiple backend services (monorepo/microservices)
- `integrations/` — if third-party service integrations found
Identify business domains by analyzing:
- Directory names under `src/`, `app/`, `internal/`, `services/`
- Route/controller groupings
- Model/entity names
- Service class names
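A first-pass version of this domain discovery can be sketched from directory names alone; real analysis would also weigh the routes, models, and service class names listed above. The helper name `candidate_domains` is hypothetical:

```python
from pathlib import Path

def candidate_domains(root: str) -> list[str]:
    """Propose business domains from subdirectory names of common source roots."""
    domains = set()
    for parent in ("src", "app", "internal", "services"):
        base = Path(root) / parent
        if base.is_dir():
            domains.update(p.name for p in base.iterdir() if p.is_dir())
    return sorted(domains)
```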
Present the plan to the user:
## Generation Plan

Product: [name]
Output: [path]
Tech Stack: [detected]

### Documents to Generate (~XX total)

**Architecture** (X docs)
- architecture/overview.md
- architecture/tech-stack.md
- ...

**API** (X docs)
- api/overview.md
- ...

**Domains** (X docs)
- domains/[domain-1]/overview.md
- ...

Shall I proceed?
Wait for explicit user approval before continuing.
Step 3 — Generate Architecture Docs
Generate 3-8 architecture documents by reading:
- README files, docker-compose files
- Entry points (`main.ts`, `app.py`, `Application.kt`, `main.go`, etc.)
- Infrastructure configs (Terraform, CloudFormation, Dockerfile)
- Build configs (`package.json` scripts, `Makefile`, `build.gradle.kts`)
Required documents:
- `architecture/overview.md` — system topology with a Mermaid diagram showing services, data stores, and external dependencies
- `architecture/tech-stack.md` — languages, frameworks, databases, queues, cloud services with version numbers where available
Optional documents (create if sufficient source material exists):
- `architecture/data-flow.md` — request lifecycle, async processing flows
- `architecture/backend-services.md` — service responsibilities, ports, deployment
- `architecture/frontend-apps.md` — frontend architecture, routing, state management
Step 4 — Generate Data Model Docs
Generate 2-10 data model documents by reading:
- Migration files (`migrations/`, `db/migrate/`, `alembic/`)
- ORM models (Django `models.py`, SQLAlchemy models, Exposed tables, GORM structs)
- Schema definitions (SQL files, Prisma schema, TypeORM entities)
- Seed data files
Required documents:
- `data-model/overview.md` — database architecture, schema organization
Per-entity documents:
- `data-model/<entity>.md` — table/collection schema with columns, types, constraints, relationships
Step 5 — Generate API Docs
Generate 3-20 API documents by reading:
- Route definitions (Express routers, Django URLs, Ktor routing, Go handlers)
- Controller/handler implementations
- OpenAPI/Swagger specs if available
- Middleware (auth, validation, rate limiting)
- Request/response types (protobuf, TypeScript interfaces, Pydantic models)
Required documents:
- `api/overview.md` — API architecture, authentication methods, common patterns
Per-resource documents:
- `api/<resource>.md` — endpoints for a resource group with routes, methods, request/response shapes, and auth requirements
If the codebase has multiple API servers (external + internal, public + admin), organize as:
- `api/external-api/overview.md`
- `api/internal-api/overview.md`
Step 6 — Generate Domain Docs
Generate 10-30 domain documents. This is the largest category and should be chunked.
For each identified business domain:
- Read service layer, domain models, and business logic files
- Generate `domains/<domain>/overview.md` — concept, lifecycle, state machine
- Generate `domains/<domain>/<feature>.md` — specific feature logic
Chunking strategy:
- Generate domains in batches of 5-10 documents
- After each batch, verify wikilinks between generated docs
- Continue until all domains are covered
Use the Task tool to parallelize independent domain research when the codebase is large.
Step 7 — Generate Infrastructure Docs
Generate 2-5 infrastructure documents by reading:
- Terraform/CloudFormation/Pulumi files
- CI/CD configs (`.github/workflows/`, `.circleci/`, `Jenkinsfile`, `.gitlab-ci.yml`)
- Docker files (`Dockerfile`, `docker-compose.yml`)
- Monitoring configs (CloudWatch, Datadog, Prometheus)
- Deployment scripts
Required documents:
- `infrastructure/overview.md` — cloud architecture, deployment topology
Optional documents:
- `infrastructure/ci-cd.md` — build and deploy pipeline
- `infrastructure/monitoring.md` — observability, alerting, logging
- `infrastructure/database-management.md` — backup, scaling, connection pooling
Step 8 — Finalize
- Generate README.md: Create the vault's master index using the `assets/README.md.template`. List every generated document as a `[[wikilink]]`, organized by category.
- Generate CLAUDE.md: Create the vault's CLAUDE.md using the `assets/CLAUDE.md.template`. Fill in:
  - Product name
  - Vault structure (categories and their contents)
  - Source code paths table
  - Conventions (wikilinks, document format, Mermaid diagrams)
- Validate wikilinks: Run `scripts/validate-wikilinks.sh` on the output directory. Fix any broken links it reports.
- Print summary:
## Generation Complete

Product: [name]
Location: [path]
Documents: [count] across [N] categories
Wikilinks: [count] total, [broken] broken

Categories:
- architecture/: X docs
- data-model/: X docs
- api/: X docs
- domains/: X docs
- infrastructure/: X docs

Open the vault in Obsidian to browse the knowledge graph.
Key Rules
- Code-first: Every statement must trace to actual source code. Never invent or assume logic. If you cannot find the implementation, say "Not found in source" rather than guessing.
- Source attribution: Every document must include a `> **Source files**:` block listing the exact files analyzed. Use relative paths from the codebase root.
- Fully-qualified wikilinks: Always use the full path from the vault root: `[[domains/campaigns/overview]]`, never `[[overview]]` or `[[campaigns/overview]]`.
- One concern per file: Each document covers exactly one topic. Split large topics into multiple documents.
- Mermaid diagrams: Include a Mermaid diagram for any flow with 3+ steps. Use `graph TD`/`TB`/`LR` for flowcharts and `sequenceDiagram` for interaction flows.
- No marketing language: Write for engineers. Include file paths, function names, and implementation details. This is internal documentation, not a product page.
- Quality check: Before finalizing each document, verify it against `references/quality-checklist.md`.
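The fully-qualified wikilink rule is mechanically checkable. As an illustration, here is a hypothetical Python sketch of the kind of check `scripts/validate-wikilinks.sh` is described as performing (the actual script may work differently; `broken_wikilinks` is an illustrative name):

```python
import re
from pathlib import Path

# Capture the target of [[target]], [[target|alias]], or [[target#heading]].
WIKILINK = re.compile(r"\[\[([^\]|#]+)")

def broken_wikilinks(vault: str) -> list[tuple[str, str]]:
    """Return (file, target) pairs whose [[target]] has no matching .md file in the vault."""
    vault_path = Path(vault)
    broken = []
    for md in vault_path.rglob("*.md"):
        for target in WIKILINK.findall(md.read_text()):
            target = target.strip()
            # Fully-qualified targets resolve relative to the vault root.
            if not (vault_path / f"{target}.md").exists():
                broken.append((str(md.relative_to(vault_path)), target))
    return broken
```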