# Awesome-omni-skills readme

README Generator workflow skill. Use this skill when the user needs comprehensive project documentation written by an expert technical writer: a README.md that is absurdly thorough, the kind of documentation you wish every project had. The operator should preserve the upstream workflow, copied support files, and provenance before merging or handing off.
```bash
# Clone the full repository
git clone https://github.com/diegosouzapw/awesome-omni-skills

# Or install only this skill into ~/.claude/skills via a throwaway clone
T=$(mktemp -d) && git clone --depth=1 https://github.com/diegosouzapw/awesome-omni-skills "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/readme" ~/.claude/skills/diegosouzapw-awesome-omni-skills-readme && rm -rf "$T"
```
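The temp-dir install pattern above can be sketched against local stand-in directories instead of a real clone; the paths and file contents below are illustrative only:

```shell
# Stand-in for the upstream clone; in the real command this is a git clone.
UPSTREAM=$(mktemp -d)
mkdir -p "$UPSTREAM/skills/readme"
echo "skill body" > "$UPSTREAM/skills/readme/SKILL.md"

# Stand-in for ~/.claude/skills; the name mirrors the real destination.
DEST=$(mktemp -d)
cp -r "$UPSTREAM/skills/readme" "$DEST/diegosouzapw-awesome-omni-skills-readme"

# The throwaway clone is always removed, matching the rm -rf "$T" step.
rm -rf "$UPSTREAM"
```

The same shape (`mktemp -d`, copy, `rm -rf`) keeps the operator's machine free of a lingering full checkout.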
`skills/readme/SKILL.md`

# README Generator
## Overview

This public intake copy packages `plugins/antigravity-awesome-skills-claude/skills/readme` from https://github.com/sickn33/antigravity-awesome-skills into the native Omni Skills editorial shape without hiding its origin.

Use it when the operator needs the upstream workflow, support files, and repository context to stay intact while the public validator and private enhancer continue their normal downstream flow.

This intake keeps the copied upstream files intact and uses `metadata.json` plus `ORIGIN.md` as the provenance anchor for review.
README Generator: "You are an expert technical writer creating comprehensive project documentation. Your goal is to write a README.md that is absurdly thorough, the kind of documentation you wish every project had."
Imported source sections that did not map cleanly to the public headings are still preserved below or in the support files. Notable imported sections: The Three Purposes of a README, Before Writing, README Structure, Key Features, Tech Stack, Prerequisites.
## When to Use This Skill
Use this section as the trigger filter. It should make the activation boundary explicit before the operator loads files, runs commands, or opens a pull request.
- User wants to create or update a README.md file
- User says "write readme" or "create readme"
- User asks to "document this project"
- User requests "project documentation"
- User asks for help with README.md
- Use when the request clearly matches the imported source intent: You are an expert technical writer creating comprehensive project documentation. Your goal is to write a README.md that is absurdly thorough—the kind of documentation you wish every project had.
## Operating Table
| Situation | Start here | Why it matters |
|---|---|---|
| First-time use | | Confirms repository, branch, commit, and imported path before touching the copied workflow |
| Provenance review | | Gives reviewers a plain-language audit trail for the imported source |
| Workflow execution | | Starts with the smallest copied file that materially changes execution |
| Supporting context | | Adds the next most relevant copied source file without loading the entire package |
| Handoff decision | | Helps the operator switch to a stronger native skill when the task drifts |
## Workflow
This workflow is intentionally editorial and operational at the same time. It keeps the imported source useful to the operator while still satisfying the public intake standards that feed the downstream enhancer flow.
- Confirm the user goal, the scope of the imported workflow, and whether this skill is still the right router for the task.
- Read the overview and provenance files before loading any copied upstream support files.
- Load only the references, examples, prompts, or scripts that materially change the outcome for the current request.
- Execute the upstream workflow while keeping provenance and source boundaries explicit in the working notes.
- Validate the result against the upstream expectations and the evidence you can point to in the copied files.
- Escalate or hand off to a related skill when the work moves out of this imported workflow's center of gravity.
- Before merge or closure, record what was used, what changed, and what the reviewer still needs to verify.
## Imported Workflow Notes

### Imported: The Three Purposes of a README
- Local Development - Help any developer get the app running locally in minutes
- Understanding the System - Explain in great detail how the app works
- Production Deployment - Cover everything needed to deploy and maintain in production
## Examples

### Example 1: Ask for the upstream workflow directly
Use @readme to handle <task>. Start from the copied upstream workflow, load only the files that change the outcome, and keep provenance visible in the answer.
Explanation: This is the safest starting point when the operator needs the imported workflow, but not the entire repository.
### Example 2: Ask for a provenance-grounded review
Review @readme against metadata.json and ORIGIN.md, then explain which copied upstream files you would load first and why.
Explanation: Use this before review or troubleshooting when you need a precise, auditable explanation of origin and file selection.
### Example 3: Narrow the copied support files before execution
Use @readme for <task>. Load only the copied references, examples, or scripts that change the outcome, and name the files explicitly before proceeding.
Explanation: This keeps the skill aligned with progressive disclosure instead of loading the whole copied package by default.
### Example 4: Build a reviewer packet
Review @readme using the copied upstream files plus provenance, then summarize any gaps before merge.
Explanation: This is useful when the PR is waiting for human review and you want a repeatable audit packet.
## Best Practices
Treat the generated public skill as a reviewable packaging layer around the upstream repository. The goal is to keep provenance explicit and load only the copied source material that materially improves execution.
- Be Absurdly Thorough - When in doubt, include it. More detail is always better.
- Use Code Blocks Liberally - Every command should be copy-pasteable.
- Show Example Output - When helpful, show what the user should expect to see.
- Explain the Why - Don't just say "run this command," explain what it does.
- Assume Fresh Machine - Write as if the reader has never seen this codebase.
- Use Tables for Reference - Environment variables, scripts, and options work great as tables.
- Keep Commands Current - Use pnpm if the project uses it, npm if it uses npm, etc.
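The "Keep Commands Current" point can be automated with a small lockfile check; a sketch, assuming the standard lockfile names (`pnpm-lock.yaml`, `yarn.lock`, `package-lock.json`):

```shell
# Pick the package manager to document based on which lockfile exists.
detect_pm() {
  if [ -f "$1/pnpm-lock.yaml" ]; then echo pnpm
  elif [ -f "$1/yarn.lock" ]; then echo yarn
  elif [ -f "$1/package-lock.json" ]; then echo npm
  else echo unknown
  fi
}

# Demo against a throwaway project directory.
proj=$(mktemp -d)
touch "$proj/pnpm-lock.yaml"
detect_pm "$proj"   # prints: pnpm
```

Running this before writing the README keeps every documented command consistent with the project's actual toolchain.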
## Imported Operating Notes

### Imported: Writing Principles

- Be Absurdly Thorough - When in doubt, include it. More detail is always better.
- Use Code Blocks Liberally - Every command should be copy-pasteable.
- Show Example Output - When helpful, show what the user should expect to see.
- Explain the Why - Don't just say "run this command," explain what it does.
- Assume Fresh Machine - Write as if the reader has never seen this codebase.
- Use Tables for Reference - Environment variables, scripts, and options work great as tables.
- Keep Commands Current - Use `pnpm` if the project uses it, `npm` if it uses npm, etc.
- Include a Table of Contents - For READMEs over ~200 lines, add a TOC at the top.
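The table-of-contents principle above can be sketched as a tiny generator; this uses the common lowercase-and-hyphenate anchor rule and deliberately ignores edge cases such as punctuation or duplicate headings:

```shell
# Emit a linked TOC entry for every "## " heading in a markdown file.
toc() {
  grep '^## ' "$1" | sed 's/^## //' | while IFS= read -r heading; do
    slug=$(printf '%s' "$heading" | tr 'A-Z ' 'a-z-')
    printf -- '- [%s](#%s)\n' "$heading" "$slug"
  done
}

readme=$(mktemp)
printf '%s\n' '# Title' '## Getting Started' '## Environment Variables' > "$readme"
toc "$readme"
```

For a real README, paste the output under the title and spot-check that each anchor resolves on the rendering platform.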
## Troubleshooting

### Problem: The operator skipped the imported context and answered too generically
Symptoms: The result ignores the upstream workflow in `plugins/antigravity-awesome-skills-claude/skills/readme`, fails to mention provenance, or does not use any copied source files at all.

Solution: Re-open `metadata.json`, `ORIGIN.md`, and the most relevant copied upstream files. Load only the files that materially change the answer, then restate the provenance before continuing.
### Problem: The imported workflow feels incomplete during review
Symptoms: Reviewers can see the generated `SKILL.md`, but they cannot quickly tell which references, examples, or scripts matter for the current task.
Solution: Point at the exact copied references, examples, scripts, or assets that justify the path you took. If the gap is still real, record it in the PR instead of hiding it.
### Problem: The task drifted into a different specialization
Symptoms: The imported skill starts in the right place, but the work turns into debugging, architecture, design, security, or release orchestration that a native skill handles better.

Solution: Use the related skills section to hand off deliberately. Keep the imported provenance visible so the next skill inherits the right context instead of starting blind.
## Imported Troubleshooting Notes

### Imported: Troubleshooting

#### Database Connection Issues
Error: `could not connect to server: Connection refused`

Solution:
- Verify PostgreSQL is running: `pg_isready` or `docker ps`
- Check `DATABASE_URL` format: `postgresql://USER:PASSWORD@HOST:PORT/DATABASE`
- Ensure database exists: `bin/rails db:create`
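The `DATABASE_URL` format check above can be scripted as a quick guard before deeper debugging; a sketch using illustrative credentials:

```shell
# Return ok only when the URL matches postgresql://USER:PASSWORD@HOST:PORT/DATABASE.
check_db_url() {
  case $1 in
    postgresql://*:*@*:*/*) echo ok ;;
    *) echo "unexpected format: $1" ;;
  esac
}

check_db_url 'postgresql://app:secret@localhost:5432/myapp_development'   # prints: ok
check_db_url 'localhost/myapp'   # flags the malformed URL
```

A shape failure here usually means the env var was copied from the wrong environment, so this is worth running before blaming the server.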
#### Pending Migrations
Error: `Migrations are pending`

Solution:
```bash
bin/rails db:migrate
```
#### Asset Compilation Issues
Error: `The asset "application.css" is not present in the asset pipeline`

Solution:
```bash
# Clear and recompile assets
bin/rails assets:clobber
bin/rails assets:precompile
```
#### Bundle Install Failures
Error: Native extension build failures

Solution:
- Ensure system dependencies are installed:
  ```bash
  # macOS
  brew install postgresql libpq

  # Ubuntu
  sudo apt-get install libpq-dev
  ```
- Try again: `bundle install`
#### Credentials Issues
Error: `ActiveSupport::MessageEncryptor::InvalidMessage`

Solution: The master key doesn't match the credentials file. Either:
- Get the correct `config/master.key` from another team member
- Or regenerate credentials: `rm config/credentials.yml.enc && bin/rails credentials:edit`
#### Vite/Inertia Issues
Error: `Vite Ruby - Build failed`

Solution:
```bash
# Clear Vite cache
rm -rf node_modules/.vite

# Reinstall JS dependencies
rm -rf node_modules && yarn install
```
#### Solid Queue Issues
Error: Jobs not processing

Solution: Ensure the queue worker is running:
```bash
bin/jobs
# or
bin/rails solid_queue:start
```
### 11. Contributing (Optional)
Include if open source or team project.

### 12. License (Optional)

---

## Related Skills
- `@00-andruia-consultant-v2` - Use when the work is better handled by that native specialization after this imported skill establishes context.
- `@10-andruia-skill-smith-v2` - Use when the work is better handled by that native specialization after this imported skill establishes context.
- `@20-andruia-niche-intelligence-v2` - Use when the work is better handled by that native specialization after this imported skill establishes context.
- `@2d-games` - Use when the work is better handled by that native specialization after this imported skill establishes context.

## Additional Resources
Use this support matrix and the linked files below as the operator packet for this imported skill. They should reflect real copied source material, not generic scaffolding.

| Resource family | What it gives the reviewer | Example path |
| --- | --- | --- |
| `references` | copied reference notes, guides, or background material from upstream | `references/n/a` |
| `examples` | worked examples or reusable prompts copied from upstream | `examples/n/a` |
| `scripts` | upstream helper scripts that change execution or validation | `scripts/n/a` |
| `agents` | routing or delegation notes that are genuinely part of the imported package | `agents/n/a` |
| `assets` | supporting assets or schemas copied from the source package | `assets/n/a` |

### Imported Reference Notes

#### Imported: Before Writing

### Step 1: Deep Codebase Exploration
Before writing a single line of documentation, thoroughly explore the codebase. You MUST understand:

**Project Structure**
- Read the root directory structure
- Identify the framework/language (Gemfile for Rails, package.json, go.mod, requirements.txt, etc.)
- Find the main entry point(s)
- Map out the directory organization

**Configuration Files**
- .env.example, .env.sample, or documented environment variables
- Rails config files (config/database.yml, config/application.rb, config/environments/)
- Credentials setup (config/credentials.yml.enc, config/master.key)
- Docker files (Dockerfile, docker-compose.yml)
- CI/CD configs (.github/workflows/, .gitlab-ci.yml, etc.)
- Deployment configs (config/deploy.yml for Kamal, fly.toml, render.yaml, Procfile, etc.)

**Database**
- db/schema.rb or db/structure.sql
- Migrations in db/migrate/
- Seeds in db/seeds.rb
- Database type from config/database.yml

**Key Dependencies**
- Gemfile and Gemfile.lock for Ruby gems
- package.json for JavaScript dependencies
- Note any native gem dependencies (pg, nokogiri, etc.)

**Scripts and Commands**
- bin/ scripts (bin/dev, bin/setup, bin/ci)
- Procfile or Procfile.dev
- Rake tasks (lib/tasks/)

### Step 2: Identify Deployment Target
Look for these files to determine deployment platform and tailor instructions:

- `Dockerfile` / `docker-compose.yml` → Docker-based deployment
- `vercel.json` / `.vercel/` → Vercel
- `netlify.toml` → Netlify
- `fly.toml` → Fly.io
- `railway.json` / `railway.toml` → Railway
- `render.yaml` → Render
- `app.yaml` → Google App Engine
- `Procfile` → Heroku or Heroku-like platforms
- `.ebextensions/` → AWS Elastic Beanstalk
- `serverless.yml` → Serverless Framework
- `terraform/` / `*.tf` → Terraform/Infrastructure as Code
- `k8s/` / `kubernetes/` → Kubernetes

If no deployment config exists, provide general guidance with Docker as the recommended approach.

### Step 3: Ask Only If Critical
Only ask the user questions if you cannot determine:
- What the project does (if not obvious from code)
- Specific deployment credentials or URLs needed
- Business context that affects documentation

Otherwise, proceed with exploration and writing.
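Step 2's deployment-target detection can be sketched as a priority-ordered file check; the marker names come from the mapping above, and the fallback mirrors the Docker recommendation:

```shell
# Report the first deployment marker file found in a project directory.
detect_platform() {
  for marker in fly.toml render.yaml netlify.toml vercel.json railway.json \
                app.yaml serverless.yml Procfile Dockerfile; do
    if [ -e "$1/$marker" ]; then
      echo "$marker"
      return 0
    fi
  done
  echo "none: default to Docker guidance"
}

proj=$(mktemp -d)
touch "$proj/fly.toml"
detect_platform "$proj"   # prints: fly.toml
```

The ordering is a judgment call: platform-specific files beat generic ones like Dockerfile, since a repo often carries both.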
---

#### Imported: README Structure
Write the README with these sections in order:

### 1. Project Title and Overview
```markdown
# Project Name

Brief description of what the project does and who it's for. 2-3 sentences max.
```

#### Imported: Key Features
- Feature 1
- Feature 2
- Feature 3
### 2. Tech Stack
List all major technologies:

#### Imported: Tech Stack
- **Language**: Ruby 3.3+
- **Framework**: Rails 7.2+
- **Frontend**: Inertia.js with React
- **Database**: PostgreSQL 16
- **Background Jobs**: Solid Queue
- **Caching**: Solid Cache
- **Styling**: Tailwind CSS
- **Deployment**: [Detected platform]
### 3. Prerequisites
What must be installed before starting:

#### Imported: Prerequisites
- Node.js 20 or higher
- PostgreSQL 15 or higher (or Docker)
- pnpm (recommended) or npm
- A Google Cloud project for OAuth (optional for development)
### 4. Getting Started
The complete local development guide:

#### Imported: Getting Started

### 1. Clone the Repository
```bash
git clone https://github.com/user/repo.git
cd repo
```

### 2. Install Ruby Dependencies
Ensure you have Ruby 3.3+ installed (via rbenv, asdf, or mise):
```bash
bundle install
```

### 3. Install JavaScript Dependencies
```bash
yarn install
```

### 4. Environment Setup
Copy the example environment file:
```bash
cp .env.example .env
```
Configure the following variables:

| Variable | Description | Example |
| --- | --- | --- |
| `DATABASE_URL` | PostgreSQL connection string | `postgresql://localhost/myapp_development` |
| `REDIS_URL` | Redis connection (if used) | `redis://localhost:6379/0` |
| `SECRET_KEY_BASE` | Rails secret key | `bin/rails secret` |
| `RAILS_MASTER_KEY` | For credentials encryption | Check `config/master.key` |

### 5. Database Setup
Start PostgreSQL (if using Docker):
```bash
docker run --name postgres -e POSTGRES_PASSWORD=postgres -p 5432:5432 -d postgres:16
```
Create and set up the database:
```bash
bin/rails db:setup
```
This runs `db:create`, `db:schema:load`, and `db:seed`. For existing databases, run migrations:
```bash
bin/rails db:migrate
```

### 6. Start Development Server
Using Foreman/Overmind (recommended, runs Rails + Vite):
```bash
bin/dev
```
Or manually:
```bash
# Terminal 1: Rails server
bin/rails server

# Terminal 2: Vite dev server (for Inertia/React)
bin/vite dev
```
Open [http://localhost:3000](http://localhost:3000) in your browser.

Include every step. Assume the reader is setting up on a fresh machine.
### 5. Architecture Overview
This is where you go absurdly deep:

#### Imported: Architecture

### Directory Structure
```
├── app/
│   ├── controllers/        # Rails controllers
│   │   ├── concerns/       # Shared controller modules
│   │   └── api/            # API-specific controllers
│   ├── models/             # ActiveRecord models
│   │   └── concerns/       # Shared model modules
│   ├── jobs/               # Background jobs (Solid Queue)
│   ├── mailers/            # Email templates
│   ├── views/              # Rails views (minimal with Inertia)
│   └── frontend/           # Inertia.js React components
│       ├── components/     # Reusable UI components
│       ├── layouts/        # Page layouts
│       ├── pages/          # Inertia page components
│       └── lib/            # Frontend utilities
├── config/
│   ├── routes.rb           # Route definitions
│   ├── database.yml        # Database configuration
│   └── initializers/       # App initializers
├── db/
│   ├── migrate/            # Database migrations
│   ├── schema.rb           # Current schema
│   └── seeds.rb            # Seed data
├── lib/
│   └── tasks/              # Custom Rake tasks
└── public/                 # Static assets
```

### Request Lifecycle
1. Request hits Rails router (`config/routes.rb`)
2. Middleware stack processes request (authentication, sessions, etc.)
3. Controller action executes
4. Models interact with PostgreSQL via ActiveRecord
5. Inertia renders React component with props
6. Response sent to browser

### Data Flow
```
User Action → React Component → Inertia Visit → Rails Controller → ActiveRecord → PostgreSQL
                                                                                      ↓
                                    React Props ← Inertia Response ←──────────────────┘
```

### Key Components

**Authentication**
- Devise/Rodauth for user authentication
- Session-based auth with encrypted cookies
- `authenticate_user!` before_action for protected routes

**Inertia.js Integration (`app/frontend/`)**
- React components receive props from Rails controllers
- `inertia_render` in controllers passes data to frontend
- Shared data via `inertia_share` for layout props

**Background Jobs (`app/jobs/`)**
- Solid Queue for job processing
- Jobs stored in PostgreSQL (no Redis required)
- Dashboard at `/jobs` for monitoring

**Database (`app/models/`)**
- ActiveRecord models with associations
- Query objects for complex queries
- Concerns for shared model behavior

### Database Schema
```
users
├── id (bigint, PK)
├── email (string, unique, not null)
├── encrypted_password (string)
├── name (string)
├── created_at (datetime)
└── updated_at (datetime)

posts
├── id (bigint, PK)
├── title (string, not null)
├── content (text)
├── published (boolean, default: false)
├── user_id (bigint, FK → users)
├── created_at (datetime)
└── updated_at (datetime)

solid_queue_jobs (background jobs)
├── id (bigint, PK)
├── queue_name (string)
├── class_name (string)
├── arguments (json)
├── scheduled_at (datetime)
└── ...
```
### 6. Environment Variables
Complete reference for all env vars:

#### Imported: Environment Variables

### Required
| Variable | Description | How to Get |
| --- | --- | --- |
| `DATABASE_URL` | PostgreSQL connection string | Your database provider |
| `SECRET_KEY_BASE` | Rails secret for sessions/cookies | Run `bin/rails secret` |
| `RAILS_MASTER_KEY` | Decrypts credentials file | Check `config/master.key` (not in git) |

### Optional
| Variable | Description | Default |
| --- | --- | --- |
| `REDIS_URL` | Redis connection string (for caching/ActionCable) | - |
| `RAILS_LOG_LEVEL` | Logging verbosity | `debug` (dev), `info` (prod) |
| `RAILS_MAX_THREADS` | Puma thread count | `5` |
| `WEB_CONCURRENCY` | Puma worker count | `2` |
| `SMTP_ADDRESS` | Mail server hostname | - |
| `SMTP_PORT` | Mail server port | `587` |

### Rails Credentials
Sensitive values should be stored in Rails encrypted credentials:
```bash
# Edit credentials (opens in $EDITOR)
bin/rails credentials:edit

# Or for environment-specific credentials
RAILS_ENV=production bin/rails credentials:edit
```
Credentials file structure:
```yaml
secret_key_base: xxx
stripe:
  public_key: pk_xxx
  secret_key: sk_xxx
google:
  client_id: xxx
  client_secret: xxx
```
Access in code: `Rails.application.credentials.stripe[:secret_key]`

### Environment-Specific

**Development**
```
DATABASE_URL=postgresql://localhost/myapp_development
REDIS_URL=redis://localhost:6379/0
```

**Production**
```
DATABASE_URL=<production-connection-string>
RAILS_ENV=production
RAILS_SERVE_STATIC_FILES=true
```
### 7. Available Scripts

#### Imported: Available Scripts

| Command | Description |
| --- | --- |
| `bin/dev` | Start development server (Rails + Vite via Foreman) |
| `bin/rails server` | Start Rails server only |
| `bin/vite dev` | Start Vite dev server only |
| `bin/rails console` | Open Rails console (IRB with app loaded) |
| `bin/rails db:migrate` | Run pending database migrations |
| `bin/rails db:rollback` | Rollback last migration |
| `bin/rails db:seed` | Run database seeds |
| `bin/rails db:reset` | Drop, create, migrate, and seed database |
| `bin/rails routes` | List all routes |
| `bin/rails test` | Run test suite (Minitest) |
| `bundle exec rspec` | Run test suite (RSpec, if used) |
| `bin/rails assets:precompile` | Compile assets for production |
| `bin/rubocop` | Run Ruby linter |
| `yarn lint` | Run JavaScript/TypeScript linter |
### 8. Testing

#### Imported: Testing

### Running Tests
```bash
# Run all tests (Minitest)
bin/rails test

# Run all tests (RSpec, if used)
bundle exec rspec

# Run specific test file
bin/rails test test/models/user_test.rb
bundle exec rspec spec/models/user_spec.rb

# Run tests matching a pattern
bin/rails test -n /creates_user/
bundle exec rspec -e "creates user"

# Run system tests (browser tests)
bin/rails test:system

# Run with coverage (SimpleCov)
COVERAGE=true bin/rails test
```

### Test Structure
```
test/                 # Minitest structure
├── controllers/      # Controller tests
├── models/           # Model unit tests
├── integration/      # Integration tests
├── system/           # System/browser tests
├── fixtures/         # Test data
└── test_helper.rb    # Test configuration

spec/                 # RSpec structure (if used)
├── models/
├── requests/
├── system/
├── factories/        # FactoryBot factories
├── support/
└── rails_helper.rb
```

### Writing Tests

**Minitest example:**
```ruby
require "test_helper"

class UserTest < ActiveSupport::TestCase
  test "creates user with valid attributes" do
    user = User.new(email: "test@example.com", name: "Test User")
    assert user.valid?
  end

  test "requires email" do
    user = User.new(name: "Test User")
    assert_not user.valid?
    assert_includes user.errors[:email], "can't be blank"
  end
end
```

**RSpec example:**
```ruby
require "rails_helper"

RSpec.describe User, type: :model do
  describe "validations" do
    it "is valid with valid attributes" do
      user = build(:user)
      expect(user).to be_valid
    end

    it "requires an email" do
      user = build(:user, email: nil)
      expect(user).not_to be_valid
      expect(user.errors[:email]).to include("can't be blank")
    end
  end
end
```

### Frontend Testing
For Inertia/React components:
```bash
yarn test
```
```typescript
import { render, screen } from '@testing-library/react'
import { Dashboard } from './Dashboard'

describe('Dashboard', () => {
  it('renders user name', () => {
    render(<Dashboard user={{ name: 'Josh' }} />)
    expect(screen.getByText('Josh')).toBeInTheDocument()
  })
})
```
### 9. Deployment
Tailor this to the detected platform (look for Dockerfile, fly.toml, render.yaml, kamal/, etc.):

#### Imported: Deployment

### Kamal (Recommended for Rails)
If using Kamal for deployment:
```bash
# Setup Kamal (first time)
kamal setup

# Deploy
kamal deploy

# Rollback to previous version
kamal rollback

# View logs
kamal app logs

# Run console on production
kamal app exec --interactive 'bin/rails console'
```
Configuration lives in `config/deploy.yml`.

### Docker
Build and run:
```bash
# Build image
docker build -t myapp .

# Run with environment variables
docker run -p 3000:3000 \
  -e DATABASE_URL=postgresql://... \
  -e SECRET_KEY_BASE=... \
  -e RAILS_ENV=production \
  myapp
```

### Heroku
```bash
# Create app
heroku create myapp

# Add PostgreSQL
heroku addons:create heroku-postgresql:mini

# Set environment variables
heroku config:set SECRET_KEY_BASE=$(bin/rails secret)
heroku config:set RAILS_MASTER_KEY=$(cat config/master.key)

# Deploy
git push heroku main

# Run migrations
heroku run bin/rails db:migrate
```

### Fly.io
```bash
# Launch (first time)
fly launch

# Deploy
fly deploy

# Run migrations
fly ssh console -C "bin/rails db:migrate"

# Open console
fly ssh console -C "bin/rails console"
```

### Render
If `render.yaml` exists, connect your repo to Render and it will auto-deploy. Manual setup:
1. Create new Web Service
2. Connect GitHub repository
3. Set build command: `bundle install && bin/rails assets:precompile`
4. Set start command: `bin/rails server`
5. Add environment variables in dashboard

### Manual/VPS Deployment
```bash
# On the server:

# Pull latest code
git pull origin main

# Install dependencies
bundle install --deployment

# Compile assets
RAILS_ENV=production bin/rails assets:precompile

# Run migrations
RAILS_ENV=production bin/rails db:migrate

# Restart application server (e.g., Puma via systemd)
sudo systemctl restart myapp
```
### 10. Troubleshooting

#### Imported: Output Format
Generate a complete README.md file with:
- Proper markdown formatting
- Code blocks with language hints (`bash`, `typescript`, etc.)
- Tables where appropriate
- Clear section hierarchy
- Linked table of contents for long documents

Write the README directly to `README.md` in the project root.

#### Imported: Limitations
- Use this skill only when the task clearly matches the scope described above.
- Do not treat the output as a substitute for environment-specific validation, testing, or expert review.
- Stop and ask for clarification if required inputs, permissions, safety boundaries, or success criteria are missing.