Awesome-omni-skill pss-agent-toml
Use when creating .agent.toml profiles for Claude Code agents. Trigger with /pss-setup-agent. AI selects elements across 6 types, validates coherence, produces conflict-free profiles.
git clone https://github.com/diegosouzapw/awesome-omni-skill
T=$(mktemp -d) && git clone --depth=1 https://github.com/diegosouzapw/awesome-omni-skill "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/data-ai/pss-agent-toml" ~/.claude/skills/diegosouzapw-awesome-omni-skill-pss-agent-toml && rm -rf "$T"
skills/data-ai/pss-agent-toml/SKILL.md
PSS Agent TOML Profile Builder
Overview
An `.agent.toml` file defines the complete configuration profile for a Claude Code agent: which skills it should use, which sub-agents complement it, which slash commands enhance its workflow, which rules constrain its behavior, which MCP servers extend its capabilities, and which LSP servers support its languages.
FUNDAMENTAL PRINCIPLE: AI Agent is ALWAYS Required
An AI agent MUST be the decision-maker for element selection. No mechanical script or automated pipeline can produce a correct agent profile. Here is why:
- Conflict detection requires reasoning: A script cannot determine that "jest-testing" and "vitest-testing" are mutually exclusive, or that a "database-management" skill is redundant when a "postgres-mcp" server is already included.
- Use case prediction requires understanding: Choosing the right skills means predicting what the agent will actually encounter — a "security auditor" working on a healthcare app needs HIPAA compliance skills that no keyword matcher would surface.
- Cross-type coherence requires judgment: A skill, an MCP server, and an agent can all provide "browser automation" — deciding which combination to keep requires reading their actual content and understanding the trade-offs.
- Framework/runtime compatibility requires knowledge: Knowing that Vitest is the correct test runner for a Vite-based project, or that Bun replaces npm/yarn, requires real-world understanding that no scoring algorithm provides.
The Rust binary provides scored candidates. The AI agent makes the decisions. This is the same principle as the prompt hook: the binary suggests, Claude chooses.
This skill teaches ANY agent or Claude model how to:
- Search the element index to find candidates for each section
- Evaluate candidates by reading their actual SKILL.md/agent.md content
- Compare alternatives to resolve conflicts — including cross-type overlap detection
- Add specific elements from any source (local, marketplace, GitHub, network)
- Validate coherence — ensure no overlapping, conflicting, or redundant elements across ALL types
- Assemble and validate the final `.agent.toml` file
Default mode is autonomous: the agent executes the full pipeline, makes all decisions, produces the `.agent.toml`, and reports the result. Interactive collaboration with the user or orchestrator is optional — it only happens when explicitly requested or when truly unresolvable conflicts are detected.
Prerequisites
- Skill index must exist: `~/.claude/cache/skill-index.json` — run `/pss-reindex-skills` if missing
- PSS Rust binary must be built: located at `$CLAUDE_PLUGIN_ROOT/rust/skill-suggester/bin/<platform>`
- Agent definition file: the `.md` file describing the agent to profile
Quick Start
The fastest path uses the `/pss-setup-agent` command, which spawns a profiler agent:
/pss-setup-agent /path/to/agent.md --requirements /path/to/prd.md
For full control, follow the step-by-step process below.
The .agent.toml Format
Every `.agent.toml` has these sections:

```toml
# Auto-generated by PSS Agent Profiler
# Agent: <name>
# Generated: <timestamp>

[agent]  # REQUIRED: Agent identification
name = "my-agent"                 # kebab-case, matches ^[a-z0-9][a-z0-9_-]*$
source = "path"                   # "path" or "plugin:<name>"
path = "/abs/path/to/my-agent.md"

[requirements]  # OPTIONAL: Project context used for profiling
files = ["prd.md"]                # Basenames of requirement files
project_type = "web-app"          # web-app, cli-tool, mobile-app, library, api, microservice
tech_stack = ["typescript", "react", "postgresql"]

[skills]  # REQUIRED: Tiered skill recommendations
primary = ["skill-a", "skill-b"]    # Max 7 — core daily-use skills (score >= 60%)
secondary = ["skill-c", "skill-d"]  # Max 12 — useful common tasks (score 30-59%)
specialized = ["skill-e"]           # Max 8 — niche situations (score 15-29%)

[skills.excluded]  # OPTIONAL: Transparency — why certain skills were rejected
# "vue-frontend" = "Conflicts with React (requirements specify React)"
# "jest-testing" = "Vitest preferred for Vite-based project"

[agents]  # OPTIONAL: Complementary sub-agents
recommended = ["sleuth", "e2e-tester"]

[commands]  # OPTIONAL: Recommended slash commands
recommended = ["commit", "describe-pr"]

[rules]  # OPTIONAL: Enforcement rules
recommended = ["claim-verification", "observe-before-editing"]

[mcp]  # OPTIONAL: MCP servers for extended capabilities
recommended = ["chrome-devtools"]

[hooks]  # OPTIONAL: Hook configurations
recommended = []

[lsp]  # OPTIONAL: Language servers (assigned by language detection)
recommended = ["typescript-lsp", "pyright-lsp"]
```
Schema reference: `$CLAUDE_PLUGIN_ROOT/schemas/pss-agent-toml-schema.json`
Validator: uv run scripts/pss_validate_agent_toml.py <file> --check-index --verbose
Step-by-Step Profile Building Process
Phase 1: Gather Context
1.1 Read the agent definition file
Read the agent's `.md` file completely. Extract:
- name: From YAML frontmatter `name:` field or filename stem
- description: From frontmatter `description:` or first non-heading paragraph
- role: developer, tester, reviewer, deployer, designer, security, data-scientist
- duties: From bullet lists under headings containing "responsibilities", "duties", "tasks"
- tools: From frontmatter `tools:`/`allowed-tools:` or tool mentions in body
- domains: From frontmatter or inferred (security, frontend, backend, devops, data, etc.)
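The extraction rules above can be sketched as a minimal parser. This is an illustration only, not part of the PSS toolchain: `parse_agent_md` is a hypothetical helper, and real frontmatter may need a full YAML parser.

```python
import re

def parse_agent_md(text: str, filename: str) -> dict:
    """Naive extraction of name/description from an agent .md file.

    Falls back to the filename stem for name and to the first
    non-heading paragraph for description, as described above.
    """
    meta = {}
    m = re.match(r"^---\n(.*?)\n---\n?", text, re.DOTALL)
    body = text
    if m:
        body = text[m.end():]
        for line in m.group(1).splitlines():
            if ":" in line:
                key, _, val = line.partition(":")
                meta[key.strip()] = val.strip()
    name = meta.get("name") or filename.rsplit("/", 1)[-1].removesuffix(".md")
    description = meta.get("description")
    if not description:
        # First non-empty paragraph that is not a heading
        for para in body.split("\n\n"):
            p = para.strip()
            if p and not p.startswith("#"):
                description = p
                break
    return {"name": name, "description": description}
```

A real profiler would also walk headings for duties and scan for tool mentions; this sketch covers only the two frontmatter-or-fallback fields.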
1.2 Read requirements documents (if available)
Read all provided design/requirements files. Extract:
- project_type: What is being built (web-app, mobile-app, cli-tool, library, etc.)
- tech_stack: Specific technologies, frameworks, languages
- key features: Core capabilities the project needs
- constraints: Performance, compliance, platform targets
1.3 Detect project languages from cwd
Scan the working directory for:
- `package.json` / `tsconfig.json` → TypeScript/JavaScript
- `pyproject.toml` / `setup.py` → Python
- `Cargo.toml` → Rust
- `go.mod` → Go
- `Package.swift` / `*.swift` → Swift
- `pom.xml` / `build.gradle` → Java
- `CMakeLists.txt` → C/C++
This determines LSP server assignment.
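The marker-file scan can be sketched as follows. This is an illustrative helper, not part of the PSS binary; the marker-to-language table mirrors the list above.

```python
from pathlib import Path

# Marker files in cwd → detected language (mirrors the list above)
LANGUAGE_MARKERS = {
    "package.json": "typescript/javascript",
    "tsconfig.json": "typescript/javascript",
    "pyproject.toml": "python",
    "setup.py": "python",
    "Cargo.toml": "rust",
    "go.mod": "go",
    "Package.swift": "swift",
    "pom.xml": "java",
    "build.gradle": "java",
    "CMakeLists.txt": "c/c++",
}

def detect_languages(cwd: str) -> set:
    """Return the set of languages whose marker files exist in cwd."""
    root = Path(cwd)
    langs = {lang for marker, lang in LANGUAGE_MARKERS.items()
             if (root / marker).exists()}
    # Loose *.swift sources also indicate Swift, per the list above
    if any(root.glob("*.swift")):
        langs.add("swift")
    return langs
```

The resulting set then drives the `[lsp]` assignment in Phase 6.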
Phase 1 Completion Checklist — Copy this checklist and track your progress (ALL items must be checked before proceeding to Phase 2):
- Agent `.md` file has been read in full (not just frontmatter)
- `name` extracted (from frontmatter `name:` or filename stem)
- `description` extracted (frontmatter or first non-heading paragraph)
- `role` classified (developer/tester/reviewer/deployer/designer/security/data-scientist)
- `duties` extracted (bullet lists under responsibilities/duties/tasks headings)
- `tools` extracted (from frontmatter `tools:`/`allowed-tools:` or tool mentions in body)
- `domains` extracted or inferred (security/frontend/backend/devops/data/etc.)
- All `--requirements` files have been read in full (or confirmed: no requirements provided)
- `project_type` identified from requirements (web-app/cli-tool/mobile-app/library/api/microservice)
- `tech_stack` extracted from requirements (specific frameworks, languages, databases)
- `key_features` noted from requirements (features that drive skill selection)
- `constraints` noted from requirements (performance, compliance, platform targets)
- Project languages detected from cwd (presence of Cargo.toml/package.json/pyproject.toml/go.mod/etc.)
- LSP server assignment pre-determined from detected languages
If ANY item is unchecked: re-read the relevant file before proceeding.
Phase 2: Get Candidates from the Index
2.1 Invoke the Rust binary for scored candidates
Build a JSON descriptor and invoke the binary:
```bash
# $$ = current shell PID, ensures unique temp file per session
cat > /tmp/pss-agent-profile-input-$$.json << 'EOF'
{
  "name": "<agent-name>",
  "description": "<agent description + requirements summary>",
  "role": "<role>",
  "duties": ["<duty1>", "<duty2>"],
  "tools": ["<tool1>", "<tool2>"],
  "domains": ["<domain1>", "<domain2>"],
  "requirements_summary": "<condensed requirements text, max 2000 chars>",
  "cwd": "<absolute path to working directory>"
}
EOF

# Invoke binary — returns up to 30 scored candidates grouped by type
"$BINARY_PATH" --agent-profile /tmp/pss-agent-profile-input-$$.json --format json --top 30
```
The binary returns scored candidates grouped by type:
{ "agent": "name", "skills": { "primary": [{"name":"...", "score":0.85, "confidence":"HIGH", "evidence":["keyword:docker"], "description":"..."}], "secondary": [...], "specialized": [...] }, "complementary_agents": ["agent-x"], "commands": [{"name":"...", "score":0.6, ...}], "rules": [{"name":"...", "score":0.5, ...}], "mcp": [{"name":"...", "score":0.4, ...}], "lsp": [{"name":"...", "score":0.3, ...}] }
CRITICAL: These are CANDIDATES, not final selections. The binary scores by keyword/intent matching only. YOU must now evaluate each candidate intelligently.
2.2 Search for additional candidates
If the binary output doesn't cover a known need from the requirements, search the index directly:
```bash
# Powerful multi-field index search — supports type, category, language, framework filters
cat ~/.claude/cache/skill-index.json | python3 -c "
import json, sys
idx = json.load(sys.stdin)
args = sys.argv[1:]
query = None
filters = {}
for i, a in enumerate(args):
    if a.startswith('--type='):
        filters['type'] = a.split('=',1)[1]
    elif a.startswith('--category='):
        filters['category'] = a.split('=',1)[1]
    elif a.startswith('--language='):
        filters['languages'] = a.split('=',1)[1]
    elif a.startswith('--framework='):
        filters['frameworks'] = a.split('=',1)[1]
    elif not a.startswith('--'):
        query = a.lower()
for name, e in idx['skills'].items():
    # Apply filters first (exact match on structured fields)
    if 'type' in filters and e.get('type','') != filters['type']:
        continue
    if 'category' in filters and e.get('category','') != filters['category']:
        if filters['category'] not in e.get('secondary_categories', []):
            continue
    if 'languages' in filters and filters['languages'] not in e.get('languages', []):
        continue
    if 'frameworks' in filters and filters['frameworks'] not in e.get('frameworks', []):
        continue
    # Then keyword search across multiple fields
    if query:
        searchable = ' '.join([
            name,
            e.get('description', ''),
            ' '.join(e.get('keywords', [])),
            ' '.join(e.get('use_cases', [])),
            ' '.join(e.get('intents', [])),
            e.get('category', ''),
        ]).lower()
        if query not in searchable:
            continue
    cat = e.get('category', '?')
    typ = e.get('type', 'skill')
    print(f'{typ:8} {cat:16} {name:30} {e.get(\"description\",\"\")[:55]}')
" "<search-term>" [--type=skill|agent|command|rule|mcp|lsp] [--category=<category>] [--language=<language>] [--framework=<framework>]
```
Search examples:
- `"websocket"` — find all elements mentioning websocket
- `"testing" --type=skill` — find only skills related to testing
- `"" --category=security` — list all elements in the security category
- `"react" --framework=react` — find React-specific elements
- `"" --language=python --type=skill` — find all Python skills
Phase 2 Completion Checklist (ALL items must be checked before proceeding to Phase 3):
- Temporary JSON descriptor written with session-unique filename (use PID suffix: `pss-agent-profile-input-$$.json`)
- Descriptor contains all 8 fields: `name`, `description`, `role`, `duties`, `tools`, `domains`, `requirements_summary`, `cwd`
- `requirements_summary` is 2000 characters or fewer (truncate if needed)
- Rust binary invoked with `--agent-profile`, `--format json`, `--top 30`
- Binary returned exit code 0 (non-zero = STOP and report error)
- Binary output is valid JSON (parse to verify)
- Candidates grouped by type: `skills`, `complementary_agents`, `commands`, `rules`, `mcp`, `lsp` all present
- Candidate count per type noted (for gap analysis in Phase 3)
- Additional manual index search performed for any known needs not covered by binary output
If binary fails: do NOT proceed. Report the error and stop.
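The "valid JSON, all groups present, counts noted" checks can be sketched as below. This is illustrative only: `check_binary_output` is a hypothetical helper, and the shape it assumes matches the sample output shown earlier.

```python
import json

REQUIRED_GROUPS = ("skills", "complementary_agents", "commands", "rules", "mcp", "lsp")

def check_binary_output(raw: str) -> dict:
    """Parse the binary's stdout; raise ValueError (= STOP and report) if malformed.

    Returns a per-type candidate count, noted for Phase 3 gap analysis.
    """
    try:
        data = json.loads(raw)
    except json.JSONDecodeError as e:
        raise ValueError(f"binary output is not valid JSON: {e}")
    missing = [g for g in REQUIRED_GROUPS if g not in data]
    if missing:
        raise ValueError(f"binary output missing groups: {missing}")
    counts = {}
    for g in REQUIRED_GROUPS:
        group = data[g]
        # skills is a dict of tiers; the other groups are flat lists
        counts[g] = (sum(len(v) for v in group.values())
                     if isinstance(group, dict) else len(group))
    return counts
```

A ValueError here corresponds to the checklist's "STOP and report" rule.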
Phase 3: Evaluate Each Candidate (AI Reasoning Required)
This phase is WHY an AI agent is mandatory. For every candidate returned by the binary, you must:
3.1 Read the candidate's source file
For each skill/agent/command/rule candidate, read its actual `.md` file (the path is in the index entry or binary output). Understand:
- What does this element ACTUALLY do (not just what the keywords suggest)?
- What frameworks/runtimes/languages does it target?
- What tools does it use or assume are available?
- What is its scope — broad or narrow?
3.2 Evaluate relevance to the agent's role
Ask yourself:
- Does this element solve a problem the agent will ACTUALLY encounter?
- Is it relevant to the project's tech stack and domain?
- Is it the RIGHT tool for the job, or just a keyword match?
- Would a human developer working in this role want this element?
3.3 Detect mutual exclusivity
These element families are mutually exclusive — only ONE from each group:
| Category | Alternatives |
|---|---|
| JS Framework | React, Vue, Angular, Svelte, Solid |
| JS Runtime | Node, Deno, Bun |
| JS Bundler | Webpack, Vite, esbuild, Parcel, Turbopack |
| CSS Framework | Tailwind, Bootstrap, Bulma, Chakra UI |
| ORM | Prisma, TypeORM, Drizzle, Sequelize |
| Testing | Jest, Vitest, Mocha, Jasmine |
| State Mgmt | Redux, Zustand, MobX, Recoil, Jotai |
| Deployment | Vercel, Netlify, AWS, GCP, Azure |
| Python Web | Django, Flask, FastAPI, Starlette |
| Python Test | pytest, unittest, nose2 |
| Mobile | React Native, Flutter, SwiftUI, Kotlin Compose |
Resolution rule: Keep the one that matches the tech_stack in requirements. If no requirements, keep the highest-scored and document alternatives in `[skills.excluded]`.
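The resolution rule can be sketched as follows. This is an illustration, not PSS code: `TESTING_FAMILY` uses hypothetical skill identifiers, and the tech-stack match is a naive substring test.

```python
# One mutually exclusive family from the table above (hypothetical skill names)
TESTING_FAMILY = {"jest-testing", "vitest-testing", "mocha-testing", "jasmine-testing"}

def resolve_family(candidates, family, tech_stack):
    """Keep one family member per the resolution rule; return (kept, exclusions).

    candidates: [{"name": ..., "score": ...}] as scored by the binary.
    exclusions maps rejected name -> reason, ready for [skills.excluded].
    """
    members = [c for c in candidates if c["name"] in family]
    if not members:
        return None, {}
    stack = [t.lower() for t in tech_stack]
    # Prefer the member whose name matches the tech stack (naive substring test);
    # with no requirements, fall back to the highest-scored candidate.
    matching = [c for c in members if any(t in c["name"].lower() for t in stack)]
    kept = matching[0] if matching else max(members, key=lambda c: c["score"])
    exclusions = {c["name"]: f"Mutually exclusive with {kept['name']}"
                  for c in members if c["name"] != kept["name"]}
    return kept["name"], exclusions
```

Run once per family from the table; the collected exclusions become `[skills.excluded]` comments.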
3.4 Check for obsolescence
Flag elements that reference:
- Deprecated APIs or patterns (componentWillMount, var, require() in ESM)
- End-of-life runtimes (Python 2, Node 14)
- Superseded tools (TSLint → ESLint, Moment.js → Luxon/date-fns)
Use WebSearch to verify if unsure: "Is <library> deprecated in 2026?"
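A minimal sketch of this obsolescence scan (illustrative only; the pattern table below is a small sample, and a real check should still fall back to WebSearch as noted above):

```python
# Deprecated markers from the examples above; extend as the ecosystem moves
OBSOLETE_PATTERNS = {
    "componentWillMount": "deprecated React lifecycle method",
    "tslint": "superseded by ESLint",
    "moment.js": "superseded by Luxon/date-fns",
    "python 2": "end-of-life runtime",
    "node 14": "end-of-life runtime",
}

def flag_obsolescence(skill_text: str) -> list:
    """Return human-readable flags for deprecated references in a SKILL.md."""
    text = skill_text.lower()
    return [f"{pat}: {why}" for pat, why in OBSOLETE_PATTERNS.items()
            if pat.lower() in text]
```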
3.5 Verify stack compatibility
- Python-only skill for a TypeScript project → REMOVE
- iOS skill for a web-only project → REMOVE
- React skill when requirements specify Vue → REMOVE
- AWS deployment skill when requirements specify Vercel → REMOVE
3.6 Identify gaps and search for missing elements
After reviewing candidates, check if requirements mention needs not covered:
- "real-time" → search for WebSocket/SSE skills
- "i18n" → search for internationalization skills
- "HIPAA" / "PCI" → search for compliance/security skills
- "PDF generation" → search for document processing skills
- "accessibility" → search for WCAG/a11y skills
Search the index for each gap and add qualified matches.
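The gap-to-search-term mapping can be sketched as below (illustrative; `GAP_SEARCHES` is a hypothetical starter table mirroring the examples above, not an exhaustive one):

```python
# Requirement phrases → index search terms, mirroring the examples above
GAP_SEARCHES = {
    "real-time": ["websocket", "sse"],
    "i18n": ["internationalization"],
    "hipaa": ["compliance", "security"],
    "pci": ["compliance", "security"],
    "pdf generation": ["pdf", "document processing"],
    "accessibility": ["wcag", "a11y"],
}

def gap_search_terms(requirements_text: str) -> list:
    """Collect deduplicated index search terms for needs found in the requirements."""
    text = requirements_text.lower()
    terms = []
    for phrase, searches in GAP_SEARCHES.items():
        if phrase in text:
            terms.extend(t for t in searches if t not in terms)
    return terms
```

Each returned term is then fed to the multi-field index search from Phase 2.2.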
3.7 Prune redundancy
If skill A covers everything skill B does plus more, remove skill B. Example: `exhaustive-testing` subsumes `unit-testing` — keep only `exhaustive-testing`.
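Subset pruning can be sketched as a set-containment check. The capability sets here are your own judgment from reading each SKILL.md; nothing in the index provides them directly.

```python
def prune_subsets(coverage: dict) -> list:
    """Drop any skill whose capability set is a strict subset of another's.

    coverage maps skill name -> set of capabilities judged from its SKILL.md.
    """
    kept = []
    for name, caps in coverage.items():
        # caps < ocaps is Python's strict-subset test on sets
        if any(other != name and caps < ocaps
               for other, ocaps in coverage.items()):
            continue  # strictly subsumed — prune it
        kept.append(name)
    return kept
```

Skills with identical coverage both survive this check; that case still needs a manual call.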
Phase 3 Completion Checklist (ALL items must be checked before proceeding to Phase 4):
- Every candidate's SKILL.md/agent.md has been READ IN FULL (not just the binary's description)
- Every candidate evaluated: "Does this solve a problem this agent will ACTUALLY encounter?"
- Mutual exclusivity checked for ALL 11 families (JS framework, runtime, bundler, CSS, ORM, testing, state mgmt, deployment, Python web, Python test, mobile)
- Only ONE element remains from each mutually exclusive family
- Obsolescence/deprecation check completed for all candidates
- Stack compatibility verified: no cross-stack elements (Python skill for TS project, iOS for web, etc.)
- Gap analysis done: every key requirement scanned for missing coverage
- Redundancy pruning done: no strict-subset skills remain alongside their superset
- Final candidates list assembled with intended tier assignment (primary/secondary/specialized)
If ANY candidate was NOT individually read: go back and read it before proceeding.
Phase 4: Add Elements from External Sources
Elements not in the current index can be added from local paths, installed plugins, marketplace plugins, GitHub repos, network shares, or raw URLs. For each source, read and evaluate the element using the same Phase 3 criteria before adding.
See references/external-sources.md for detailed instructions and the Phase 4 Completion Checklist.
Phase 5: Cross-Type Coherence Validation
This is the most critical phase. Validate that no overlaps or conflicts exist BETWEEN types (skill<->MCP, skill<->agent, agent<->agent, MCP<->MCP, rule<->rule). Check all 13 items on the coherence checklist.
See references/cross-type-coherence.md for detailed overlap detection rules, the coherence checklist, and resolution strategies.
Phase 6: Write and Validate
6.1 Write the `.agent.toml` file
Use the template from "The .agent.toml Format" section above. Every field must be populated from the evaluation results. The `[skills.excluded]` section must document WHY each rejected candidate was excluded.
6.2 Validate
Run the validator:
uv run scripts/pss_validate_agent_toml.py <output-path> --check-index --verbose
Exit codes: 0 = valid, 1 = errors found, 2 = TOML parse error.
If validation fails, fix the errors and re-validate. Common issues:
- Missing required sections (`[agent]`, `[skills]`)
- Duplicate skill across tiers (same name in primary AND secondary)
- Tier size exceeded (primary > 7, secondary > 12, specialized > 8)
- Agent name not kebab-case
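These common issues can be pre-checked before invoking the real validator. A sketch only: `lint_profile` is a hypothetical helper operating on the parsed TOML as a dict, and the authoritative check remains `pss_validate_agent_toml.py`.

```python
import re

TIER_LIMITS = {"primary": 7, "secondary": 12, "specialized": 8}
# kebab-case rule from the format section: ^[a-z0-9][a-z0-9_-]*$
NAME_RE = re.compile(r"^[a-z0-9][a-z0-9_-]*$")

def lint_profile(profile: dict) -> list:
    """Return error strings for the common issues listed above."""
    errors = []
    for section in ("agent", "skills"):
        if section not in profile:
            errors.append(f"missing required section [{section}]")
    name = profile.get("agent", {}).get("name", "")
    if not NAME_RE.match(name):
        errors.append(f"agent name not kebab-case: {name!r}")
    skills = profile.get("skills", {})
    seen = {}
    for tier, limit in TIER_LIMITS.items():
        names = skills.get(tier, [])
        if len(names) > limit:
            errors.append(f"{tier} exceeds max {limit} ({len(names)} items)")
        for n in names:
            if n in seen:
                errors.append(f"duplicate skill {n!r} in {seen[n]} and {tier}")
            seen[n] = tier
    return errors
```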
6.3 Clean up
Delete the temporary JSON descriptor file.
Phase 6 Completion Checklist (profile is ONLY complete when ALL items are checked):
- `.agent.toml` file written to the correct output path
- `[agent]` section has `name`, `source`, `path` — all correct
- `[requirements]` section present if requirements were provided; omitted if none
- `[skills]` section: `primary` has 1-7 items, `secondary` has 0-12, `specialized` has 0-8
- `[skills.excluded]` has a comment for every rejected candidate with the rejection reason
- ALL optional sections present: `[agents]`, `[commands]`, `[rules]`, `[mcp]`, `[hooks]`, `[lsp]` (even if `recommended = []`)
- Validator run: `uv run "$CLAUDE_PLUGIN_ROOT/scripts/pss_validate_agent_toml.py" <file> --check-index --verbose`
- Validator exited with code 0 and no errors remain (if code 1: fix errors, re-validate; if code 2: fix TOML syntax, re-validate)
- Temporary descriptor file deleted
- Summary reported: X primary + Y secondary + Z specialized skills; N excluded candidates
Do NOT report success until the validator returns exit code 0.
Using the /pss-setup-agent Command
The simplest way to invoke this entire workflow:
```
/pss-setup-agent /path/to/agent.md
/pss-setup-agent /path/to/agent.md --requirements /path/to/prd.md /path/to/tech-spec.md
/pss-setup-agent plugin-name:agent-name
/pss-setup-agent /path/to/agent.md --output /custom/output.agent.toml
```
This command spawns the `pss-agent-profiler` agent, which follows the full Phase 1-6 workflow above with AI reasoning at every step.
Scoring Reference
See references/example-and-scoring.md for the scoring weight table, tier thresholds, troubleshooting guide, and a complete worked example of profiling a React frontend developer agent.
Instructions
- Phase 1 — Gather Requirements: Read the agent `.md` file; identify target domain, languages, frameworks, platforms, and constraints. Complete the Phase 1 checklist.
- Phase 2 — Search & Score: Run the Rust binary in `--agent-profile` mode to score all indexed elements. Use the multi-field index search to find additional candidates. Complete the Phase 2 checklist.
- Phase 3 — AI Post-Filtering: Apply mutual exclusivity, stack compatibility, and redundancy pruning. Remove conflicting, redundant, or off-stack elements. Complete the Phase 3 checklist.
- Phase 4 — Cross-Type Coherence: Verify skill-MCP overlap, agent-command alignment, and rule-agent compatibility. Complete the Phase 4 checklist.
- Phase 5 — TOML Assembly: Assemble the `.agent.toml` with all sections populated, tier assignments justified, and exclusion comments documented. Complete the Phase 5 checklist.
- Phase 6 — Validation & Delivery: Run `pss_validate_agent_toml.py`, fix all errors, deliver the validated file. Complete the Phase 6 checklist.
Output
The final output is a validated `.agent.toml` file written to `~/.claude/agents/<agent-name>.agent.toml`. The file conforms to the JSON Schema at `${CLAUDE_PLUGIN_ROOT}/schemas/pss-agent-toml-schema.json` and passes `pss_validate_agent_toml.py` with exit code 0.
Error Handling
- If the Rust binary is not found or not executable, abort with an explicit error message — do not fall back to manual scoring.
- If the skill index (`~/.claude/cache/skill-index.json`) does not exist, instruct the user to run `/pss-reindex-skills` first.
- If validation fails (exit code != 0), fix all errors and re-validate — do not deliver an invalid `.agent.toml`.
- If `CLAUDE_PLUGIN_ROOT` is not set, abort immediately with instructions to set it.
Examples
See references/example-and-scoring.md for a full `.agent.toml` output for a React frontend developer agent, showing all sections populated with reasoned selections and exclusion comments.
Resources
- JSON Schema: `${CLAUDE_PLUGIN_ROOT}/schemas/pss-agent-toml-schema.json`
- Validator: `${CLAUDE_PLUGIN_ROOT}/scripts/pss_validate_agent_toml.py`
- Categories: `${CLAUDE_PLUGIN_ROOT}/schemas/pss-categories.json` (16 predefined categories)
- Skill Index: `~/.claude/cache/skill-index.json`
- Rust Binary: `${CLAUDE_PLUGIN_ROOT}/rust/skill-suggester/bin/pss-<platform>`