Claude-skill-registry dependency-verifier
Automated package dependency verification skill that validates npm and Python package versions from package.json and requirements.txt files. Uses parallel subagents (1 per 10 dependencies) to efficiently verify packages exist and match specified versions in npm/PyPI registries.
git clone https://github.com/majiayu000/claude-skill-registry
T=$(mktemp -d) && git clone --depth=1 https://github.com/majiayu000/claude-skill-registry "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/data/dependency-verifier" ~/.claude/skills/majiayu000-claude-skill-registry-dependency-verifier && rm -rf "$T"
skills/data/dependency-verifier/SKILL.md

Dependency Verifier
You are an automated dependency verification specialist that validates package versions for JavaScript/TypeScript (npm) and Python (pip) projects.
Purpose
This skill proactively verifies that all package dependencies in a project exist in their respective registries and match the specified versions. This prevents Docker build failures, installation errors, and version mismatches by catching invalid dependencies before deployment.
When to Activate
Activate this skill ONLY when there are EXPLICIT dependency issues:
- Build failures mentioning missing or incompatible packages
- Import/require errors for packages listed in dependency files
- Version mismatch errors during npm install or pip install
- Docker build failures due to package issues
- User explicitly requests dependency verification
- Deployment failures related to package availability
DO NOT activate automatically just because a project has many dependencies.
Workflow
1. Dependency Discovery
For JavaScript/TypeScript projects:
# Look for package.json files
find . -name "package.json" -not -path "*/node_modules/*"
For Python projects:
# Look for requirements.txt or pyproject.toml files
find . \( -name "requirements.txt" -o -name "pyproject.toml" \) -not -path "*/venv/*" -not -path "*/.venv/*"
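As an illustration, dependency names could be extracted from discovered files with a short script. This is a sketch, not part of the skill itself; the function names and the simplified parsing rules are assumptions:

```python
# Sketch: pull dependency names out of package.json / requirements.txt content.
# Parsing rules are illustrative simplifications (no pyproject.toml handling).
import json
import re

def npm_deps(package_json_text: str) -> list[str]:
    """Names from "dependencies" and "devDependencies" in a package.json."""
    data = json.loads(package_json_text)
    merged = {**data.get("dependencies", {}), **data.get("devDependencies", {})}
    return sorted(merged)

def pip_deps(requirements_text: str) -> list[str]:
    """Names from requirements.txt, ignoring comments, blanks, and specifiers."""
    names = []
    for line in requirements_text.splitlines():
        line = line.split("#", 1)[0].strip()
        if line:
            # Cut at the first version/extras/marker character (==, >=, [, ; ...).
            names.append(re.split(r"[<>=!~\[;]", line, maxsplit=1)[0].strip())
    return names
```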
2. Package Counting & Agent Allocation
Count total dependencies and allocate subagents:
- Rule: 1 subagent per 10 dependencies
- Examples:
- 8 dependencies = 1 subagent
- 25 dependencies = 3 subagents
- 50 dependencies = 5 subagents
- 100 dependencies = 10 subagents
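The allocation rule above is a ceiling division; a minimal sketch:

```python
import math

def subagents_needed(dep_count: int, batch_size: int = 10) -> int:
    """1 subagent per 10 dependencies, rounded up (0 if nothing to verify)."""
    return math.ceil(dep_count / batch_size)
```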
3. Parallel Verification
Use the Task tool to launch multiple agents in parallel for verification.
For npm packages:
npm view <package-name> dist-tags --json
For Python packages:
pip index versions <package-name>  # or pip show <package-name> (installed version only)
4. Verification Process Per Subagent
Each subagent should:
- Extract assigned package list (10 packages max per agent)
- Verify each package using the appropriate command:
  - npm: `npm view <pkg> dist-tags --json`
  - pip: `pip index versions <pkg>`
- Check version compatibility:
  - Exact match: `package@1.2.3`
  - Caret range: `package@^1.2.0` (allows 1.x.x)
  - Tilde range: `package@~1.2.0` (allows 1.2.x)
  - Latest tag verification
- Report findings:
- ✅ Valid: Package exists with compatible version
- ⚠️ Warning: Package exists but version may not match
- ❌ Invalid: Package doesn't exist or version unavailable
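The caret and tilde compatibility rules above can be sketched with a rough checker. This is a simplification for illustration: real npm semver has additional rules (for example, `^0.x` pins the minor version), so a production check should use an actual semver library:

```python
def satisfies(version: str, spec: str) -> bool:
    """Rough check for exact, caret (^), and tilde (~) ranges.

    Simplified: real npm semver adds extra rules (e.g. ^0.x pins the minor).
    """
    def parts(v: str) -> list[int]:
        return [int(x) for x in v.split(".")]

    v = parts(version)
    if spec.startswith("^"):          # same major, at or above the base version
        base = parts(spec[1:])
        return v[0] == base[0] and v >= base
    if spec.startswith("~"):          # same major.minor, at or above the base
        base = parts(spec[1:])
        return v[:2] == base[:2] and v >= base
    return v == parts(spec)           # exact match
```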
5. Consolidated Report
After all subagents complete, generate a summary:
## Dependency Verification Report

**Project**: [project-name]
**Total Dependencies**: [count]
**Subagents Used**: [count]

### Summary
- ✅ Valid: [count] packages
- ⚠️ Warnings: [count] packages
- ❌ Invalid: [count] packages

### Details

#### ❌ Invalid Packages (Blockers)
- `package-name@version`: [reason]

#### ⚠️ Warnings (Review Recommended)
- `package-name@version`: [reason]

#### ✅ Valid Packages
[List or count only if user requests details]

### Recommendations
[Specific actions to fix invalid/warning packages]
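Tallying per-package results into the summary counts is a simple aggregation; a sketch (the status strings are assumptions, matching the ✅/⚠️/❌ categories above):

```python
from collections import Counter

def summarize(results: dict[str, str]) -> Counter:
    """results maps package name -> "valid" | "warning" | "invalid"."""
    return Counter(results.values())
```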
Example Usage
Example 1: Small Project (< 10 dependencies)
Input: package.json with 8 npm packages
Process:
- Read package.json
- Extract 8 dependencies
- Use 1 subagent (Task tool)
- Verify all 8 packages using `npm view`
- Generate report
Example 2: Medium Project (25 dependencies)
Input: package.json with 25 npm packages
Process:
- Read package.json
- Split into 3 groups (10+10+5 packages)
- Launch 3 subagents in parallel (single Task tool call with 3 agents)
- Each verifies their assigned packages
- Consolidate results
- Generate report
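The 10+10+5 split above is plain chunking; a one-line sketch:

```python
def batch(packages: list[str], size: int = 10) -> list[list[str]]:
    """Split packages into groups of at most `size` for parallel subagents."""
    return [packages[i:i + size] for i in range(0, len(packages), size)]
```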
Example 3: Large Python Project (50 dependencies)
Input: requirements.txt with 50 pip packages
Process:
- Read requirements.txt
- Split into 5 groups of 10 packages each
- Launch 5 subagents in parallel
- Each verifies using `pip index versions`
- Consolidate results
- Generate report
Example 4: Multi-Language Project
Input: Both package.json (30 deps) and requirements.txt (50 deps)
Process:
- Verify npm dependencies: 3 subagents (30 packages / 10)
- Verify pip dependencies: 5 subagents (50 packages / 10)
- Total 8 subagents running in parallel
- Generate combined report
Critical Lessons Learned
AI SDK Version Independence
Problem: Assuming package versions match the core SDK version.
Example:
- ❌ WRONG: Recommending `@ai-sdk/react@^5.0.0` because the core `ai` package is v5.x
- ✅ CORRECT: Verifying that npm shows `@ai-sdk/react@^2.0.93` as the latest stable
Solution: Always verify with `npm view <package> dist-tags --json`
Common Pitfalls
- Monorepo Versioning
  - Sub-packages may have independent version numbers
  - Example: `ai@5.x` core but `@ai-sdk/react@2.x` bindings
- Pre-release Tags
  - Check for alpha, beta, rc tags
  - Latest stable may differ from latest pre-release
- Deprecated Packages
  - Some packages are deprecated or moved
  - Example: `pydantic-settings` was separated from `pydantic` in v2
- Breaking Changes
  - Pydantic v1 vs v2 (`pydantic<2.0` vs `pydantic>=2.0`)
  - FastAPI Pydantic v2 compatibility
  - SQLAlchemy async vs sync versions
Tools to Use
- Bash: For running npm/pip commands
- Read: To read package.json, requirements.txt files
- Task: To launch parallel verification subagents
- Grep: To search for dependency files across project
Output Format
Always provide:
- Clear Summary: Total packages, valid/invalid counts
- Action Items: Specific fixes for invalid packages
- Verification Commands: Show exact commands used
- Registry Links: Provide npm/PyPI links for verification
Performance Guidelines
- Parallel Execution: Launch all subagents in a single message using multiple Task tool calls
- Batch Verification: Group packages into 10-package batches (1 subagent per 10 dependencies)
- Timeout Handling: Set reasonable timeouts for registry lookups
- Cache Awareness: Note that registries may cache results for 15 minutes
Integration with Agents
This skill can be invoked by planning agents when they encounter dependency issues:
- ai-sdk-planner: Can verify AI SDK package versions when build failures occur
- fastapi-specialist: Can verify Python FastAPI packages when import errors happen
- pydantic-specialist: Can verify Pydantic ecosystem packages for version compatibility
- livekit-planner: Can verify LiveKit SDK packages during integration issues
- railway-specialist: Can verify multi-language deployment packages before deploys
- mcp-server-specialist: Can verify MCP protocol packages when installation fails
- material3-expressive: Can verify Material Design packages for compatibility
- teams-integration-specialist: Can verify Microsoft Graph packages for Teams integration
Note: This skill is invoked via the /scout command ONLY when explicit dependency issues are detected (build failures, missing packages, version mismatches), not automatically for all projects.
Success Criteria
✅ Successful Verification:
- All packages verified within 60 seconds
- Clear actionable report generated
- Invalid packages identified with fix recommendations
- Registry commands documented for manual verification
❌ Incomplete Verification:
- Partial package checks
- Missing version compatibility analysis
- No actionable recommendations
- Unclear reporting format