Awesome-omni-skill enterprise-code-review
Performs comprehensive enterprise-grade critical code review on project folders or GitHub repositories, focusing on quality, security, performance, maintainability, and best practices
git clone https://github.com/diegosouzapw/awesome-omni-skill
T=$(mktemp -d) && git clone --depth=1 https://github.com/diegosouzapw/awesome-omni-skill "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/testing-security/enterprise-code-review" ~/.claude/skills/diegosouzapw-awesome-omni-skill-enterprise-code-review && rm -rf "$T"
skills/testing-security/enterprise-code-review/SKILL.md
Enterprise Code Review Skill
Purpose
This skill provides comprehensive, enterprise-grade critical code review following industry best practices. It analyzes codebases systematically across multiple dimensions: architecture, security, performance, maintainability, testing, and documentation.
When to Use
- Reviewing pull requests or merge requests
- Conducting pre-release code audits
- Evaluating code quality for legacy systems
- Onboarding code quality assessments
- Security and compliance reviews
- Architecture and design pattern validation
Review Philosophy
Critical but Constructive
- Identify real issues, not stylistic preferences
- Provide actionable feedback with specific examples
- Suggest concrete improvements with rationale
- Prioritize issues by severity (Critical, High, Medium, Low)
Comprehensive Coverage
- Focus on what matters: bugs, security, performance, maintainability
- Delegate formatting/style to automated tools
- Consider scalability and future maintenance
- Evaluate test coverage and quality
Efficiency Standards
- Optimal review: 200-400 lines of code at a time
- Break large changes into focused reviews
- Review at maximum 500 LOC/hour for thoroughness
- Prioritize high-risk and complex code sections
Review Methodology
Phase 1: Initial Assessment
1. Understand Context
- Read PR/commit description and linked issues
- Understand the feature/fix intent
- Review related documentation
- Identify affected systems and dependencies
2. Scope Analysis
- Count lines of code changed
- Identify file types and languages
- Assess complexity level
- Plan review approach (if >400 LOC, break into sections)
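The scope-analysis step can be scripted. A minimal sketch (hypothetical `plan_review` helper) that parses `git diff --numstat` output, where each line is tab-separated added/deleted/path and binary files show `-`, and flags when a change exceeds the 400 LOC threshold:

```python
def plan_review(numstat: str, max_loc: int = 400) -> dict:
    """Summarize `git diff --numstat` output and decide whether to split the review."""
    files, total = [], 0
    for line in numstat.strip().splitlines():
        added, deleted, path = line.split("\t")
        # Binary files report "-" instead of a line count.
        loc = (0 if added == "-" else int(added)) + (0 if deleted == "-" else int(deleted))
        files.append((path, loc))
        total += loc
    return {
        "files": len(files),
        "total_loc": total,
        "split_review": total > max_loc,                          # >400 LOC: break into sections
        "hotspots": sorted(files, key=lambda f: f[1], reverse=True)[:3],
    }

report = plan_review("120\t30\tsrc/auth.py\n10\t2\tREADME.md")
print(report["total_loc"], report["split_review"])  # → 162 False
```

Feed it the real diff with `git diff --numstat main...HEAD` and review the `hotspots` files first.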
Phase 2: Systematic Review
A. Code Quality & Best Practices
Review for:
- Readability: Clear naming, logical structure, appropriate abstraction
- Maintainability: Modular design, DRY principle, no hard-coded values
- Consistency: Follows project conventions and patterns
- Complexity: Cyclomatic complexity, nested depth, function length
- Error Handling: Comprehensive exception handling, graceful degradation
- Logging: Appropriate logging levels and useful error messages
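To make "nested depth" feedback concrete, a hypothetical before/after showing guard clauses flattening a three-level conditional without changing behavior:

```python
# Flagged in review: three levels of nesting obscure the happy path.
def apply_discount_nested(user, total):
    if user is not None:
        if user.get("active"):
            if total > 100:
                return total * 0.9
    return total

# Suggested rewrite: guard clauses handle the exceptional cases first,
# leaving the happy path flat and readable.
def apply_discount(user, total):
    if user is None or not user.get("active"):
        return total
    if total <= 100:
        return total
    return total * 0.9
```

A review comment pairing the two versions explains both the problem and the fix in one place.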
B. Security Review
Check for:
- Input Validation: All user inputs sanitized and validated
- Authentication/Authorization: Proper access controls implemented
- Data Protection: Sensitive data encrypted, no credentials in code
- Injection Vulnerabilities: SQL, XSS, command injection risks
- Dependencies: Known vulnerabilities in third-party libraries
- API Security: Rate limiting, CORS policies, secure headers
- Secrets Management: No API keys, tokens, or passwords in source
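The injection check can be demonstrated with Python's built-in `sqlite3`: string concatenation lets attacker input rewrite the query, while placeholders keep it inert. A minimal sketch (hypothetical table and input):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

# Flag this pattern: concatenation lets "' OR '1'='1" become part of the SQL.
# query = "SELECT name FROM users WHERE role = '" + user_input + "'"

# Recommend this: a placeholder keeps the input as data, never as SQL.
user_input = "user' OR '1'='1"
rows = conn.execute("SELECT name FROM users WHERE role = ?", (user_input,)).fetchall()
print(rows)  # → [] — the malicious string matches no role
```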
C. Performance Analysis
Evaluate:
- Algorithmic Efficiency: Optimal time/space complexity
- Database Operations: N+1 queries, missing indexes, inefficient joins
- Caching Strategy: Appropriate use of caching mechanisms
- Resource Management: Proper connection pooling, memory leaks
- Async Operations: Non-blocking I/O where appropriate
- Scalability: Can handle increased load and data volume
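The N+1 item is easiest to flag with a query counter. A self-contained sketch (in-memory stand-ins for the database; all names hypothetical) contrasting per-row lookups with a single batched fetch:

```python
AUTHORS = {1: "alice", 2: "bob"}          # stand-in for an authors table
POSTS = [{"id": 10, "author_id": 1}, {"id": 11, "author_id": 2}, {"id": 12, "author_id": 1}]
queries = 0

def fetch_author(author_id):
    global queries
    queries += 1                          # one round trip per call
    return AUTHORS[author_id]

def fetch_authors(author_ids):
    global queries
    queries += 1                          # a single IN (...) round trip
    return {a: AUTHORS[a] for a in author_ids}

# N+1 pattern: one query per post.
queries = 0
names_n_plus_1 = [fetch_author(p["author_id"]) for p in POSTS]
n_plus_1_queries = queries

# Batched pattern: one query for all authors, then in-memory joins.
queries = 0
by_id = fetch_authors({p["author_id"] for p in POSTS})
names_batched = [by_id[p["author_id"]] for p in POSTS]
batched_queries = queries

print(n_plus_1_queries, batched_queries)  # → 3 1
```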
D. Architecture & Design
Assess:
- Design Patterns: Appropriate pattern usage and implementation
- SOLID Principles: Adherence to object-oriented design principles
- Separation of Concerns: Clear boundaries between layers
- API Design: RESTful principles, consistent endpoints, versioning
- Data Modeling: Normalized schema, appropriate relationships
- Dependency Management: Loose coupling, dependency injection
E. Testing & Quality Assurance
Verify:
- Test Coverage: Minimum 80% coverage for critical paths
- Test Quality: Unit, integration, and edge case coverage
- Test Maintainability: Clear test names, isolated tests, no flaky tests
- Mocking Strategy: Appropriate use of mocks and stubs
- Assertions: Meaningful and comprehensive assertions
- Test Data: Realistic test scenarios and boundary conditions
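As a concrete illustration of boundary-condition coverage, a hypothetical `clamp` function together with the assertions a reviewer should expect to see:

```python
def clamp(value: float, low: float, high: float) -> float:
    """Clamp value into [low, high]; raises ValueError if the range is inverted."""
    if low > high:
        raise ValueError("low must not exceed high")
    return max(low, min(value, high))

# Boundary conditions worth checking in review:
assert clamp(5, 0, 10) == 5        # typical case
assert clamp(-1, 0, 10) == 0       # below the lower bound
assert clamp(10, 0, 10) == 10      # exactly on a boundary
try:
    clamp(1, 10, 0)                # inverted range must fail loudly
except ValueError:
    pass
```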
F. Documentation & Comments
Check:
- Code Comments: Explain WHY, not WHAT (code should be self-documenting)
- Function/Method Docs: Purpose, parameters, return values, exceptions
- API Documentation: Complete endpoint documentation
- README Updates: Installation, configuration, usage instructions
- Changelog: User-facing release notes
- Architecture Docs: High-level design decisions documented
Phase 3: Cross-Cutting Concerns
Backwards Compatibility
- Breaking changes identified and documented
- Migration paths provided
- Deprecation warnings where appropriate
- Version compatibility maintained
Deployment & Operations
- Configuration changes documented
- Database migrations included and tested
- Environment variable requirements specified
- Rollback procedures considered
Observability
- Appropriate metrics and monitoring
- Tracing for distributed systems
- Health check endpoints
- Diagnostic logging for troubleshooting

Phase 4: Synthesize Findings
Structure feedback as:
# Code Review Summary

## Overall Assessment
[High-level summary: Approve, Approve with minor changes, Request changes, Block]

## Critical Issues (Must Fix Before Merge)
- [Issue 1 with specific location and recommendation]
- [Issue 2 with specific location and recommendation]

## High Priority (Should Fix)
- [Issue with rationale and suggested approach]

## Medium Priority (Consider Fixing)
- [Improvement suggestion with benefits]

## Low Priority (Nice to Have)
- [Enhancement idea for future consideration]

## Positive Highlights
- [What was done well - be specific]
- [Good practices to recognize]

## Testing Notes
- Test coverage: [X]%
- Edge cases covered: [Yes/No/Partial]
- Integration tests: [Present/Missing]

## Security Assessment
- Vulnerabilities found: [None/List]
- Security best practices: [Followed/Gaps identified]

## Performance Impact
- Expected performance: [Improved/Neutral/Degraded]
- Scalability concerns: [None/List]

## Documentation Status
- Code documentation: [Complete/Needs improvement]
- User-facing docs: [Updated/Missing]
Review Checklists by Language
Python
- Type hints used for function signatures
- PEP 8 compliance (via automated tools)
- Virtual environment dependencies updated
- Context managers for resource handling
- List/dict comprehensions over loops (where appropriate)
- Async/await patterns for I/O operations
- Exception handling with specific exception types
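Several of the Python checklist items can be shown in one function. A minimal sketch (hypothetical `read_config` for `KEY=VALUE` files) combining type hints, a context manager, and a specific exception type:

```python
from pathlib import Path

def read_config(path: Path) -> dict[str, str]:
    """Parse KEY=VALUE lines; type-hinted, context-managed, specific exceptions."""
    config: dict[str, str] = {}
    try:
        with path.open() as fh:              # context manager releases the handle
            for line in fh:
                if "=" in line:
                    key, _, value = line.partition("=")
                    config[key.strip()] = value.strip()
    except FileNotFoundError:                # specific type, never a bare `except`
        return {}
    return config
```

In review, the bare-`except` and manual-`open()` variants of this function are exactly what the checklist exists to catch.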
JavaScript/TypeScript
- TypeScript types defined (no 'any')
- Promises handled with async/await or .catch()
- ESLint rules followed (TSLint is deprecated; TypeScript projects should use typescript-eslint)
- Immutable patterns for state management
- Event listeners properly cleaned up
- Bundle size impact considered
- Browser compatibility verified
Java
- Exception handling with try-with-resources
- Thread safety for concurrent code
- Memory management (no potential leaks)
- Design patterns appropriately applied
- Dependency injection used
- Unit tests with JUnit/TestNG
- Lombok annotations used appropriately
Go
- Error handling (never ignore errors)
- Goroutines properly managed
- Context used for cancellation
- Defer for cleanup operations
- Interfaces defined appropriately
- Race conditions checked (go test -race)
- Channel operations don't deadlock
C#/.NET
- Async methods end with Async suffix
- IDisposable implemented for resources
- LINQ used appropriately
- Nullable reference types handled
- Exception handling with specific types
- Dependency injection configured
- Unit tests with xUnit/NUnit
Common Anti-Patterns to Flag
Code Smells
- God objects (classes doing too much)
- Long methods (>50 lines)
- Deeply nested conditionals (>3 levels)
- Duplicate code blocks
- Magic numbers without constants
- Primitive obsession
- Feature envy (method using another class extensively)
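A before/after sketch (hypothetical shipping rule) showing how named constants replace the magic numbers flagged above:

```python
# Flagged: magic numbers — what do 0.2 and 50 mean, and where else are they repeated?
def shipping_naive(order_total):
    if order_total > 50:
        return 0
    return order_total * 0.2

# Recommended: named constants document intent and centralize future changes.
FREE_SHIPPING_THRESHOLD = 50      # hypothetical business rule
SHIPPING_RATE = 0.2

def shipping(order_total: float) -> float:
    if order_total > FREE_SHIPPING_THRESHOLD:
        return 0.0
    return order_total * SHIPPING_RATE
```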
Security Anti-Patterns
- Hardcoded credentials or secrets
- SQL string concatenation
- Unvalidated user input
- Missing CSRF protection
- Insecure deserialization
- Weak cryptography (MD5, SHA1)
- Overly permissive access controls
Performance Anti-Patterns
- N+1 database queries
- Missing database indexes
- Synchronous I/O in hot paths
- Memory leaks (unreleased resources)
- Inefficient string concatenation
- Redundant computations
- Unbounded collections
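The string-concatenation item, illustrated: `+=` in a loop re-copies the accumulated string on every iteration (quadratic overall), while collecting pieces and joining once is linear. A minimal sketch:

```python
# Flagged: += on strings copies the whole accumulated result each iteration.
def build_csv_slow(rows):
    out = ""
    for row in rows:
        out += ",".join(row) + "\n"
    return out

# Recommended: build the pieces, then join once.
def build_csv(rows):
    return "\n".join(",".join(row) for row in rows) + "\n"
```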
Tools Integration
When reviewing code, leverage these automated tools:
Static Analysis
- Python: pylint, mypy, bandit (security)
- JavaScript: ESLint, TypeScript compiler
- Java: SonarQube, SpotBugs, PMD
- Go: go vet, staticcheck, golangci-lint (golint is deprecated)
- C#: Roslyn analyzers, SonarLint
Security Scanning
- SAST: Semgrep, CodeQL, Checkmarx
- SCA: Snyk, Dependabot, OWASP Dependency-Check
- Secrets: TruffleHog, GitGuardian, git-secrets
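Secret scanning can be approximated with a few regexes. A deliberately simplified sketch (two hypothetical patterns; real scanners such as TruffleHog or GitGuardian ship far larger rule sets plus entropy checks):

```python
import re

# Simplified patterns for illustration only.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),  # shape of an AWS access key ID
    # key/password assigned a double-quoted literal of 8+ characters
    re.compile(r'(?i)(api[_-]?key|password)\s*[:=]\s*"[^"]{8,}"'),
]

def find_secrets(text: str) -> list[str]:
    """Return matched substrings so a reviewer can flag the exact lines."""
    hits = []
    for pattern in SECRET_PATTERNS:
        hits.extend(m.group(0) for m in pattern.finditer(text))
    return hits

print(find_secrets('password = "hunter2hunter2"'))
```

Treat any hit as a blocker: rotate the credential and move it to a secrets manager before merge.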
Code Quality Metrics
- Coverage: JaCoCo, Coverage.py, Istanbul
- Complexity: SonarQube, Code Climate
- Duplication: CPD, SonarQube
GitHub/GitLab Integration
When reviewing PRs/MRs:
1. Check CI Status First
- All tests passing
- Security scans clear
- Code coverage meets threshold
- Build successful
2. Review Commit History
- Commits are atomic and logical
- Commit messages are descriptive
- No merge commits (prefer rebase)
3. Provide Structured Feedback
- Use "Request changes" for blocking issues
- Use "Comment" for non-blocking suggestions
- Use "Approve" when ready to merge
- Add inline comments at specific lines
4. Review Conversation Resolution
- All review comments addressed
- Questions answered
- Requested changes implemented
Special Case Reviews
Legacy Code Refactoring
- Ensure test coverage exists before refactoring
- Changes don't alter behavior (unless intended)
- Refactoring is incremental
- Risk of regression assessed
Third-Party Integration
- API versioning strategy
- Rate limiting and retries implemented
- Fallback behavior defined
- Monitoring for API health
Database Schema Changes
- Migrations are reversible
- Backward compatibility maintained
- Indexes added for new queries
- Performance impact assessed with EXPLAIN
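Assessing query impact with EXPLAIN can be demonstrated using `sqlite3`'s `EXPLAIN QUERY PLAN` (hypothetical table; the exact plan wording varies slightly across SQLite versions):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")

def plan(sql: str) -> str:
    # EXPLAIN QUERY PLAN rows are (id, parent, notused, detail); keep the detail text.
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

before = plan("SELECT * FROM orders WHERE customer_id = 7")   # full table scan
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
after = plan("SELECT * FROM orders WHERE customer_id = 7")    # index search

print(before)
print(after)
```

A migration that adds queries without a matching index shows up as a lingering SCAN in the "after" plan.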
Microservices Changes
- Service boundaries respected
- Contract testing in place
- Circuit breakers implemented
- Distributed tracing configured
Review Workflow
For Project Folders
- Scan directory structure to understand architecture
- Identify entry points and critical paths
- Read configuration files first
- Review in order: models → services → controllers → tests
- Check for missing tests or documentation
For GitHub Repositories
- Clone repository or access via GitHub API
- Checkout the specific branch/PR
- Review PR description and linked issues
- Examine changed files in diff view
- Run automated checks locally if needed
- Provide structured feedback in PR comments
Review Prioritization
When dealing with large changes:
- Critical path first: Core business logic
- Security-sensitive code: Authentication, authorization, data handling
- Public APIs: Interfaces exposed to users/systems
- Database changes: Schema migrations, queries
- Configuration changes: Infrastructure, deployment
- Tests: Verify coverage and quality
- Documentation: README, API docs, comments
Communication Best Practices
Constructive Language
- ✅ "Consider using X pattern here for better maintainability"
- ❌ "This is wrong"
- ✅ "This could introduce a race condition if..."
- ❌ "You don't understand concurrency"
- ✅ "Adding error handling here would make this more robust"
- ❌ "Why didn't you handle errors?"
Actionable Feedback
- Be specific about location and issue
- Explain WHY something is a problem
- Suggest concrete alternatives
- Provide examples or references
- Link to documentation or style guides
Balanced Perspective
- Acknowledge good practices
- Separate blocking vs. non-blocking issues
- Consider trade-offs and context
- Recognize learning opportunities
Output Format
Always structure your review output as:
# Code Review: [Project/PR Name]

## Executive Summary
[2-3 sentence overview of changes and overall quality]

## Recommendation
[ ] ✅ Approve (Ready to merge)
[ ] ⚠️ Approve with minor suggestions (Non-blocking)
[ ] 🔴 Request changes (Blocking issues found)
[ ] ⛔ Block (Critical security/quality issues)

## Metrics
- Files changed: [X]
- Lines added: [X]
- Lines deleted: [X]
- Test coverage: [X]%
- Complexity score: [X]

***

## Critical Issues (Must Fix) 🔴

### 1. [Issue Title]
**Location**: `filename.ext:line`
**Severity**: Critical
**Issue**: [Detailed description]
**Impact**: [Security/Performance/Correctness impact]
**Recommendation**: [Specific fix with code example]

***

## High Priority (Should Fix) ⚠️
[Similar structure]

***

## Medium Priority (Consider) 💡
[Similar structure]

***

## Low Priority (Nice to Have) 📝
[Similar structure]

***

## Positive Highlights ⭐
- [Specific good practice 1]
- [Specific good practice 2]

***

## Testing Assessment
**Coverage**: [X]%
**Unit Tests**: [Count and quality assessment]
**Integration Tests**: [Count and quality assessment]
**Edge Cases**: [Covered/Missing]

**Missing Test Coverage**:
- [Specific scenario 1]
- [Specific scenario 2]

***

## Security Assessment 🔒
**Vulnerabilities**: [None/Count]
**Security Checklist**:
- [ ] Input validation
- [ ] Authentication/Authorization
- [ ] Data encryption
- [ ] Secrets management
- [ ] Dependency vulnerabilities

[Details of any issues]

***

## Performance Analysis ⚡
**Expected Impact**: [Positive/Neutral/Negative]
**Concerns**:
- [Specific concern with evidence]
**Recommendations**:
- [Performance improvement suggestion]

***

## Documentation Status 📚
- [ ] Code comments (WHY not WHAT)
- [ ] Function/method documentation
- [ ] README updated
- [ ] API documentation
- [ ] Changelog updated

***

## Additional Notes
[Any context-specific observations, architectural discussions, or follow-up items]
Self-Improvement
After each review:
- Note any missed issues that were found later
- Track review time vs code quality
- Refine checklists based on common findings
- Update language-specific checks based on evolving best practices
Limitations & Escalation
When to seek human expert review:
- Novel architectural patterns
- Complex distributed systems design
- Regulatory compliance requirements (HIPAA, GDPR, PCI)
- Cryptographic implementations
- Real-time system design
- Safety-critical code (medical, aviation, automotive)
Acknowledge uncertainty:
- Flag areas requiring domain expertise
- Note when trade-offs are context-dependent
- Suggest additional review by specialists
References
- Google Engineering Practices: Code Review Guidelines
- OWASP Top 10 Security Risks
- CERT Secure Coding Standards
- Martin Fowler's Refactoring Catalog
- Clean Code principles (Robert C. Martin)
- Effective Code Reviews (Best Practices 2025+)
Usage Instructions
For Claude.ai Desktop/Web
- Save this file as SKILL.md in a folder named enterprise-code-review
- Go to Settings > Skills
- Add a custom skill by selecting the folder
- Enable code execution in settings
For Claude Code
- Create skills/enterprise-code-review/ in your project root
- Place this SKILL.md file there
- Claude will automatically detect and load the skill
Invoking the Skill
Simply ask Claude to:
- "Review this code using enterprise-code-review"
- "Perform a critical code review on [file/folder/repo]"
- "Analyze this PR following enterprise standards"
The skill will trigger automatically when code review is mentioned in context.
Version: 1.0.0
Last Updated: January 2026
Maintained by: Enterprise Architecture Team