# Install

## Source

Clone the upstream repo:

```sh
git clone https://github.com/mdbabumiamssm/LLMs-Universal-Life-Science-and-Clinical-Skills-
```

## Claude Code

Install into `~/.claude/skills/`:

```sh
T=$(mktemp -d) && git clone --depth=1 https://github.com/mdbabumiamssm/LLMs-Universal-Life-Science-and-Clinical-Skills- "$T" && mkdir -p ~/.claude/skills && cp -r "$T/Skills/External_Collections/General_Productivity" ~/.claude/skills/mdbabumiamssm-llms-universal-life-science-and-clinical-skills-general-productivi && rm -rf "$T"
```

Manifest: `Skills/External_Collections/General_Productivity/SKILL.md`. Its source content follows.
<!--
# COPYRIGHT NOTICE
# This file is part of the "Universal Biomedical Skills" project.
# Copyright (c) 2026 MD BABU MIA, PhD <md.babu.mia@mssm.edu>
# All Rights Reserved.
#
# This code is proprietary and confidential.
# Unauthorized copying of this file, via any medium is strictly prohibited.
#
# Provenance: Authenticated by MD BABU MIA
-->
---
name: 'code-reviewer'
description: 'Provides comprehensive code review feedback based on best practices, style guides, and potential bug detection. Use when the user requests a code review, asks for improvements to code, or needs to ensure code quality.'
measurable_outcome: Execute skill workflow successfully with valid output within 15 minutes.
allowed-tools:
  - read_file
  - run_shell_command
---
# Code Review Skill

This skill performs thorough code reviews, focusing on readability, maintainability, performance, security, and adherence to project-specific coding standards.
## When to Use This Skill
- When a user explicitly asks for a "code review" of a file or set of files.
- When a user asks to "improve the quality" or "refactor" a piece of code.
- When a user submits code and asks for "feedback" or "suggestions".
## Core Capabilities
- Syntax and Style Check: Verify adherence to established coding standards (e.g., PEP 8 for Python, ESLint rules for JavaScript).
- Best Practices: Identify deviations from common best practices for the given language/framework.
- Potential Bugs/Errors: Highlight common pitfalls, edge cases, or logical errors.
- Performance Optimization: Suggest areas where code could be made more efficient.
- Security Vulnerabilities: Point out potential security risks.
- Readability and Maintainability: Provide feedback on code clarity, comments, variable naming, and overall structure.
- Testability: Assess if the code is easily testable and suggest improvements.
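As an illustration of the "Potential Bugs" category, a finding the reviewer might produce is Python's mutable-default-argument pitfall. The snippet below is a hypothetical example written for this document, not code from the repository:

```python
# Flagged by review: a mutable default argument is created once and
# shared across all calls, so the list "leaks" between invocations.
def append_item_buggy(item, items=[]):
    items.append(item)
    return items

# Suggested fix: use None as a sentinel and build a fresh list per call.
def append_item_fixed(item, items=None):
    if items is None:
        items = []
    items.append(item)
    return items

print(append_item_buggy(1))  # [1]
print(append_item_buggy(2))  # [1, 2] -- state carried over from the first call
print(append_item_fixed(1))  # [1]
print(append_item_fixed(2))  # [1]
```

A good review entry would show both versions, as here, and explain *why* the fix is needed rather than only pointing at the line.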
## Workflow

- Identify Scope: Determine which files or code snippets are part of the review request.
- Read Code: Use `read_file` to access the content of the specified files.
- Analyze:
  - Apply language-specific linting/static analysis tools if available (e.g., `pylint`, `flake8`, `eslint`).
  - Perform a semantic analysis based on the description and context.
  - Cross-reference with project-specific style guides or documentation if linked in `references/`.
- Generate Feedback:
  - Structure feedback clearly, categorizing by type (e.g., "Style", "Potential Bug", "Suggestion").
  - Provide specific line numbers or code snippets for each piece of feedback.
  - Explain why a change is suggested and, if possible, offer a concrete example of how to fix it.
  - Prioritize critical issues (bugs, security) over stylistic suggestions.
- Present Review: Output the comprehensive review to the user.
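The "Generate Feedback" step could be sketched as follows. This is a minimal illustration under stated assumptions: the `Finding` record, the severity ordering, and `render_review` are names invented for this sketch, not part of the skill itself:

```python
from dataclasses import dataclass

# Assumed priority order: critical issues (security, bugs) before style nits.
SEVERITY_ORDER = {"Security": 0, "Potential Bug": 1, "Performance": 2,
                  "Suggestion": 3, "Style": 4}

@dataclass
class Finding:
    category: str   # e.g. "Style", "Potential Bug", "Security"
    line: int       # 1-based line number in the reviewed file
    message: str

def render_review(findings):
    """Render findings as a markdown list, most critical category first."""
    ordered = sorted(findings,
                     key=lambda f: (SEVERITY_ORDER.get(f.category, 99), f.line))
    return "\n".join(f"- **{f.category}** (line {f.line}): {f.message}"
                     for f in ordered)

report = render_review([
    Finding("Style", 3, "Variable name `x` is not descriptive."),
    Finding("Security", 10, "User input is passed to `eval()`."),
])
print(report)  # the Security finding is listed before the Style finding
```

Sorting by a `(severity, line)` key keeps the output deterministic and puts the issues the user must act on at the top of the review.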
## Example Usage

User Prompt: "Please review `src/main.py` for any issues."

Agent Action:

- `read_file("src/main.py")`
- Run `pylint src/main.py` (if configured).
- Analyze code content.
- Generate a markdown-formatted review with findings.
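A small helper like the one below could turn `pylint`'s default `path:line:col: CODE: message` text output into structured findings for the review. This is a sketch; the sample line is hand-written to mimic pylint's default format rather than captured from a real run:

```python
import re

# Matches pylint's default text output: "path:line:col: CODE: message".
PYLINT_LINE = re.compile(
    r"^(?P<path>[^:]+):(?P<line>\d+):(?P<col>\d+): (?P<code>[A-Z]\d+): (?P<msg>.*)$"
)

def parse_pylint_output(text):
    """Return (path, line, code, message) tuples, skipping non-matching lines."""
    findings = []
    for raw in text.splitlines():
        m = PYLINT_LINE.match(raw)
        if m:
            findings.append((m["path"], int(m["line"]), m["code"], m["msg"]))
    return findings

# Hand-written sample mimicking a pylint message line.
sample = "src/main.py:10:0: C0114: Missing module docstring (missing-module-docstring)"
print(parse_pylint_output(sample))
```

Note the regex assumes colon-free paths (no Windows drive letters); header lines such as `************* Module main` are simply skipped.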