grant-proposal-assistant
Grant proposal writing assistant for NIH (R01/R21), NSF, and other mainstream funding agencies
install
source · Clone the upstream repo
git clone https://github.com/openclaw/skills
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/openclaw/skills "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/aipoch-ai/grant-proposal-assistant" ~/.claude/skills/openclaw-skills-grant-proposal-assistant && rm -rf "$T"
OpenClaw · Install into ~/.openclaw/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/openclaw/skills "$T" && mkdir -p ~/.openclaw/skills && cp -r "$T/skills/aipoch-ai/grant-proposal-assistant" ~/.openclaw/skills/openclaw-skills-grant-proposal-assistant && rm -rf "$T"
manifest:
skills/aipoch-ai/grant-proposal-assistant/SKILL.md
Grant Proposal Assistant
A comprehensive tool for writing competitive grant proposals targeting NIH (R01/R21), NSF, and other major funding agencies.
Capabilities
- Section Templates: Standard templates for all major grant sections
- Specific Aims Generator: Structured approach to crafting compelling Specific Aims pages
- Budget Justification Helper: Equipment, personnel, and other cost justifications
- Review & Critique: Self-assessment checklists for proposal quality
Usage
Command Line
```shell
# Generate Specific Aims template
python3 scripts/main.py --section aims --output my_aims.md

# Generate full proposal template
python3 scripts/main.py --section full --agency NIH --type R01 --output proposal.md

# Budget justification helper
python3 scripts/main.py --section budget --category personnel --output budget.md

# Review existing proposal
python3 scripts/main.py --review --input my_proposal.md
```
As Library
```python
from scripts.main import GrantProposalAssistant

assistant = GrantProposalAssistant(agency="NIH", grant_type="R01")
template = assistant.generate_section("specific_aims")
budget = assistant.generate_budget_justification(category="equipment", items=[...])
```
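For readers without the repository at hand, here is a minimal sketch of what a class exposing the documented interface might look like. Only the names `GrantProposalAssistant`, `generate_section`, and `generate_budget_justification` come from this document; the template text and internal structure below are assumptions, not the actual `scripts/main.py` implementation.

```python
# Minimal sketch of the documented interface; the real scripts/main.py
# will differ. Template contents here are illustrative placeholders.
from dataclasses import dataclass, field


@dataclass
class GrantProposalAssistant:
    agency: str = "NIH"
    grant_type: str = "R01"

    # Hypothetical built-in skeletons keyed by section name.
    _templates: dict = field(default_factory=lambda: {
        "specific_aims": "# Specific Aims\n\n<1-page summary of aims>\n",
        "budget": "# Budget Justification\n",
    })

    def generate_section(self, section: str) -> str:
        """Return a markdown skeleton for the requested section."""
        if section not in self._templates:
            raise ValueError(f"unknown section: {section}")
        header = f"<!-- {self.agency} {self.grant_type} -->\n"
        return header + self._templates[section]

    def generate_budget_justification(self, category: str, items: list) -> str:
        """Render one justification line per (name, reason) budget item."""
        lines = [f"# Budget Justification ({category})"]
        lines += [f"- {name}: {reason}" for name, reason in items]
        return "\n".join(lines)
```

A caller would then fill in the returned skeleton rather than writing each section from a blank page.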
Parameters
| Parameter | Description | Options |
|---|---|---|
| `--section` | Section to generate | `aims`, `full`, `budget` |
| `--agency` | Funding agency | `NIH`, `NSF` |
| `--type` | Grant mechanism | `R01`, `R21` |
| `--category` | Budget category | `personnel`, `equipment` |
| `--input` | Input file for review | Path to existing proposal |
| `--output` | Output file path | Path for generated content |
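The flags in the CLI examples map naturally onto `argparse`. The wiring below is a hypothetical reconstruction, not the actual parser in `scripts/main.py`; the flag names and values are taken from the usage examples in this document.

```python
# Hypothetical argparse setup matching the documented flags; the real
# scripts/main.py may define additional options or defaults.
import argparse


def build_parser() -> argparse.ArgumentParser:
    p = argparse.ArgumentParser(prog="main.py",
                                description="Grant proposal assistant")
    p.add_argument("--section", choices=["aims", "full", "budget"],
                   help="Section to generate")
    p.add_argument("--agency", choices=["NIH", "NSF"], default="NIH",
                   help="Funding agency")
    p.add_argument("--type", dest="grant_type", choices=["R01", "R21"],
                   help="Grant mechanism")
    p.add_argument("--category", choices=["personnel", "equipment"],
                   help="Budget category")
    p.add_argument("--review", action="store_true",
                   help="Review an existing proposal instead of generating")
    p.add_argument("--input", help="Input file for review")
    p.add_argument("--output", help="Output file path")
    return p
```

With this parser, `--section aims --output my_aims.md` and `--review --input my_proposal.md` both parse cleanly, matching the command-line examples above.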
Technical Difficulty
Medium - Requires understanding of grant structure, funding agency requirements, and scientific writing best practices.
References
- NIH R01 full proposal template: `references/NIH_R01_template.md`
- NSF standard grant template: `references/NSF_template.md`
- Budget templates by category: `references/budget_templates.xlsx`
- Proposal quality checklist: `references/review_checklist.md`
- Example Specific Aims pages: `references/specific_aims_examples.md`
Best Practices
- Start with Specific Aims: This 1-page summary drives the entire proposal
- Follow Page Limits: NIH R01 Research Strategy = 12 pages, Specific Aims = 1 page
- Use Significance-Innovation-Approach Structure: Standard for NIH applications
- Justify Everything: Every budget item needs a clear justification
- Review with Checklist: Use the built-in review tool before submission
Agency-Specific Notes
NIH R01/R21
- Page limits strictly enforced
- Significance, Innovation, Approach structure required
- Vertebrate animals and human subjects sections if applicable
- Resubmission strategy for A1 applications
NSF
- Project Summary (1 page) and Project Description (15 pages)
- Broader impacts criterion weighted equally with intellectual merit
- Data management plan required
- Facilities and resources section
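The page limits listed in these notes can be encoded as data and checked mechanically. The sketch below is a hypothetical helper, not part of this skill; the limits come from this document, while the words-per-page figure is a rough assumption, since true page counts depend on font, margins, and figures.

```python
# Page limits from the agency notes above; WORDS_PER_PAGE is a crude
# heuristic (~500 words of single-spaced text per page), not a real
# page-layout calculation.
PAGE_LIMITS = {
    ("NIH", "R01"): {"specific_aims": 1, "research_strategy": 12},
    ("NSF", "standard"): {"project_summary": 1, "project_description": 15},
}

WORDS_PER_PAGE = 500  # rough assumption


def check_page_limit(agency: str, mechanism: str, section: str, text: str) -> bool:
    """Return True if the section's estimated page count fits the limit."""
    limit = PAGE_LIMITS[(agency, mechanism)][section]
    est_pages = len(text.split()) / WORDS_PER_PAGE
    return est_pages <= limit
```

A draft Specific Aims of about 400 words passes the 1-page NIH limit under this heuristic, while an 800-word draft is flagged for trimming.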
Version
1.0.0 - Initial release with NIH and NSF support
Risk Assessment
| Risk Indicator | Assessment | Level |
|---|---|---|
| Code Execution | Python/R scripts executed locally | Medium |
| Network Access | No external API calls | Low |
| File System Access | Read input files, write output files | Medium |
| Instruction Tampering | Standard prompt guidelines | Low |
| Data Exposure | Output files saved to workspace | Low |
Security Checklist
- No hardcoded credentials or API keys
- No unauthorized file system access (../)
- Output does not expose sensitive information
- Prompt injection protections in place
- Input file paths validated (no ../ traversal)
- Output directory restricted to workspace
- Script execution in sandboxed environment
- Error messages sanitized (no stack traces exposed)
- Dependencies audited
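The traversal and workspace-restriction items above are commonly implemented by resolving every user-supplied path and rejecting anything that escapes the workspace root. This is a hypothetical helper illustrating one such check, not code from `scripts/main.py`.

```python
# One way to satisfy the "no ../ traversal" and "output restricted to
# workspace" checklist items: resolve the path and verify containment.
from pathlib import Path


def resolve_in_workspace(workspace: str, user_path: str) -> Path:
    """Resolve user_path under workspace; raise if it escapes the root."""
    root = Path(workspace).resolve()
    candidate = (root / user_path).resolve()
    # The candidate must be the root itself or live beneath it.
    if candidate != root and root not in candidate.parents:
        raise ValueError(f"path escapes workspace: {user_path}")
    return candidate
```

Resolving before comparing is what makes this robust: `out/../../escape.md` collapses to a path outside the root and is rejected, while `out/proposal.md` resolves safely inside it.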
Prerequisites
No additional Python packages required.
Evaluation Criteria
Success Metrics
- Successfully executes main functionality
- Output meets quality standards
- Handles edge cases gracefully
- Performance is acceptable
Test Cases
- Basic Functionality: Standard input → Expected output
- Edge Case: Invalid input → Graceful error handling
- Performance: Large dataset → Acceptable processing time
Lifecycle Status
- Current Stage: Draft
- Next Review Date: 2026-03-06
- Known Issues: None
- Planned Improvements:
- Performance optimization
- Additional feature support