Skills · q-and-a-prep-partner
Predict challenging questions for presentations and prepare responses
Install
Source · Clone the upstream repo
git clone https://github.com/openclaw/skills
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/openclaw/skills "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/aipoch-ai/qa-prep-partner" ~/.claude/skills/openclaw-skills-q-and-a-prep-partner && rm -rf "$T"
OpenClaw · Install into ~/.openclaw/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/openclaw/skills "$T" && mkdir -p ~/.openclaw/skills && cp -r "$T/skills/aipoch-ai/qa-prep-partner" ~/.openclaw/skills/openclaw-skills-q-and-a-prep-partner && rm -rf "$T"
manifest:
skills/aipoch-ai/qa-prep-partner/SKILL.md · source content
Q&A Prep Partner
Predict challenging questions for presentations and prepare structured responses.
Usage
python scripts/main.py --abstract abstract.txt --field oncology
python scripts/main.py --topic "CRISPR therapy" --audience experts
Parameters
- `--abstract`: Abstract text or file
- `--topic`: Research topic
- `--field`: Research field
- `--audience`: Audience type (general/experts/peers)
- `--n-questions`: Number of questions to generate (default: 10)
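The parameters above can be sketched as an argparse CLI. The flag names, audience choices, and the `--n-questions` default come from the list; treating `--abstract` and `--topic` as mutually exclusive and defaulting `--audience` to `general` are assumptions for illustration, not confirmed behavior of `scripts/main.py`.

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    """Illustrative parser mirroring the documented flags."""
    parser = argparse.ArgumentParser(description="Q&A Prep Partner")
    # Assumption: exactly one of --abstract / --topic is supplied.
    source = parser.add_mutually_exclusive_group(required=True)
    source.add_argument("--abstract", help="Abstract text or path to a file")
    source.add_argument("--topic", help="Research topic")
    parser.add_argument("--field", help="Research field, e.g. oncology")
    parser.add_argument("--audience", choices=["general", "experts", "peers"],
                        default="general",  # assumed default
                        help="Audience type")
    parser.add_argument("--n-questions", type=int, default=10,
                        help="Number of questions to generate")
    return parser

# Mirrors the second usage example above.
args = build_parser().parse_args(["--topic", "CRISPR therapy",
                                  "--audience", "experts"])
```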
Question Types
- Methodology questions
- Statistical questions
- Interpretation questions
- Limitation questions
- Future work questions
- Comparison questions
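One minimal way to turn the six categories above into predicted questions is template cycling. The category names come from the list; the templates and the `predict_questions` helper are hypothetical illustrations, not the skill's actual implementation.

```python
# Hypothetical per-category templates keyed by the six question types.
TEMPLATES = {
    "methodology": "Why did you choose this design over alternatives for {topic}?",
    "statistical": "How sensitive are your results on {topic} to the statistical assumptions?",
    "interpretation": "Could the findings on {topic} be explained by confounding factors?",
    "limitation": "What is the main limitation of your work on {topic}?",
    "future_work": "What is the next experiment you would run on {topic}?",
    "comparison": "How do your results on {topic} compare with prior studies?",
}

def predict_questions(topic: str, n_questions: int = 10) -> list[str]:
    """Cycle through the categories until n_questions are produced."""
    categories = list(TEMPLATES)
    return [TEMPLATES[categories[i % len(categories)]].format(topic=topic)
            for i in range(n_questions)]

questions = predict_questions("CRISPR therapy", n_questions=6)
```

With `n_questions=6` this yields one question per category; larger values repeat categories in order.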
Output
- Predicted questions
- Suggested response frameworks
- Key points to address
Risk Assessment
| Risk Indicator | Assessment | Level |
|---|---|---|
| Code Execution | Python/R scripts executed locally | Medium |
| Network Access | No external API calls | Low |
| File System Access | Read input files, write output files | Medium |
| Instruction Tampering | Standard prompt guidelines | Low |
| Data Exposure | Output files saved to workspace | Low |
Security Checklist
- No hardcoded credentials or API keys
- No unauthorized file system access (../)
- Output does not expose sensitive information
- Prompt injection protections in place
- Input file paths validated (no ../ traversal)
- Output directory restricted to workspace
- Script execution in sandboxed environment
- Error messages sanitized (no stack traces exposed)
- Dependencies audited
Prerequisites
No additional Python packages required.
Evaluation Criteria
Success Metrics
- Successfully executes main functionality
- Output meets quality standards
- Handles edge cases gracefully
- Performance is acceptable
Test Cases
- Basic Functionality: Standard input → Expected output
- Edge Case: Invalid input → Graceful error handling
- Performance: Large dataset → Acceptable processing time
Lifecycle Status
- Current Stage: Draft
- Next Review Date: 2026-03-06
- Known Issues: None
- Planned Improvements:
  - Performance optimization
  - Additional feature support