interview-mock-partner
Simulates behavioral interviews for medical professionals, generating questions, sample answers, and feedback.
Install
Source · Clone the upstream repo
git clone https://github.com/openclaw/skills
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/openclaw/skills "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/aipoch-ai/interview-mock-partner" ~/.claude/skills/openclaw-skills-interview-mock-partner && rm -rf "$T"
OpenClaw · Install into ~/.openclaw/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/openclaw/skills "$T" && mkdir -p ~/.openclaw/skills && cp -r "$T/skills/aipoch-ai/interview-mock-partner" ~/.openclaw/skills/openclaw-skills-interview-mock-partner && rm -rf "$T"
Manifest
skills/aipoch-ai/interview-mock-partner/SKILL.md
Interview Mock Partner
Simulates medical job interview scenarios.
Features
- Behavioral questions
- Response feedback
- Common scenarios
- Improvement tips
Parameters
| Parameter | Type | Default | Required | Description |
|---|---|---|---|---|
| | string | - | Yes | Target position title |
| | string | entry | No | Experience level (entry, mid, senior) |
| | string | - | No | Medical specialty area |
| | int | 5 | No | Number of questions to generate |
| | string | stdout | No | Output file path |
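For illustration, a request might carry the fields above in a structure like the following. The key names (`position`, `level`, `specialty`, `num_questions`, `output_path`) are hypothetical placeholders, since the manifest does not name the actual parameter identifiers.

```python
# Hypothetical parameter payload for a mock-interview request.
# The key names are placeholders; the manifest does not list the
# actual parameter identifiers.
request = {
    "position": "Registered Nurse, ICU",  # target position title (required)
    "level": "mid",                       # experience level: entry, mid, senior
    "specialty": "critical care",         # medical specialty area (optional)
    "num_questions": 5,                   # number of questions to generate
    "output_path": None,                  # None -> results go to stdout
}
```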
Output Format
{ "questions": ["string"], "sample_answers": ["string"], "tips": ["string"] }
Risk Assessment
| Risk Indicator | Assessment | Level |
|---|---|---|
| Code Execution | Python/R scripts executed locally | Medium |
| Network Access | No external API calls | Low |
| File System Access | Read input files, write output files | Medium |
| Instruction Tampering | Standard prompt guidelines | Low |
| Data Exposure | Output files saved to workspace | Low |
Security Checklist
- No hardcoded credentials or API keys
- No unauthorized file system access (../)
- Output does not expose sensitive information
- Prompt injection protections in place
- Input file paths validated (no ../ traversal); see the sketch after this checklist
- Output directory restricted to workspace
- Script execution in sandboxed environment
- Error messages sanitized (no stack traces exposed)
- Dependencies audited
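The two path-related items could be enforced with a check along these lines; this is an illustrative sketch assuming the workspace root is known at runtime, not the skill's actual implementation:

```python
from pathlib import Path

def resolve_in_workspace(workspace: Path, user_path: str) -> Path:
    """Resolve a user-supplied path and reject anything that escapes the workspace.

    Illustrative sketch only; the skill's real validation code is not published.
    """
    root = workspace.resolve()
    candidate = (root / user_path).resolve()
    # Catches both ../ traversal and absolute paths outside the workspace root.
    if candidate != root and root not in candidate.parents:
        raise ValueError(f"path escapes workspace: {user_path}")
    return candidate
```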
Prerequisites
No additional Python packages required.
Evaluation Criteria
Success Metrics
- Successfully generates questions, sample answers, and tips for the requested role
- Output matches the documented JSON format
- Handles invalid or missing input gracefully
- Processing time is acceptable
Test Cases
- Basic Functionality: Standard input → Expected output
- Edge Case: Invalid input → Graceful error handling (see the test sketch after this list)
- Performance: Large dataset → Acceptable processing time
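The first two cases could be written as tests along these lines; both the module name and the `generate_interview` entry point are hypothetical, since the manifest does not document a programmatic interface:

```python
import pytest

# Hypothetical entry point; the manifest does not document the real function name.
from interview_mock_partner import generate_interview

def test_basic_functionality():
    # Standard input: a valid request should yield the three documented arrays.
    result = generate_interview({"position": "Registered Nurse, ICU", "num_questions": 3})
    assert set(result) >= {"questions", "sample_answers", "tips"}
    assert len(result["questions"]) == 3

def test_invalid_input_is_handled_gracefully():
    # Edge case: a missing required field should raise a clear error,
    # not surface an unsanitized stack trace.
    with pytest.raises(ValueError):
        generate_interview({"num_questions": 3})
```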
Lifecycle Status
- Current Stage: Draft
- Next Review Date: 2026-03-06
- Known Issues: None
- Planned Improvements:
  - Performance optimization
  - Additional feature support