Skills interview-mock-partner

Simulates behavioral interview questions for medical professionals.

Install
Source · Clone the upstream repo
git clone https://github.com/openclaw/skills
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/openclaw/skills "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/aipoch-ai/interview-mock-partner" ~/.claude/skills/openclaw-skills-interview-mock-partner && rm -rf "$T"
OpenClaw · Install into ~/.openclaw/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/openclaw/skills "$T" && mkdir -p ~/.openclaw/skills && cp -r "$T/skills/aipoch-ai/interview-mock-partner" ~/.openclaw/skills/openclaw-skills-interview-mock-partner && rm -rf "$T"
manifest: skills/aipoch-ai/interview-mock-partner/SKILL.md
source content

Interview Mock Partner

Simulates medical job interview scenarios.

Features

  • Behavioral questions
  • Response feedback
  • Common scenarios
  • Improvement tips

Parameters

| Parameter | Type | Default | Required | Description |
| --- | --- | --- | --- | --- |
| --position | string | - | Yes | Target position title |
| --experience-level | string | entry | No | Experience level (entry, mid, senior) |
| --specialty | string | - | No | Medical specialty area |
| --questions | int | 5 | No | Number of questions to generate |
| --output, -o | string | stdout | No | Output file path |
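The parameter table implies a small command-line interface. A minimal sketch of how such a parser might look in Python, assuming the skill is driven by a script entry point (the parser itself is illustrative, not the skill's actual implementation):

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    """Parser mirroring the documented parameters (sketch; details assumed)."""
    parser = argparse.ArgumentParser(prog="interview-mock-partner")
    parser.add_argument("--position", required=True,
                        help="Target position title")
    parser.add_argument("--experience-level", default="entry",
                        choices=["entry", "mid", "senior"],
                        help="Experience level")
    parser.add_argument("--specialty", default=None,
                        help="Medical specialty area")
    parser.add_argument("--questions", type=int, default=5,
                        help="Number of questions to generate")
    parser.add_argument("--output", "-o", default=None,
                        help="Output file path (default: stdout)")
    return parser

# Example invocation with only the required flag; defaults fill the rest.
args = build_parser().parse_args(["--position", "ICU Nurse"])
```

Note that argparse normalizes `--experience-level` to the attribute `experience_level`, so defaults can be read directly from the parsed namespace.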

Output Format

{
  "questions": ["string"],
  "sample_answers": ["string"],
  "tips": ["string"]
}
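The output is a JSON object with three parallel string arrays. A minimal validation sketch for consumers of this output (the checker is illustrative; only the field names come from the schema above):

```python
import json

def validate_output(raw: str) -> dict:
    """Parse skill output and check the documented shape."""
    data = json.loads(raw)
    for key in ("questions", "sample_answers", "tips"):
        if key not in data:
            raise ValueError(f"missing field: {key}")
        if not isinstance(data[key], list) or not all(
            isinstance(item, str) for item in data[key]
        ):
            raise ValueError(f"{key} must be a list of strings")
    return data

example = (
    '{"questions": ["Tell me about a conflict with a colleague."],'
    ' "sample_answers": ["..."], "tips": ["Use the STAR method."]}'
)
result = validate_output(example)
```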

Risk Assessment

| Risk Indicator | Assessment | Level |
| --- | --- | --- |
| Code Execution | Python/R scripts executed locally | Medium |
| Network Access | No external API calls | Low |
| File System Access | Read input files, write output files | Medium |
| Instruction Tampering | Standard prompt guidelines | Low |
| Data Exposure | Output files saved to workspace | Low |

Security Checklist

  • No hardcoded credentials or API keys
  • No unauthorized file system access (../)
  • Output does not expose sensitive information
  • Prompt injection protections in place
  • Input file paths validated (no ../ traversal)
  • Output directory restricted to workspace
  • Script execution in sandboxed environment
  • Error messages sanitized (no stack traces exposed)
  • Dependencies audited
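The two path-related checklist items (no ../ traversal, output restricted to the workspace) can both be enforced by resolving every user-supplied path against a workspace root before use. A sketch, assuming a single workspace directory and Python 3.9+ for Path.is_relative_to:

```python
from pathlib import Path

def safe_path(workspace: Path, user_path: str) -> Path:
    """Resolve user_path and refuse anything outside the workspace root."""
    candidate = (workspace / user_path).resolve()
    if not candidate.is_relative_to(workspace.resolve()):
        raise ValueError(f"path escapes workspace: {user_path}")
    return candidate

ws = Path("/tmp/workspace")
safe_path(ws, "results/output.json")  # resolves inside the workspace
# safe_path(ws, "../etc/passwd")      # raises ValueError
```

Resolving both the candidate and the workspace root before comparing handles symlinked parents and absolute user paths in one place, rather than pattern-matching on "../" substrings.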

Prerequisites

No additional Python packages required.

Evaluation Criteria

Success Metrics

  • Executes the core question-generation flow without errors
  • Output conforms to the documented JSON format
  • Handles invalid or missing parameters gracefully
  • Processing time is acceptable for the requested number of questions

Test Cases

  1. Basic Functionality: Standard input → Expected output
  2. Edge Case: Invalid input → Graceful error handling
  3. Performance: Large dataset → Acceptable processing time

Lifecycle Status

  • Current Stage: Draft
  • Next Review Date: 2026-03-06
  • Known Issues: None
  • Planned Improvements:
    • Performance optimization
    • Additional feature support