Skills sample-size-basic

Basic sample size calculator for clinical research planning with common statistical tests (t-test, chi-square, proportion).

Install

Source · Clone the upstream repo:

git clone https://github.com/openclaw/skills

Claude Code · Install into ~/.claude/skills/:

T=$(mktemp -d) && git clone --depth=1 https://github.com/openclaw/skills "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/aipoch-ai/sample-size-basic" ~/.claude/skills/clawdbot-skills-sample-size-basic && rm -rf "$T"

Manifest: skills/aipoch-ai/sample-size-basic/SKILL.md
source content

Sample Size (Basic)

Basic sample size estimation for clinical research planning.

Use Cases

  • Quick sample size estimates for grant proposals
  • Preliminary study design calculations
  • Educational purposes for statistics training

Parameters

  • test_type
    : Type of test (t_test, chi_square, proportion)
  • alpha
    : Significance level (default 0.05)
  • power
    : Statistical power (default 0.80)
  • effect_size
    : Expected effect size
  • baseline_rate
    : Baseline proportion (for proportion tests)

Returns

  • Required sample size per group
  • Total sample size
  • Statistical assumptions summary

Example

Input: two-sample t-test, alpha=0.05, power=0.80, effect_size=0.5
Output: n=64 per group, total=128 subjects
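The numbers above can be reproduced with the textbook normal-approximation formula plus the usual z²/4 small-sample correction; this is a minimal standard-library sketch of that formula, not the skill's own code:

```python
import math
from statistics import NormalDist

def t_test_n_per_group(effect_size: float, alpha: float = 0.05,
                       power: float = 0.80) -> int:
    """Per-group n for a two-sample t-test (Cohen's d effect size),
    normal approximation with the z^2/4 correction term."""
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)   # 1.96 for alpha=0.05, two-sided
    z_beta = z(power)            # 0.84 for power=0.80
    n = 2 * (z_alpha + z_beta) ** 2 / effect_size ** 2 + z_alpha ** 2 / 4
    return math.ceil(n)

n = t_test_n_per_group(0.5)
print(n, 2 * n)  # 64 128
```

This matches the example output (n=64 per group, 128 total); exact noncentral-t solvers such as statsmodels' TTestIndPower give the same rounded answer for these inputs.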

Risk Assessment

| Risk Indicator | Assessment | Level |
| --- | --- | --- |
| Code Execution | Python/R scripts executed locally | Medium |
| Network Access | No external API calls | Low |
| File System Access | Read input files, write output files | Medium |
| Instruction Tampering | Standard prompt guidelines | Low |
| Data Exposure | Output files saved to workspace | Low |

Security Checklist

  • No hardcoded credentials or API keys
  • Input file paths validated (no ../ traversal)
  • Output directory restricted to workspace
  • Output does not expose sensitive information
  • Prompt injection protections in place
  • Script execution in sandboxed environment
  • Error messages sanitized (no stack traces exposed)
  • Dependencies audited
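The path-traversal items in the checklist can be enforced with a small guard that resolves every user-supplied path against the workspace root; a sketch, with a hypothetical function name and no claim about how the skill actually implements this:

```python
from pathlib import Path

def resolve_in_workspace(workspace: str, user_path: str) -> Path:
    """Resolve user_path inside workspace, rejecting ../ traversal
    and symlink escapes; raises ValueError with a sanitized message."""
    root = Path(workspace).resolve()
    candidate = (root / user_path).resolve()
    if not candidate.is_relative_to(root):  # Path.is_relative_to: Python 3.9+
        raise ValueError("path escapes the workspace")
    return candidate
```

Resolving both paths before comparing them catches `../` segments and symlinks in one check, which is more robust than string-matching for `..` in the raw input.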

Prerequisites

# Python dependencies
pip install -r requirements.txt

Evaluation Criteria

Success Metrics

  • Computes sample sizes for all supported test types (t_test, chi_square, proportion)
  • Results agree with standard reference tables and tools
  • Invalid inputs are rejected with clear, sanitized error messages
  • Runs in acceptable time for typical study-design inputs

Test Cases

  1. Basic Functionality: Standard input → Expected output
  2. Edge Case: Invalid input → Graceful error handling
  3. Performance: Large dataset → Acceptable processing time

Lifecycle Status

  • Current Stage: Draft
  • Next Review Date: 2026-03-06
  • Known Issues: None
  • Planned Improvements:
    • Performance optimization
    • Additional feature support