equipment-maintenance-log

Track lab equipment calibration dates and send maintenance reminders

Install

Source · Clone the upstream repo:

git clone https://github.com/openclaw/skills

Claude Code · Install into ~/.claude/skills/:

T=$(mktemp -d) && git clone --depth=1 https://github.com/openclaw/skills "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/aipoch-ai/equipment-maintenance-log-2" ~/.claude/skills/clawdbot-skills-equipment-maintenance-log-8bafa0 && rm -rf "$T"

Manifest: skills/aipoch-ai/equipment-maintenance-log-2/SKILL.md
Source content

Equipment Maintenance Log

Track calibration dates for pipettes, balances, and centrifuges, and send maintenance reminders.

Usage

python scripts/main.py --add "Pipette P100" --calibration-date 2024-01-15 --interval 12
python scripts/main.py --check

Parameters

| Parameter | Type | Default | Required | Description |
|---|---|---|---|---|
| `--add` | string | - | * | Equipment name to add |
| `--calibration-date` | string | - | * | Last calibration date (YYYY-MM-DD) |
| `--interval` | int | - | * | Calibration interval in months |
| `--check` | flag | - | ** | Check for upcoming maintenance |
| `--list` | flag | - | ** | List all equipment |

* Required when adding equipment
** Alternative to --add (mutually exclusive)
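The flag layout above maps naturally onto a mutually exclusive argument group. The real `scripts/main.py` is not shown here, so the following is only a sketch of how the documented interface could be wired up with `argparse`; function and variable names are illustrative, not taken from the actual script.

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    """Sketch of the documented CLI: --add / --check / --list are
    mutually exclusive modes; the remaining flags qualify --add."""
    parser = argparse.ArgumentParser(description="Equipment maintenance log")
    mode = parser.add_mutually_exclusive_group(required=True)
    mode.add_argument("--add", metavar="NAME", help="Equipment name to add")
    mode.add_argument("--check", action="store_true",
                      help="Check for upcoming maintenance")
    mode.add_argument("--list", action="store_true",
                      help="List all equipment")
    parser.add_argument("--calibration-date",
                        help="Last calibration date (YYYY-MM-DD)")
    parser.add_argument("--interval", type=int,
                        help="Calibration interval in months")
    return parser

# Mirrors the first usage example above.
args = build_parser().parse_args(
    ["--add", "Pipette P100", "--calibration-date", "2024-01-15",
     "--interval", "12"]
)
```

With this layout, passing both `--add` and `--check` exits with a usage error, matching the "mutually exclusive" footnote.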

Output

  • Maintenance schedule
  • Overdue alerts
  • Upcoming reminders (30/60/90 days)
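The due-date and reminder logic behind these outputs reduces to month arithmetic plus bucketing. This is a standard-library sketch of one way to compute it, not the actual implementation in `scripts/main.py`; the function names and the exact bucket labels are assumptions.

```python
from datetime import date

# Days per month, index 0 = January; February handled separately for leap years.
_DAYS = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]

def next_due(last_calibrated: date, interval_months: int) -> date:
    """Add a whole number of months, clamping the day for short months
    (e.g. Jan 31 + 1 month -> Feb 28/29)."""
    total = last_calibrated.month - 1 + interval_months
    year = last_calibrated.year + total // 12
    month = total % 12 + 1
    leap = year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
    days_in_month = 29 if (month == 2 and leap) else _DAYS[month - 1]
    return date(year, month, min(last_calibrated.day, days_in_month))

def reminder_bucket(due: date, today: date) -> str:
    """Classify an item as overdue or into the 30/60/90-day windows."""
    days = (due - today).days
    if days < 0:
        return "overdue"
    for window in (30, 60, 90):
        if days <= window:
            return f"due within {window} days"
    return "ok"
```

For example, a pipette calibrated 2024-01-15 with a 12-month interval comes due 2025-01-15, and an item due 19 days from today lands in the 30-day reminder bucket.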

Risk Assessment

| Risk Indicator | Assessment | Level |
|---|---|---|
| Code Execution | Python/R scripts executed locally | Medium |
| Network Access | No external API calls | Low |
| File System Access | Read input files, write output files | Medium |
| Instruction Tampering | Standard prompt guidelines | Low |
| Data Exposure | Output files saved to workspace | Low |

Security Checklist

  • No hardcoded credentials or API keys
  • No unauthorized file system access (../)
  • Output does not expose sensitive information
  • Prompt injection protections in place
  • Input file paths validated (no ../ traversal)
  • Output directory restricted to workspace
  • Script execution in sandboxed environment
  • Error messages sanitized (no stack traces exposed)
  • Dependencies audited
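The two path-related items on this checklist (no `../` traversal, output restricted to the workspace) can both be enforced with one resolve-then-check guard. A minimal sketch follows; the workspace root and function name are hypothetical, and the real skill's validation may differ.

```python
from pathlib import Path

# Hypothetical workspace root; the real skill would derive this at runtime.
WORKSPACE = Path("/workspace").resolve()

def safe_path(user_path: str) -> Path:
    """Resolve a user-supplied path and refuse anything that escapes
    the workspace, including '../' traversal after resolution."""
    candidate = (WORKSPACE / user_path).resolve()
    if not candidate.is_relative_to(WORKSPACE):
        raise ValueError(f"path escapes workspace: {user_path}")
    return candidate
```

Checking after `resolve()` (rather than string-matching for `../`) also catches traversal hidden behind intermediate segments such as `logs/../../etc`.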

Prerequisites

No additional Python packages required.

Evaluation Criteria

Success Metrics

  • --add records equipment with its calibration date and interval
  • --check accurately reports overdue items and 30/60/90-day reminders
  • Invalid dates or missing arguments fail with clear error messages
  • Processing time remains acceptable for large equipment lists

Test Cases

  1. Basic Functionality: --add an item with a valid date and interval → item appears in --list and --check output
  2. Edge Case: malformed --calibration-date → graceful error message, no crash
  3. Performance: large equipment inventory → --check completes in acceptable time

Lifecycle Status

  • Current Stage: Draft
  • Next Review Date: 2026-03-06
  • Known Issues: None
  • Planned Improvements:
    • Performance optimization
    • Additional feature support