equipment-maintenance-log
Track lab equipment calibration dates and send maintenance reminders.
Install
source · Clone the upstream repo
git clone https://github.com/openclaw/skills
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/openclaw/skills "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/aipoch-ai/equipment-maintenance-log-2" ~/.claude/skills/clawdbot-skills-equipment-maintenance-log-8bafa0 && rm -rf "$T"
manifest:
skills/aipoch-ai/equipment-maintenance-log-2/SKILL.md · source content
Equipment Maintenance Log
Track calibration dates for pipettes, balances, centrifuges and send maintenance reminders.
Usage
python scripts/main.py --add "Pipette P100" --calibration-date 2024-01-15 --interval 12
python scripts/main.py --check
Parameters
| Parameter | Type | Default | Required | Description |
|---|---|---|---|---|
| --add | string | - | * | Equipment name to add |
| --calibration-date | string | - | * | Last calibration date (YYYY-MM-DD) |
| --interval | int | - | * | Calibration interval in months |
| --check | flag | - | ** | Check for upcoming maintenance |
| --list | flag | - | ** | List all equipment |
* Required when adding equipment
** Alternative to --add (mutually exclusive)
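The add/check/list modes described above could be wired up with argparse using a mutually exclusive group; a minimal sketch, assuming the flag names from the usage line (the `--list` name and the exact group layout are assumptions, not confirmed by the script):

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(description="Equipment maintenance log")
    # The three modes are mutually exclusive: add one item, check schedules, or list all.
    mode = parser.add_mutually_exclusive_group(required=True)
    mode.add_argument("--add", metavar="NAME", help="Equipment name to add")
    mode.add_argument("--check", action="store_true", help="Check for upcoming maintenance")
    mode.add_argument("--list", action="store_true", help="List all equipment")
    # These only apply together with --add.
    parser.add_argument("--calibration-date", help="Last calibration date (YYYY-MM-DD)")
    parser.add_argument("--interval", type=int, help="Calibration interval in months")
    return parser

args = build_parser().parse_args(
    ["--add", "Pipette P100", "--calibration-date", "2024-01-15", "--interval", "12"]
)
```

Passing `--add` together with `--check` would then exit with a usage error, matching the "mutually exclusive" note above.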
Output
- Maintenance schedule
- Overdue alerts
- Upcoming reminders (30/60/90 days)
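The overdue/upcoming classification can be sketched as a pure function: next due date = last calibration date plus the interval in months, then bucket by days remaining. The 30/60/90-day windows come from the output description above; the function names and month-clamping behavior here are assumptions about the implementation:

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Advance d by a number of calendar months, clamping the day if needed."""
    month_index = d.month - 1 + months
    year = d.year + month_index // 12
    month = month_index % 12 + 1
    leap = year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
    days_in_month = [31, 29 if leap else 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
    # Clamp e.g. Jan 31 + 1 month -> Feb 28/29.
    return date(year, month, min(d.day, days_in_month[month - 1]))

def maintenance_status(last_calibration: date, interval_months: int, today: date) -> str:
    """Classify an item as overdue, due within 30/60/90 days, or ok."""
    due = add_months(last_calibration, interval_months)
    days_left = (due - today).days
    if days_left < 0:
        return "overdue"
    for window in (30, 60, 90):
        if days_left <= window:
            return f"due within {window} days"
    return "ok"
```

For the usage example above ("Pipette P100", calibrated 2024-01-15, 12-month interval), the next due date would be 2025-01-15, so a check run in late December 2024 would report it as due within 30 days.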
Risk Assessment
| Risk Indicator | Assessment | Level |
|---|---|---|
| Code Execution | Python/R scripts executed locally | Medium |
| Network Access | No external API calls | Low |
| File System Access | Read input files, write output files | Medium |
| Instruction Tampering | Standard prompt guidelines | Low |
| Data Exposure | Output files saved to workspace | Low |
Security Checklist
- No hardcoded credentials or API keys
- No unauthorized file system access (../)
- Output does not expose sensitive information
- Prompt injection protections in place
- Input file paths validated (no ../ traversal)
- Output directory restricted to workspace
- Script execution in sandboxed environment
- Error messages sanitized (no stack traces exposed)
- Dependencies audited
Prerequisites
No additional Python packages required.
Evaluation Criteria
Success Metrics
- Successfully executes main functionality
- Output meets quality standards
- Handles edge cases gracefully
- Performance is acceptable
Test Cases
- Basic Functionality: --add followed by --check → correct maintenance schedule
- Edge Case: malformed date or missing --interval → graceful error handling
- Performance: large equipment inventory → acceptable processing time
Lifecycle Status
- Current Stage: Draft
- Next Review Date: 2026-03-06
- Known Issues: None
- Planned Improvements:
  - Performance optimization
  - Additional feature support