promptcache
Estimate the cost savings from caching frequently-used prompts across AI models.
Install
source · Clone the upstream repo
git clone https://github.com/openclaw/skills
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/openclaw/skills "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/avale-slai/promptcache" ~/.claude/skills/openclaw-skills-promptcache && rm -rf "$T"
OpenClaw · Install into ~/.openclaw/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/openclaw/skills "$T" && mkdir -p ~/.openclaw/skills && cp -r "$T/skills/avale-slai/promptcache" ~/.openclaw/skills/openclaw-skills-promptcache && rm -rf "$T"
Manifest: skills/avale-slai/promptcache/SKILL.md
promptcache — LoomLens Advisor
What It Does
Estimates the cost savings from caching frequently-used prompts. Compares the cost of re-sending the full context on every call with the cost of reading it from a prompt cache, across all major models.
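The comparison boils down to simple per-token arithmetic. A minimal sketch of that math is below; the function name and the per-token prices are illustrative assumptions, not Signalloom's actual pricing or implementation, and real providers also charge a cache-write premium that this sketch ignores.

```python
# Sketch of the cached-vs-uncached comparison, with assumed prices.

def cache_savings(prompt_tokens: int, calls_per_day: int,
                  input_price: float = 3.00,    # assumed $ per 1M input tokens
                  cached_price: float = 0.30):  # assumed $ per 1M cached tokens
    """Daily cost of re-sending a prompt vs. reading it from cache.

    Ignores the one-time cache-write premium most providers charge.
    Returns (full_cost, cached_cost, savings) in dollars per day.
    """
    full = prompt_tokens * calls_per_day * input_price / 1_000_000
    cached = prompt_tokens * calls_per_day * cached_price / 1_000_000
    return full, cached, full - cached

# e.g. a 2,000-token system prompt sent 50 times a day
full, cached, saved = cache_savings(2_000, 50)
```

With these assumed prices, that example works out to $0.30/day uncached vs. $0.03/day cached, a $0.27/day saving; the skill runs the same comparison against each model's real price sheet.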
When to Use
- When you run the same system prompt repeatedly
- Before enabling prompt caching on a production pipeline
- When evaluating cost savings from prompt template reuse
Syntax
/promptcache "You are an expert radiologist..." --calls-per-day 50
/promptcache "Summarize this in 3 bullets" --model openai/gpt-4o-mini
Free Tier
3 analyses/day free with any Signalloom API key.
Get your free key: https://signalloomai.com/signup