# Babysitter memory-summarization

Conversation summarization for memory compression and context management.
## Install

Clone the upstream repo:

```shell
git clone https://github.com/a5c-ai/babysitter
```

Claude Code: install the skill into `~/.claude/skills/`:

```shell
T=$(mktemp -d) \
  && git clone --depth=1 https://github.com/a5c-ai/babysitter "$T" \
  && mkdir -p ~/.claude/skills \
  && cp -r "$T/library/specializations/ai-agents-conversational/skills/memory-summarization" \
       ~/.claude/skills/a5c-ai-babysitter-memory-summarization \
  && rm -rf "$T"
```

Manifest: `library/specializations/ai-agents-conversational/skills/memory-summarization/SKILL.md`
## Memory Summarization Skill
### Capabilities
- Implement conversation summarization strategies
- Configure rolling summary updates
- Design hierarchical summarization
- Implement token-aware summarization
- Create extractive and abstractive summaries
- Design summary quality evaluation
### Target Processes
- conversational-memory-system
- long-term-memory-management
### Implementation Details
#### Summarization Strategies
- Rolling Summary: incrementally fold new messages into the existing summary
- Hierarchical: summarize at multiple levels of granularity (turns, episodes, whole conversation)
- Token-Budget: compress until the summary fits a fixed token limit
- Extractive: select the most important messages verbatim
- Abstractive: generate new summary text with an LLM
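The rolling-summary strategy can be sketched as follows. The `llm` callable is a hypothetical stand-in for any "prompt in, text out" model call (e.g. invoking a langchain-core runnable); `fake_llm` is a deterministic stub so the sketch runs without a provider.

```python
# Hypothetical sketch of the rolling-summary strategy; the prompt wording
# and the `fake_llm` stub are illustrative, not part of the skill.

def rolling_summary(llm, summary: str, new_messages: list[str]) -> str:
    """Fold a batch of new messages into the existing summary."""
    prompt = (
        "Update the conversation summary so it stays concise but preserves "
        "key facts, decisions, and open questions.\n\n"
        f"Current summary:\n{summary or '(empty)'}\n\n"
        "New messages:\n" + "\n".join(f"- {m}" for m in new_messages)
    )
    return llm(prompt)

# Deterministic stub: "summarizes" by echoing the last message.
def fake_llm(prompt: str) -> str:
    return prompt.splitlines()[-1].lstrip("- ")

summary = rolling_summary(
    fake_llm, "", ["User asked about refunds.", "Agent explained the 30-day policy."]
)
```

Because the summary is rewritten rather than appended to, its size stays roughly constant no matter how long the conversation grows.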
#### Configuration Options
- LLM for summarization
- Summary token budget
- Update frequency
- Summary template
- Quality thresholds
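These options can be captured in a small config object. Every default below (model name, budget, frequency, threshold) is an illustrative assumption, not a value shipped with the skill:

```python
from dataclasses import dataclass

@dataclass
class SummarizerConfig:
    # All defaults are illustrative assumptions.
    model: str = "gpt-4o-mini"          # which LLM performs the summarization
    summary_token_budget: int = 512     # max tokens the summary may occupy
    update_every_n_messages: int = 10   # rolling-summary refresh frequency
    template: str = "Summarize the conversation, preserving decisions and key facts."
    min_quality_score: float = 0.7      # resummarize when evaluation falls below this

cfg = SummarizerConfig(summary_token_budget=256)
```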
### Best Practices
- Balance detail vs compression
- Preserve key information
- Monitor summary quality
- Test with long conversations
- Handle context window limits
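For the last point, a minimal token-aware trigger might look like this. The characters-per-token heuristic is a rough assumption; in practice you would swap in a real tokenizer (e.g. tiktoken) for accurate counts:

```python
def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token for English text.
    # Use a real tokenizer for production counts.
    return max(1, len(text) // 4)

def needs_compression(messages: list[str], context_limit: int, reserve: int = 1024) -> bool:
    """True when the transcript no longer fits alongside a response-token reserve."""
    used = sum(estimate_tokens(m) for m in messages)
    return used > context_limit - reserve
```

Checking against `context_limit - reserve` rather than the raw limit keeps headroom for the model's next response, so summarization fires before the window is actually exhausted.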
### Dependencies
- `langchain-core`
- An LLM provider client