Everything-react-native-expo continuous-learning-v2
Auto-generate skills and rules from observed React Native development patterns
install
source · Clone the upstream repo
git clone https://github.com/JubaKitiashvili/everything-react-native-expo
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/JubaKitiashvili/everything-react-native-expo "$T" && mkdir -p ~/.claude/skills && cp -r "$T/.claude/skills/continuous-learning-v2" ~/.claude/skills/jubakitiashvili-everything-react-native-expo-continuous-learning-v2 && rm -rf "$T"
manifest:
.claude/skills/continuous-learning-v2/SKILL.md
Continuous Learning v2
This skill manages the continuous learning pipeline — observing patterns during development sessions and converting them into persistent rules and skills.
Architecture
PostToolUse hook (real-time) → continuous-learning-observer.cjs (lightweight pattern capture) → patterns stored in .claude/memory/observations/
/learn command (manual, comprehensive) → extract-session-patterns.js (full session analysis) → analyze-patterns.js (pattern clustering + dedup) → skill-generator prompt (create new content) → validate-content.js (verify new content is valid)
/retrospective command (session end) → evaluate-session.js (quality metrics + suggestions)
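For reference, a Claude Code PostToolUse hook of this kind is registered under the `hooks` key in settings. A minimal sketch, assuming the observer script lives at `.claude/hooks/continuous-learning-observer.cjs` (the actual path and matcher in this repo may differ):

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write|MultiEdit",
        "hooks": [
          {
            "type": "command",
            "command": "node .claude/hooks/continuous-learning-observer.cjs"
          }
        ]
      }
    ]
  }
}
```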
How It Works
Real-Time (Automatic)
The continuous-learning-observer.cjs hook runs on PostToolUse events. It:
- Captures the tool name, file paths, and outcome
- Detects repeated patterns (same fix applied > 3 times)
- Stores observations in .claude/memory/observations/ as JSON
- Stays lightweight, adding < 50ms to each tool call
Manual Analysis (/learn)
When the user runs /learn, the pipeline:
- Reads all observations from the current session
- Clusters them by type (style fix, import pattern, architecture choice)
- Compares against existing rules and skills
- Generates candidates for new content
- Presents candidates for user approval
- Writes approved content to .claude/rules/ or .claude/skills/
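The clustering and dedup step (analyze-patterns.js in the pipeline above) could look roughly like this. The observation shape (`type`, `description`) and the function name are assumptions for illustration:

```javascript
// Group observations by pattern type, dropping exact duplicates
// so each cluster holds only distinct candidate patterns.
function clusterObservations(observations) {
  const clusters = new Map();
  for (const obs of observations) {
    const existing = clusters.get(obs.type) || [];
    // Dedup: skip observations whose description already appears in the cluster.
    if (!existing.some((o) => o.description === obs.description)) {
      existing.push(obs);
    }
    clusters.set(obs.type, existing);
  }
  return clusters;
}
```

Each resulting cluster would then be compared against existing rules and skills before being surfaced as a candidate for user approval.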
Session Evaluation (/retrospective)
At session end, evaluate-session.js:
- Aggregates all metrics (files changed, tests added, build status)
- Evaluates which rules triggered and their usefulness
- Suggests rule calibration (tighten/loosen globs, adjust content)
- Generates a session quality report
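The aggregation step might be sketched as below. The event shape (`file`, `kind`, `ok`) is a hypothetical input format, not the script's real one; only the metric names (files changed, tests added, build status) come from the list above:

```javascript
// Fold a session's events into the headline metrics for the quality report.
function aggregateMetrics(events) {
  const filesChanged = new Set();
  let testsAdded = 0;
  let buildOk = true;
  for (const e of events) {
    if (e.file) filesChanged.add(e.file); // count each file once
    if (e.kind === "test-added") testsAdded += 1;
    if (e.kind === "build" && e.ok === false) buildOk = false;
  }
  return { filesChanged: filesChanged.size, testsAdded, buildOk };
}
```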
Configuration
See config.json for tuning parameters:
- observationThreshold: how many times a pattern must repeat before flagging (default: 3)
- maxObservationsPerSession: caps stored observations to prevent memory bloat (default: 100)
- autoApprove: if true, auto-approve low-risk content (default: false)
- contentTypes: what types to generate (["rule", "skill"])
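Putting the defaults above together, a config.json might look like this (illustrative; only the four documented keys are shown, and the file may contain others):

```json
{
  "observationThreshold": 3,
  "maxObservationsPerSession": 100,
  "autoApprove": false,
  "contentTypes": ["rule", "skill"]
}
```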