EasyPlatform learn
[Utilities] Teach Claude lessons that persist across sessions. Triggers on 'remember this', 'always do', 'never do', 'learn this', 'from now on'. Smart routing to all 12 project-reference docs with /prompt-enhance finalization.
git clone https://github.com/duc01226/EasyPlatform
T=$(mktemp -d) && git clone --depth=1 https://github.com/duc01226/EasyPlatform "$T" && mkdir -p ~/.claude/skills && cp -r "$T/.claude/skills/learn" ~/.claude/skills/duc01226-easyplatform-learn && rm -rf "$T"
.claude/skills/learn/SKILL.md

<!-- SYNC:critical-thinking-mindset -->
[IMPORTANT] Use `TaskCreate` to break ALL work into small tasks BEFORE starting — including tasks for each file read. This prevents context loss from long files. For simple tasks, AI MUST ATTENTION ask the user whether to skip. Final task is ALWAYS: "Run `/prompt-enhance <modified-file>` to optimize lesson content for AI attention anchoring." Do NOT mark the skill complete until this runs.

Critical Thinking Mindset — Apply critical thinking, sequential thinking. Every claim needs traced proof, confidence >80% to act. Anti-hallucination: never present a guess as fact — cite sources for every claim, admit uncertainty freely, self-check output for errors, cross-reference independently, stay skeptical of your own confidence — certainty without evidence is the root of all hallucination.
<!-- /SYNC:critical-thinking-mindset -->

<!-- SYNC:ai-mistake-prevention -->
AI Mistake Prevention — Failure modes to avoid on every task:

- Check downstream references before deleting. Deleting components causes documentation and code staleness cascades. Map all referencing files before removal.
- Verify AI-generated content against actual code. AI hallucinates APIs, class names, and method signatures. Always grep to confirm existence before documenting or referencing.
- Trace the full dependency chain after edits. Changing a definition misses downstream variables and consumers derived from it. Always trace the full chain.
- Trace ALL code paths when verifying correctness. Confirming code exists is not confirming it executes. Always trace early exits, error branches, and conditional skips — not just the happy path.
- When debugging, ask "whose responsibility?" before fixing. Trace whether the bug is in the caller (wrong data) or the callee (wrong handling). Fix at the responsible layer — never patch the symptom site.
- Assume existing values are intentional — ask WHY before changing. Before changing any constant, limit, flag, or pattern: read comments, check git blame, examine surrounding code.
- Verify ALL affected outputs, not just the first. Changes touching multiple stacks require verifying EVERY output. One green check is not all green checks.
- Holistic-first debugging — resist the nearest-attention trap. When investigating any failure, list EVERY precondition first (config, env vars, DB names, endpoints, DI registrations, data preconditions), then verify each against evidence before forming any code-layer hypothesis.
- Surgical changes — apply the diff test. Bug fix: every changed line must trace directly to the bug. Don't restyle or improve adjacent code. Enhancement task: implement improvements AND announce them explicitly.
- Surface ambiguity before coding — don't pick silently. If a request has multiple interpretations, present each with an effort estimate and ask. Never assume the all-records, file-based, or more complex path.
<!-- /SYNC:ai-mistake-prevention -->
Quick Summary
Goal: Teach Claude lessons that persist across sessions by saving to the most relevant reference doc.
Workflow:
- Capture -- Identify the lesson from user instruction or experience
- Route -- Analyze lesson content against Reference Doc Catalog, select best target file
- Save -- Append lesson to the selected file
- Confirm -- Acknowledge what was saved and where
- Enhance -- Run `/prompt-enhance` on modified file(s) to optimize AI attention anchoring
Key Rules:
- Triggers on "remember this", "always do X", "never do Y"
- Triage first: pass Recurrence gate + Auto-fix gate BEFORE routing or saving
- Smart-route to the most relevant file, NOT always `docs/project-reference/lessons.md`
- Check for existing entries before creating duplicates
- Confirm target file with user before writing
Be skeptical. Apply critical thinking, sequential thinking. Every claim needs traced proof and a confidence percentage (act only when confidence is above 80%).
Usage
Add a lesson
/learn always use the validation framework fluent API instead of throwing ValidationException
/learn never call external APIs in command handlers - use Entity Event Handlers
/learn prefer async/await over .then() chains
List lessons
/learn list
Remove a lesson
/learn remove 3
Clear all lessons
/learn clear
Reference Doc Catalog (READ before routing)
Each `docs/project-reference/` file is auto-initialized by the `session-init-docs.cjs` hook and populated by `/scan-*` skills. Understanding their roles is critical for correct routing.
| File | Role & Content | Injected By | Injection Trigger | Scan Skill |
|---|---|---|---|---|
| | Architecture, directory tree, tech stack, module registry, service map | (18 hooks) | Agent spawn | |
| backend-patterns-reference.md | Backend/hook patterns: CJS modules, CQRS, repositories, validation, message bus, background jobs | code-patterns-injector.cjs | Edit/Write backend files | |
| | Frontend patterns: components, state mgmt, API services, styling conventions, directives | | Edit/Write frontend files | |
| | Test architecture: base classes, fixtures, helpers, service-specific setup, test runners | Referenced in config | Test file edits | |
| | Feature doc templates, app-to-service mapping, doc structure conventions | On-demand (skill reads) | Skill activation | |
| code-review-rules.md | Review rules, conventions, anti-patterns, decision trees, checklists | code-review-rules-injector.cjs | Review skill activation | |
| lessons.md | General lessons — fallback catch-all; injected on EVERY prompt (budget-controlled) | lessons-injector.cjs | Every UserPromptSubmit + Edit/Write | Managed by /learn |
| | SCSS/CSS: BEM methodology, mixins, variables, theming, responsive patterns | | Styling file edits | |
| | Design system: tokens overview, component inventory, app-to-doc mapping | | Design file edits | |
| | E2E test patterns: framework, page objects, config, best practices | | E2E file edits | |
| | Domain entities, data models, DTOs, aggregate boundaries, ER diagrams, cross-service sync | | Backend/frontend file edits | |
| | Documentation tree, file counts, doc relationships, keyword-to-doc lookup | On-demand (manual) | Manual reference | |
Key insight: Files injected automatically by hooks have higher visibility — lessons placed there are enforced during edits. Files injected on-demand are only seen when skills explicitly read them. Prefer auto-injected files for high-recurrence lessons.
Smart File Routing (CRITICAL)
Lesson Triage Gate (MANDATORY — run FIRST, before routing or saving)
| Gate | Question | Pass | Fail → Action |
|---|---|---|---|
| Recurrence | "Would this mistake recur in a future session WITHOUT this reminder?" | Yes → continue | No → skip `/learn`; the mistake is situational |
| Auto-fix | "Could an existing review skill catch this automatically?" | No → continue | Yes → skip `/learn`; update the review skill instead |

Both gates must pass. A lesson that review skills already catch adds noise without value. A one-off situational mistake won't be prevented by a persisted rule.
Routing Table
Route to the most relevant file based on lesson content:
| If lesson is about... | Route to | Section hint |
|---|---|---|
| Code review rules, anti-patterns, review checklists, YAGNI/KISS/DRY, naming conventions, review process | code-review-rules.md | Add to most relevant section (anti-patterns, rules, checklists) |
| Backend/hook patterns: CJS modules, CQRS, repositories, entities, validation, message bus, background jobs, migrations, EF Core, MongoDB | backend-patterns-reference.md | Add to relevant section or Anti-Patterns section |
| Frontend Angular/TS patterns: components, stores, forms, API services, BEM, RxJS, directives, pipes | | Add to relevant section or Anti-Patterns section |
| Integration/unit tests: test base classes, fixtures, test helpers, test patterns, assertions, test runners | | Add to relevant section |
| E2E tests: Playwright, Cypress, Selenium, page objects, E2E config, browser automation, visual regression | | Add to relevant section |
| Domain entities, data models, DTOs, aggregates, entity relationships, cross-service data sync, ER diagrams | | Add to Entity Catalog or Relationships section |
| Project structure, directory organization, module boundaries, tech stack choices, service architecture | | Add to relevant architecture section |
| SCSS/CSS styling, BEM methodology, mixins, variables, theming, responsive design, CSS conventions | | Add to relevant styling section |
| Design system, design tokens, component library, UI kit conventions, Figma-to-code patterns | | Add to relevant design section |
| Feature documentation, doc templates, doc structure conventions, app-to-service doc mapping | | Add to relevant conventions section |
| Documentation indexing, doc organization, doc-to-code relationships, doc lookup patterns | | Add to relevant section |
| General lessons, workflow tips, tooling, AI behavior, project conventions, anything not matching above | docs/project-reference/lessons.md | Append as dated list entry |
Prevention Depth Assessment (MANDATORY before saving)
Before saving any lesson, critically evaluate whether a doc update alone is sufficient or a deeper prevention mechanism is needed:
| Prevention Layer | When to use | Example |
|---|---|---|
| Doc update only | One-off awareness, rare edge case, team convention | "Always use fluent validation API" → doc entry |
| Prompt rule (development-rules.md) | Rule that ALL agents must follow on every task (injected on UserPromptSubmit) | "Grep after bulk edits" → prompt rule |
| System Lesson (prompt-injections.cjs) | Universal AI mistake, high recurrence, silent failure, any project | "Re-read files after context compaction" → System Lesson |
| Hook | Automated enforcement, must never be forgotten | "Dedup markers must match" → hook + consistency test |
| Test | Regression prevention, verifiable invariant | "All hooks import from shared module" → test |
| Skill update | Workflow step that should always include this check | "Review changes must check doc staleness" → skill SKILL.md update |
Decision flow:
- Capture the lesson
- Ask: "Could this mistake recur if the AI forgets this lesson?" If yes → needs more than a doc update
- Ask: "Can this be caught automatically by a test or hook?" If yes → recommend hook/test
- Evaluate System Lesson promotion (see below)
- Present options to user with `AskUserQuestion`:
  - "Doc update only" — save to the best-fit reference file (default for most lessons)
  - "Doc + prompt rule" — also add to `development-rules.md` so all agents see it
  - "Doc + System Lesson" — also add to `prompt-injections.cjs` System Lessons (see criteria below)
  - "Full prevention" — plan a hook, test, or shared module to enforce it automatically
- Execute the chosen option. For "Full prevention", create a plan via `/plan` instead of just saving.
System Lesson Promotion (MANDATORY evaluation)
After generalizing a lesson, evaluate whether it qualifies as a System Lesson in `.claude/hooks/lib/prompt-injections.cjs`. System Lessons are injected into EVERY prompt — they are the highest-visibility prevention layer.
Qualification criteria (ALL must be true):
- Universal — Applies to ANY AI coding project, not just this codebase
- High recurrence — AI agents make this mistake repeatedly across sessions without the reminder
- Silent failure — The mistake produces no error/warning; it silently degrades output quality
- Not already covered — No existing System Lesson addresses the same root cause
System Lessons — Universal AI mistake prevention rules injected into EVERY prompt. Stored in `.claude/hooks/lib/prompt-injections.cjs` → "Common AI Mistake Prevention" array. Each must be universal, high-recurrence, and silent-failure. READ `injectAiMistakePrevention()` to check for duplicates before adding.
If qualified: Recommend "Doc + System Lesson" option. On user approval, append the lesson as a new bullet to the System Lessons array in `prompt-injections.cjs` following the existing format: `- **Bold title.** Explanation sentence.`
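For illustration, entries following that format might look like this inside the array. The variable name and surrounding structure are assumptions, not copied from `prompt-injections.cjs`; the two sample lessons are drawn from the prevention rules earlier in this document:

```javascript
// Assumed shape — the actual array in prompt-injections.cjs may differ.
// Each entry is one bullet in the "- **Bold title.** Explanation sentence." format.
const COMMON_AI_MISTAKE_PREVENTION = [
  '- **Verify before citing.** Grep the codebase to confirm any API, class, or method exists before referencing it.',
  '- **Re-read after compaction.** After context compaction, re-read files before editing them — stale memory silently corrupts edits.',
];
```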
If NOT qualified: Explain why (e.g., "Too project-specific", "Already covered by existing System Lesson about X", "Low recurrence — only happens in rare edge cases"). Proceed with doc-only or prompt-rule option.
Lesson Quality Gate (MANDATORY before saving)
Every lesson MUST be root-cause level and generic across any codebase. Apply this 3-step extraction before saving:
Step 1 — Name the FAILURE MODE, not the symptom:
The failure mode is the reasoning or assumption that broke — not what the output looked like.
| Symptom (BAD — reject this) | Failure mode (GOOD — save this) |
|---|---|
| "Used wrong enum value" | "Generated code using an assumed API without verifying it exists in source" |
| "Wrong namespace/import" | "Assumed project setup from convention without reading project-specific config files first" |
| "Happy-path test failed in CI" | "Wrote assertions without tracing what runtime infrastructure the code path requires" |
| "Set properties that don't exist" | "Assumed all types in a hierarchy share the same interface without reading the base class" |
| "Always read file X before Y" | "Assumed execution context without reading the owning layer's contract — fixed at symptom site instead of cause" |
Step 2 — Verify generality:
Does this failure mode apply to ≥3 different contexts or codebases? If only one file or one specific case → go up one abstraction level. A good lesson prevents an entire class of mistakes.
Step 3 — Write as a universal rule:
- Strip ALL project-specific names, file paths, class names, and tool names
- Must be useful on any codebase, any language, any task type
- If multiple mistakes share the same failure mode → consolidate to ONE lesson, not many
- Test: "Would an AI working in Java, Go, or Python on a different project benefit from this?" If yes → good. If no → rewrite.
Anti-pattern examples:
- BAD: "Always check `lib/dedup-constants.cjs` for marker strings" → project-specific path
- GOOD: "When consolidating modules, ensure shared constants are imported from a single source of truth — never define inline duplicates."
- BAD: "Update `.claude/docs/hooks/README.md` after deleting hooks" → project-specific file
- GOOD: "Deleting components causes documentation staleness cascades — map all referencing docs before removal."
- BAD: "Read GlobalUsings.cs before adding usings in *.IntegrationTests" → project-specific file
- GOOD: "Before generating code that uses project conventions (imports, namespaces, annotations), read the project's bootstrap/configuration files for that layer — convention files override framework defaults silently."
Routing Decision Process
- Run Triage Gate — recurrence + auto-fix filters; stop here if either fails
- Read the lesson text — identify keywords and domain
- Apply Lesson Quality Gate — analyze root cause, generalize, verify universality
- Run Prevention Depth Assessment — determine if doc-only or deeper prevention needed
- Match against Routing Table — pick the best-fit file
- Tell the user: "This lesson fits best in `docs/{file}`. Confirm? [Y/n]"
- On confirm — read target file, find the right section, append the lesson
- On reject — ask user which file to use instead
Format by Target File
For `docs/project-reference/lessons.md` (general lessons):
- [YYYY-MM-DD] <lesson text>
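A dated entry in that format can be produced with a small helper like this (the function name is hypothetical, for illustration only):

```javascript
// Format a lesson as a dated lessons.md entry: "- [YYYY-MM-DD] <lesson text>".
function formatLessonEntry(text, date = new Date()) {
  const stamp = date.toISOString().slice(0, 10); // YYYY-MM-DD
  return `- [${stamp}] ${text}`;
}
```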
For pattern/rules files (code-review-rules, backend-patterns, frontend-patterns, integration-test):
- Find the most relevant existing section in the file
- Append the lesson as a rule, anti-pattern entry, or code example
- Use the file's existing format (tables, code blocks, bullet lists)
- If no section fits, append to the Anti-Patterns or general rules section
Budget Enforcement (MANDATORY for docs/project-reference/lessons.md)

`docs/project-reference/lessons.md` is injected into EVERY prompt and EVERY file edit. Token budget must be controlled.
Hard limit: 10000 characters (~3333 tokens). Check BEFORE saving any new lesson.
Workflow when adding to `docs/project-reference/lessons.md`:

- Read file, count characters (`wc -c docs/project-reference/lessons.md`)
- If current + new lesson > 10000 chars → trigger Budget Trim before saving
- If under budget → save normally
Budget Trim process:
- Display all current lessons with char count each
- Evaluate each lesson on two axes:
- Universality — How often does this apply? (every session vs rare edge case)
- Recurrence risk — How likely is the AI to repeat this mistake without the reminder?
- Score each: HIGH (keep as-is), MEDIUM (candidate to condense), LOW (candidate to remove)
- Present to user with `AskUserQuestion`: "Budget exceeded. Recommend removing/condensing these LOW/MEDIUM items: [list]. Approve?"
- On approval: condense MEDIUM items (shorten wording), remove LOW items, then save new lesson
- On rejection: ask user which to remove/condense
Condensing rules:
- Remove examples, keep the rule: "Patterns like X break Y syntax" → just state the rule
- Merge related lessons into one if they share the same root cause
- Target: each lesson ≤ 250 chars (one concise sentence + bold title)
Does NOT apply to: Other routing targets (backend-patterns-reference.md, code-review-rules.md, etc.) — those files have their own size budgets and are injected contextually, not on every prompt.
Behavior
- `/learn <text>` — Route and append lesson to the best-fit file (check budget if target is `lessons.md`)
- `/learn list` — Read and display lessons from ALL 12 target files (show file grouping + char count for `lessons.md`)
- `/learn remove <N>` — Remove lesson from `docs/project-reference/lessons.md` by line number
- `/learn clear` — Clear all lessons from `docs/project-reference/lessons.md` only (confirm first)
- `/learn trim` — Manually trigger Budget Trim on `docs/project-reference/lessons.md`
- File creation — If target file doesn't exist, create with header only
Auto-Inferred Activation
When Claude detects correction phrases in conversation (e.g., "always use X", "remember this", "never do Y", "from now on"), this skill auto-activates. When auto-inferred (not explicit `/learn`), confirm with the user before saving: "Save this as a lesson? [Y/n]"
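A minimal sketch of how such trigger-phrase detection could look — the phrase list comes from this section, while the patterns and function name are assumptions for illustration:

```javascript
// Hypothetical detector for lesson-trigger phrases in a user message.
const TRIGGER_PATTERNS = [
  /\bremember this\b/i,
  /\balways (do|use)\b/i,
  /\bnever (do|call|use)\b/i,
  /\blearn this\b/i,
  /\bfrom now on\b/i,
];

function looksLikeLesson(message) {
  return TRIGGER_PATTERNS.some((re) => re.test(message));
}
```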
Injection
Lessons are injected by the `lessons-injector.cjs` hook on:

- UserPromptSubmit — `docs/project-reference/lessons.md` content (with dedup)
- PreToolUse(Edit|Write|MultiEdit) — `docs/project-reference/lessons.md` content (always)
- Pattern reference files are injected by their respective hooks (`code-patterns-injector.cjs`, `code-review-rules-injector.cjs`, etc.)
Prompt Enhancement (MANDATORY final step)
After saving a lesson to any target file, run `/prompt-enhance` on the modified file(s) to optimize AI attention anchoring and token quality.
When to run:
- After EVERY successful lesson save (regardless of target file)
- Pass the specific file path(s) that were modified
What it does:
- Ensures the new lesson integrates with existing top/bottom summary anchoring
- Optimizes token usage — tightens prose, merges redundant content
- Verifies no content loss from the save operation
How to invoke:
/prompt-enhance docs/project-reference/<modified-file>.md
Skip conditions (do NOT run prompt-enhance if):
- The save was to `lessons.md` AND the file is under 1500 chars (too small to benefit)
- The user explicitly requests "save only, no enhance"
Closing Reminders
- IMPORTANT MUST ATTENTION run Triage Gate FIRST — if recurrence is low OR review skills can catch it, skip `/learn` entirely
- IMPORTANT MUST ATTENTION check Reference Doc Catalog to find the best target file — NOT always `lessons.md`
- IMPORTANT MUST ATTENTION final task is ALWAYS: run `/prompt-enhance <modified-file>` — do NOT mark complete until this runs
- IMPORTANT MUST ATTENTION break work into small todo tasks using `TaskCreate` BEFORE starting
- IMPORTANT MUST ATTENTION prefer auto-injected files for high-recurrence lessons (higher visibility)
<!-- SYNC:critical-thinking-mindset:reminder -->
- MUST ATTENTION apply critical thinking — every claim needs traced proof, confidence >80% to act. Anti-hallucination: never present guess as fact.
<!-- /SYNC:critical-thinking-mindset:reminder -->
<!-- SYNC:ai-mistake-prevention:reminder -->
- MUST ATTENTION apply AI mistake prevention — holistic-first debugging, fix at responsible layer, surface ambiguity before coding, re-read files after compaction.
<!-- /SYNC:ai-mistake-prevention:reminder -->