install
source · Clone the upstream repo
git clone https://github.com/ai-analyst-lab/ai-analyst
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/ai-analyst-lab/ai-analyst "$T" && mkdir -p ~/.claude/skills && cp -r "$T/.claude/skills/log-correction" ~/.claude/skills/ai-analyst-lab-ai-analyst-log-correction && rm -rf "$T"
manifest: .claude/skills/log-correction/skill.md
source content
Skill: Log Correction
Purpose
Record analyst mistakes and their fixes so future analyses learn from past errors. Manual counterpart to automatic feedback capture.
When to Use
- User says "log a correction", "that was wrong because...", or similar
- Feedback-capture skill routes here for detailed correction entry
- After discovering and fixing an error mid-analysis
Instructions
Step 1: Gather Details
Extract from conversation context or ask the user:
- What was wrong? — One-sentence description of the error
- What is the correct answer? — The fix or corrected approach
- Which dataset/tables? — Dataset name and affected table(s)
- How severe? — critical (wrong numbers shared) | high (changes conclusions) | medium (directionally correct) | low (no impact)
- SQL before/after? — If the error involved a query, capture both versions
If any required field is unclear, ask the user. Do not guess severity.
Step 2: Categorize
Assign one category based on the error type:
| Category | Description |
|---|---|
| Wrong query | Bad join, missing filter, incorrect aggregation |
| Wrong metric definition | Numerator/denominator error, wrong time window |
| Wrong column or table reference | Stale schema, misnamed field |
| Flawed reasoning | Simpson's paradox missed, survivorship bias, wrong comparison |
| Other | Anything that does not fit the above |
Step 3: Write the Correction
- Read `.knowledge/corrections/index.yaml` using `safe_read_yaml()`
- Derive next ID: if `last_correction_id` is null, use `CORR-001`; otherwise parse the numeric suffix, increment, and zero-pad to 3 digits
- Build the entry following `.knowledge/corrections/log.template.yaml`:

```yaml
- id: "CORR-{N}"
  date: "{YYYY-MM-DD}"
  severity: "{severity}"
  category: "{category}"
  dataset: "{dataset_name}"
  tables: ["{table1}", "{table2}"]
  description: "{what was wrong}"
  fix: "{what the correct approach is}"
  sql_before: "{original query, if applicable, else null}"
  sql_after: "{corrected query, if applicable, else null}"
  prevented_by: "{which validation layer should have caught this}"
```

- Read `.knowledge/corrections/log.yaml` using `safe_read_yaml()`
- Append the new entry to the `corrections` list
- Write back using `atomic_write_yaml()`
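The ID-derivation rule above can be sketched as a small pure function (a minimal sketch; `next_correction_id` is a hypothetical helper name, not a function from the repo):

```python
import re

def next_correction_id(last_correction_id):
    """Derive the next correction ID.

    If no prior ID exists, start at CORR-001; otherwise parse the
    numeric suffix, increment, and zero-pad to three digits.
    """
    if last_correction_id is None:
        return "CORR-001"
    # Take the trailing run of digits (e.g. "007" from "CORR-007").
    suffix = int(re.search(r"(\d+)$", last_correction_id).group(1))
    return f"CORR-{suffix + 1:03d}"
```

Note that zero-padding to three digits still works past CORR-099: `{:03d}` pads to a *minimum* of three digits, so CORR-099 rolls over to CORR-100 rather than truncating.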
Step 4: Update Index
- Read `.knowledge/corrections/index.yaml` (already loaded in Step 3)
- Increment `total_corrections`
- Increment the matching `by_severity.{severity}` counter
- Increment `by_category.{category}` (create the key if it does not exist)
- Set `last_correction_id` to the new ID
- Set `last_updated` to today's date
- Write back using `atomic_write_yaml()`
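The index increments above amount to a handful of dictionary updates (a minimal sketch; `update_index` is a hypothetical helper name, and the index is represented as a plain dict as it would be after `safe_read_yaml()`):

```python
import datetime

def update_index(index, new_id, severity, category):
    """Apply the Step 4 increments in place and return the index dict."""
    index["total_corrections"] = index.get("total_corrections", 0) + 1
    # Count by severity; the severity keys are assumed to already exist.
    index.setdefault("by_severity", {})
    index["by_severity"][severity] = index["by_severity"].get(severity, 0) + 1
    # Count by category, creating the key if it does not exist.
    index.setdefault("by_category", {})
    index["by_category"][category] = index["by_category"].get(category, 0) + 1
    index["last_correction_id"] = new_id
    index["last_updated"] = datetime.date.today().isoformat()
    return index
```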
Step 5: Confirm
Report to the user:
```
Correction logged: {id}
Severity: {severity} | Category: {category}
Description: {description}
Fix: {fix}
Future analyses will check for this pattern during validation.
```
Rules
- Never overwrite existing corrections -- always append
- Always read current state before writing (no blind overwrites)
- If `log.yaml` or `index.yaml` is missing or corrupt, create from scratch with schema_version 1
- SQL snippets in `sql_before`/`sql_after` should be trimmed to the relevant clause, not the entire multi-hundred-line query
- `prevented_by` should reference a specific validation layer: structural, logical, business-rules, Simpson's check, or source tie-out
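The "no blind overwrites" rule relies on `atomic_write_yaml()` never leaving a half-written file. One common way to get that guarantee is write-to-temp-then-rename; the sketch below shows that pattern (the repo's actual helper is not shown in this document, so this is an assumed implementation, and `atomic_write_text` is a hypothetical name):

```python
import os
import tempfile

def atomic_write_text(path, text):
    """Write text to path atomically: write a temp file in the same
    directory, then rename over the target. A crash mid-write leaves
    the original file untouched."""
    directory = os.path.dirname(os.path.abspath(path)) or "."
    fd, tmp = tempfile.mkstemp(dir=directory)
    try:
        with os.fdopen(fd, "w") as f:
            f.write(text)
        # os.replace is atomic on POSIX when src and dst share a filesystem,
        # which is why the temp file is created in the target's directory.
        os.replace(tmp, path)
    except Exception:
        os.unlink(tmp)
        raise
```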
Edge Cases
- No SQL involved: Set `sql_before` and `sql_after` to null
- Dataset unknown: Set `dataset` to "unknown" and note in description
- Duplicate correction: Still log it -- repeated errors signal a systemic gap
- Correction to a correction: Log as a new entry referencing the prior ID in description