openclaw-knowledge-coach
OpenClaw-native knowledge retention skill. Imports local documents, generates retrieval practice, evaluates answers, and produces insight cards — all using the host agent's LLM with zero extra API key configuration. Use when users ask to ingest local knowledge, generate practice exercises, or master stored knowledge.
git clone https://github.com/Sibo-Zhao/OpenPraxis
T=$(mktemp -d) && git clone --depth=1 https://github.com/Sibo-Zhao/OpenPraxis "$T" && mkdir -p ~/.claude/skills && cp -r "$T/openclaw-knowledge-coach" ~/.claude/skills/sibo-zhao-openpraxis-openclaw-knowledge-coach && rm -rf "$T"
T=$(mktemp -d) && git clone --depth=1 https://github.com/Sibo-Zhao/OpenPraxis "$T" && mkdir -p ~/.openclaw/skills && cp -r "$T/openclaw-knowledge-coach" ~/.openclaw/skills/sibo-zhao-openpraxis-openclaw-knowledge-coach && rm -rf "$T"
openclaw-knowledge-coach/SKILL.md

OpenClaw Knowledge Coach
An OpenClaw-native skill for local knowledge retention. Import knowledge, generate practice, evaluate answers, and produce insight cards — all powered by the host agent's model, with zero extra API key configuration.
OpenClaw Users (Recommended)
When running inside an OpenClaw agent, the host provides model configuration. No `praxis llm setup` or API key configuration is required.
Install the library:
pip install openpraxis
The skill uses the host agent's LLM capability automatically. Set the environment variable to enable OpenClaw mode:
export OPENPRAXIS_MODE=openclaw
Standalone CLI (Fallback)
For use outside of an OpenClaw agent, configure your own provider:
pip install openpraxis
praxis llm setup
praxis llm show
Environment variables override config file values:
export OPENAI_API_KEY="your_key_here" # or ARK_API_KEY / MOONSHOT_API_KEY / DEEPSEEK_API_KEY based on provider
Core Workflow
- Confirm scope and source
  - Confirm knowledge domains, source folders, and accepted file types.
  - Confirm whether to preserve existing metadata (tags, dates, project names).
- Define import contract
  - Normalize each source into a record with `doc_id`, `title`, `source_path`, `tags`, `created_at`, and `content`.
  - Split long content into chunks with stable IDs such as `doc_id#chunk-001`.
- Import into OpenClaw
  - Ingest normalized records into the local OpenClaw knowledge base.
  - Keep a deterministic mapping between source file and imported IDs for later updates.
- Generate exercises at import time
  - For each chunk, create at least one retrieval exercise.
  - Prefer three exercise types:
    - `free-recall`: ask the user to explain from memory.
    - `qa`: ask direct question-answer pairs.
    - `application`: ask scenario-based transfer questions.
  - Save answer keys and concise grading rubrics.
- Build review queue
  - Group exercises by topic and difficulty.
  - Schedule spaced review windows (for example: day 1, day 3, day 7, day 14).
- Validate quality
  - Reject exercises that can be answered without the imported knowledge.
  - Reject ambiguous or duplicate questions.
  - Ensure every exercise points back to `doc_id` and `chunk_id`.
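The import contract and review-queue steps above can be sketched in plain Python. This is an illustrative sketch only: the record fields and review windows follow the workflow above, but the function names, the `max_chars` parameter, and the interval list are assumptions, not openpraxis APIs.

```python
from datetime import date, timedelta

# Assumed spaced-review windows, matching the example above (day 1, 3, 7, 14).
REVIEW_OFFSETS = [1, 3, 7, 14]

def chunk_document(doc_id, content, max_chars=800):
    """Split content into chunks with stable, deterministic IDs."""
    chunks = []
    for start in range(0, len(content), max_chars):
        chunk_no = start // max_chars + 1
        chunks.append({
            "chunk_id": f"{doc_id}#chunk-{chunk_no:03d}",
            "text": content[start:start + max_chars],
        })
    return chunks

def review_schedule(imported_on):
    """Spaced review dates for a chunk imported on `imported_on`."""
    return [imported_on + timedelta(days=d) for d in REVIEW_OFFSETS]
```

Because chunk IDs are derived only from `doc_id` and position, re-importing the same file maps back to the same chunk IDs, which keeps the source-to-ID mapping deterministic across runs.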
CLI Command Playbook
Run this sequence when the user asks to import local knowledge and create practice:
- Add a local file
  `praxis add "/absolute/path/to/note.md" --type report`
- List recent inputs and capture the target `input_id`
  `praxis list --limit 20`
- Force-generate a new practice scene for an existing input
  `praxis practice <input_id>`
- Submit answer by file (preferred for deterministic runs)
  `praxis answer <scene_id> --file "/absolute/path/to/answer.md"`
- Inspect pipeline results and insight cards
  `praxis show <input_id>`
  `praxis insight <input_id>`
- Export insights to Markdown/JSON
  `praxis export --format md --output "/absolute/path/to/insights.md"`
  `praxis export --format json --output "/absolute/path/to/insights.json"`
Agent Execution Rules
- Prefer `praxis add` for import and initial exercise generation.
- Parse IDs from CLI output, then chain `praxis practice` and `praxis answer`.
- Use `praxis answer --file` instead of interactive stdin in automation flows.
- If duplicate content is skipped, rerun with `praxis add ... --force` when the user wants reprocessing.
- Use a one-shot runtime model override only when requested:
  `praxis --provider openai --model gpt-4.1-mini add "/absolute/path/to/note.md"`
- For image notes, pass the image file path directly to `praxis add`; OCR extraction is built in.
- Always finish with `praxis show` plus `praxis insight` or `praxis export` so the user gets concrete output artifacts.
Output Contract
When executing tasks with this skill, always provide these outputs:
- Import summary: files processed, chunks created, failures.
- Exercise summary: counts by type/topic/difficulty.
- Review plan: next due batches and estimated workload.
- Traceability map: `source -> doc_id -> chunk_id -> exercise_id`.
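The traceability map can be represented as nested dictionaries, one level per hop in the chain above. A minimal sketch, assuming exercises are dicts carrying `source_path`, `doc_id`, `chunk_id`, and `exercise_id` as produced at import time; the builder function itself is hypothetical, not part of openpraxis:

```python
def build_traceability(exercises):
    """Group exercise IDs by source -> doc_id -> chunk_id."""
    trace = {}
    for ex in exercises:
        # Walk/create the nested levels, then append the exercise ID.
        trace.setdefault(ex["source_path"], {}) \
             .setdefault(ex["doc_id"], {}) \
             .setdefault(ex["chunk_id"], []) \
             .append(ex["exercise_id"])
    return trace
```

With this shape, answering "which exercises came from this chunk?" is a direct lookup, and the reverse question only requires a linear scan of the leaves.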
Exercise Format
Use this compact JSON-like structure per exercise:
{
  "exercise_id": "ex-...",
  "doc_id": "...",
  "chunk_id": "...",
  "type": "free-recall | qa | application",
  "question": "...",
  "answer_key": "...",
  "rubric": ["point 1", "point 2"],
  "difficulty": "easy | medium | hard",
  "next_review": "YYYY-MM-DD"
}
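A minimal validator for this structure can be written against the format above. The required keys and allowed values come directly from the structure; the function name and error-reporting style are assumptions for illustration:

```python
from datetime import datetime

ALLOWED_TYPES = {"free-recall", "qa", "application"}
ALLOWED_DIFFICULTY = {"easy", "medium", "hard"}
REQUIRED_KEYS = {"exercise_id", "doc_id", "chunk_id", "type", "question",
                 "answer_key", "rubric", "difficulty", "next_review"}

def validate_exercise(ex):
    """Return a list of problems; an empty list means the exercise is valid."""
    problems = [f"missing key: {k}" for k in sorted(REQUIRED_KEYS - ex.keys())]
    if ex.get("type") not in ALLOWED_TYPES:
        problems.append(f"bad type: {ex.get('type')!r}")
    if ex.get("difficulty") not in ALLOWED_DIFFICULTY:
        problems.append(f"bad difficulty: {ex.get('difficulty')!r}")
    if not isinstance(ex.get("rubric"), list) or not ex.get("rubric"):
        problems.append("rubric must be a non-empty list")
    try:
        # Enforce the YYYY-MM-DD date format for scheduling.
        datetime.strptime(ex.get("next_review", ""), "%Y-%m-%d")
    except ValueError:
        problems.append("next_review must be YYYY-MM-DD")
    return problems
```

Running this check as part of the "Validate quality" step catches malformed exercises before they enter the review queue.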
For more generation patterns, read `references/exercise-patterns.md`.