NekoCore-OS book-ingestion
Extract characters from a book and create them as NekoCore OS entities with POV-isolated memories. Supports main-only, all, or specific character selection.
install
source · Clone the upstream repo
git clone https://github.com/voardwalker-code/NekoCore-OS
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/voardwalker-code/NekoCore-OS "$T" && mkdir -p ~/.claude/skills && cp -r "$T/project/MA/MA-skills/book-ingestion" ~/.claude/skills/voardwalker-code-nekocore-os-book-ingestion && rm -rf "$T"
manifest: project/MA/MA-skills/book-ingestion/SKILL.md
Book-to-Entity Ingestion Skill
Extract characters from a book (novel, story, transcript) and create each as a NekoCore OS entity with isolated first-person memories.
When This Skill Applies
- User wants to ingest a book and extract characters as entities
- User mentions book characters, novel characters, character extraction
- User wants to turn a story's cast into NekoCore OS entities
- User mentions "book to entity", "extract characters from book", "ingest novel"
Workflow Overview
Book ingestion is a multi-phase pipeline:
1. Upload book text → server chunks it into ~2500-char segments
2. Process all chunks in batches → discover characters, aliases, roles, traits, scene appearances
3. Classify characters (main/supporting/minor/background)
4. Present the list to the user — wait for selection (Main Only / All / Specific names)
5. For each selected character, extract POV-isolated memories from ONLY their scenes
6. Create each character as an entity via the NekoCore OS API
7. Inject memories chronologically, with cognitive ticks between chapters
8. Produce a summary report
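The chunking step above is performed server-side; as a rough illustration of what ~2500-char segmentation looks like, here is a minimal sketch that splits on paragraph boundaries. The paragraph-boundary heuristic is an assumption — the server's actual splitting logic is not documented here.

```python
def chunk_book(text: str, target: int = 2500) -> list[str]:
    """Split book text into ~target-char chunks, breaking on paragraph
    boundaries where possible (a guess at the server's behavior)."""
    paragraphs = text.split("\n\n")
    chunks, current = [], ""
    for para in paragraphs:
        # Flush the current chunk once adding another paragraph would
        # push it past the target size.
        if current and len(current) + len(para) + 2 > target:
            chunks.append(current)
            current = para
        else:
            current = current + "\n\n" + para if current else para
    if current:
        chunks.append(current)
    return chunks

# Synthetic 20-paragraph "book", ~413 chars per paragraph
book = "\n\n".join(f"Paragraph {i}. " + "x" * 400 for i in range(20))
chunks = chunk_book(book)
print(len(chunks))  # 4 chunks of roughly 2500 chars each
```

Chunk indices from this step are what the `GET` chunk-metadata and chunk-text endpoints refer to later.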
API Endpoints
MA Server (http://localhost:3850)

| Endpoint | Method | Purpose |
|---|---|---|
| `/api/book/upload` | POST | Upload + chunk book text. Body: … |
| … | GET | List chunk metadata (index, preview, charCount) |
| … | GET | Read one chunk's full text |

NekoCore OS (http://localhost:3847)

| Endpoint | Method | Purpose |
|---|---|---|
| … | POST | Create entity. Body: … |
| … | POST | Inject one memory. Body: … |
| … | POST | Process memories through cognitive pipeline |
| … | GET | Read neurochemistry, beliefs, mood, persona |
Character Selection Modes
When presenting the character list, offer three modes:
- Main Characters Only — protagonist, antagonist, love interest: characters appearing in >25% of chunks.
- All Characters — all non-background characters (main + supporting + minor).
- Specific Characters — user names exactly which characters to extract.
CRITICAL: Always wait for user selection before proceeding to memory extraction.
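Classification can be derived directly from the scene-appearance data gathered during discovery. A sketch follows; only the >25%-of-chunks = main rule comes from this document — the supporting/minor thresholds are illustrative assumptions.

```python
def classify_characters(appearances: dict[str, set[int]],
                        total_chunks: int) -> dict[str, str]:
    """Tier characters by the fraction of chunks they appear in.
    >25% = main (from the skill text); other cutoffs are assumed."""
    tiers = {}
    for name, chunks in appearances.items():
        frac = len(chunks) / total_chunks
        if frac > 0.25:
            tiers[name] = "main"
        elif frac > 0.10:          # assumed threshold
            tiers[name] = "supporting"
        elif frac > 0.02:          # assumed threshold
            tiers[name] = "minor"
        else:
            tiers[name] = "background"
    return tiers

appearances = {
    "Alice": set(range(60)),   # 60 of 100 chunks -> main
    "Bob": set(range(15)),     # 15 of 100 -> supporting
    "Innkeeper": {3, 4, 5},    # 3 of 100 -> minor
    "Guard": {99},             # 1 of 100 -> background
}
print(classify_characters(appearances, 100))
```

Whatever the tiers, the selection itself must still come from the user — the CRITICAL rule above is not something this function replaces.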
POV Isolation Rules — MANDATORY
These rules define the entire purpose of this system:
- Scene presence required — a character only gets memories from chunks where they appear
- First-person only — every memory is written from that character's perspective
- No cross-contamination — Character A NEVER knows about scenes they weren't in
- Internal thoughts — belong ONLY to the POV narrator character
- Shared scenes — generate SEPARATE memories for each present character, with different emotional framing and importance
- Importance varies — the same event can be high-importance for one character and trivial for another
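The scene-presence rule above is mechanical and worth enforcing in code: a character's candidate memory sources are exactly the chunks they appear in, nothing else. A minimal sketch, assuming discovery produced a chunk-index → character-set mapping (the mapping shape is an assumption):

```python
def scenes_for(character: str,
               chunk_characters: dict[int, set[str]]) -> list[int]:
    """POV isolation: a character may only receive memories from
    chunks where they are present."""
    return sorted(i for i, cast in chunk_characters.items()
                  if character in cast)

chunk_characters = {
    0: {"Alice", "Bob"},
    1: {"Alice"},
    2: {"Bob", "Carol"},
    3: {"Alice", "Carol"},
}
print(scenes_for("Alice", chunk_characters))  # [0, 1, 3]
print(scenes_for("Bob", chunk_characters))    # [0, 2]
```

Note that chunk 0 is shared: per the shared-scenes rule, Alice and Bob each get their own separately written memory of it, with their own emotional framing and importance.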
Memory Schema
Each injected memory follows this format:
```json
{
  "content": "First-person memory text — ONLY what this character experienced/witnessed",
  "type": "episodic",
  "emotion": "joy|wonder|love|hope|pride|gratitude|sadness|fear|anger|grief|longing|nostalgia|curiosity|neutral|resignation|melancholic|determined|content",
  "topics": ["tag1", "tag2"],
  "importance": 0.3,
  "narrative": "Brief third-person summary",
  "phase": "chapter_1"
}
```
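Since every memory is injected individually, a pre-flight check against this schema catches mistakes before they reach the API. A minimal validator sketch; the 0–1 range for `importance` is an assumption inferred from the example value:

```python
EMOTIONS = {
    "joy", "wonder", "love", "hope", "pride", "gratitude", "sadness",
    "fear", "anger", "grief", "longing", "nostalgia", "curiosity",
    "neutral", "resignation", "melancholic", "determined", "content",
}
FIELDS = ("content", "type", "emotion", "topics",
          "importance", "narrative", "phase")

def validate_memory(mem: dict) -> list[str]:
    """Return a list of schema violations (empty list = valid)."""
    errors = [f"missing field: {f}" for f in FIELDS if f not in mem]
    if mem.get("emotion") not in EMOTIONS:
        errors.append(f"unknown emotion: {mem.get('emotion')}")
    if not 0.0 <= mem.get("importance", -1) <= 1.0:
        errors.append("importance must be in [0, 1]")  # assumed range
    return errors

mem = {
    "content": "I watched the ship burn from the cliff, helpless.",
    "type": "episodic",
    "emotion": "grief",
    "topics": ["shipwreck", "loss"],
    "importance": 0.9,
    "narrative": "Character witnesses the shipwreck",
    "phase": "chapter_3",
}
print(validate_memory(mem))  # [] -> valid
```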
IMPORTANT
- Always upload the book FIRST via /api/book/upload before any processing
- Process ALL chunks during discovery — do not skip chunks
- Never create entity files manually with ws_write — always use the NekoCore OS API
- Use mem_* prefix memories (the API handles this) — NOT doc_*, which are excluded from retrieval
- Run cognitive ticks between chapter groups (every ~5-10 memories per character)
- After all memories, inject one semantic relationship memory per significant character relationship
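The chronological-injection-plus-ticks rule can be planned before any API call is made. A sketch that orders memories by phase and interleaves tick steps, ticking every 5 memories (the text allows ~5-10); the lexicographic phase sort is a simplification that works for single-digit chapter labels like `chapter_1`:

```python
def injection_plan(memories: list[dict],
                   tick_every: int = 5) -> list[tuple[str, object]]:
    """Order memories chronologically by phase and interleave
    cognitive-tick steps every `tick_every` injections."""
    ordered = sorted(memories, key=lambda m: m["phase"])
    plan: list[tuple[str, object]] = []
    for i, mem in enumerate(ordered, 1):
        plan.append(("inject", mem))
        if i % tick_every == 0:
            plan.append(("tick", None))
    return plan

# 3 chapters x 4 memories = 12 memories for one character
mems = [{"phase": f"chapter_{c}", "content": f"memory {i}"}
        for c in range(1, 4) for i in range(4)]
plan = injection_plan(mems)
print(sum(1 for step, _ in plan if step == "tick"))  # 2 ticks
```

Each `inject` step maps to the memory-injection POST and each `tick` to the cognitive-pipeline POST; the final semantic relationship memories would be appended after this plan completes.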