git clone https://github.com/GeorgeDoors888/GB-Power-Market-JJ
T=$(mktemp -d) && git clone --depth=1 https://github.com/GeorgeDoors888/GB-Power-Market-JJ "$T" && mkdir -p ~/.claude/skills && cp -r "$T/openclaw-skills/skills/ankechenlab-node/amber-hunter" ~/.claude/skills/georgedoors888-gb-power-market-jj-amber-hunter && rm -rf "$T"
T=$(mktemp -d) && git clone --depth=1 https://github.com/GeorgeDoors888/GB-Power-Market-JJ "$T" && mkdir -p ~/.openclaw/skills && cp -r "$T/openclaw-skills/skills/ankechenlab-node/amber-hunter" ~/.openclaw/skills/georgedoors888-gb-power-market-jj-amber-hunter && rm -rf "$T"
`openclaw-skills/skills/ankechenlab-node/amber-hunter/SKILL.md`

Amber-Hunter Skill

Gives any AI client long-term memory: captures, encrypts, and recalls personal context across sessions.

Version: 1.2.30 | 2026-04-04
Tags: ai-memory | second-brain | local-encrypted | proactive-recall | cross-platform | context-management | RAG | long-term-memory | AI-personal-assistant | privacy-first
amber-hunter runs on the user's local machine (Mac / Linux / Windows). Local AI clients communicate via
localhost:18998. External AI clients (ChatGPT, Claude.ai) use the cloud API at huper.org/api.
What It Does
Amber-Hunter is the capture and recall layer of Huper琥珀 — a personal memory protocol that works across any AI client and any platform.
- AI long-term memory — gives ChatGPT, Claude, and any AI client persistent context across conversations
- Proactive capture — AI-initiated writes via `/ingest`; the user reviews and approves before memories are stored
- Instant recall — `/recall?q=<query>` retrieves relevant past memories before responding (hybrid semantic + keyword search)
- Second brain — builds a personal knowledge base that survives context windows and session boundaries
- E2E encrypted — AES-256-GCM, master_password in OS keychain, never uploaded in plaintext
- Cross-platform — macOS / Windows / Linux (desktop + headless server)
- Cloud sync — optional encrypted upload to huper.org for cross-device access
- RAG-ready — `/recall` endpoint returns structured context for Retrieval-Augmented Generation pipelines
Memory Category System (v1.1.9+)
琥珀 uses a two-level taxonomy: category (8 fixed domains) + tags (specific labels).
The 8 Categories
| Category | Emoji | Label (zh) | Covers |
|---|---|---|---|
| thought | 💭 | 想法 | Fleeting ideas, insights, eureka moments |
| learning | 📖 | 学习 | Reading notes, courses, new knowledge |
| decision | 🎯 | 决策 | Choices made, directions set |
| reflection | 🌱 | 成长 | Reflections, reviews, emotional records |
| people | 🤝 | 关系 | Conversations with others, notes about people |
| life | 🏃 | 生活 | Health, food, daily observations |
| creative | 🎨 | 创意 | Design ideas, things to build |
| dev | 💻 | 开发 | All developer-specific content (code, errors, APIs, etc.) |
Auto-detection Keywords
The system auto-tags based on content keywords. AI clients should also suggest a category when calling `/ingest`:

- thought → "想到", "突然想", "realize", "just thought"
- learning → "读了", "看了", "reading", "book says"
- decision → "决定", "选择了", "decided", "going with"
- reflection → "反思", "复盘", "reflecting", "looking back"
- people → "和...聊", "talked to", "met with"
- life → "运动", "睡眠", "sleep", "exercise"
- creative → creative/design keywords
- dev → python/js/git/docker/api/sql/error keywords (all existing dev rules)
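As an illustrative sketch (not the shipped detector, which has fuller keyword lists plus an LLM fallback via `/classify`), the mapping above amounts to a first-match lookup:

```python
# Illustrative sketch of keyword-based category auto-detection.
# Keyword lists mirror the mapping above but are abbreviated;
# "creative" is omitted because the source only names it generically.
CATEGORY_KEYWORDS = {
    "thought":    ["想到", "突然想", "realize", "just thought"],
    "learning":   ["读了", "看了", "reading", "book says"],
    "decision":   ["决定", "选择了", "decided", "going with"],
    "reflection": ["反思", "复盘", "reflecting", "looking back"],
    "people":     ["talked to", "met with"],
    "life":       ["运动", "睡眠", "sleep", "exercise"],
    "dev":        ["python", "git", "docker", "api", "sql", "error"],
}

def detect_category(text: str, default: str = "thought") -> str:
    """Return the first category whose keywords appear in the text."""
    lowered = text.lower()
    for category, keywords in CATEGORY_KEYWORDS.items():
        if any(kw in lowered for kw in keywords):
            return category
    return default
```

Dictionary order matters here: more specific signals (decision, reflection) are checked before the broad dev keyword net.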
Multi-Client Integration Guide
Which endpoint to use
| AI Client | Network | Endpoint | Auth |
|---|---|---|---|
| openclaw | localhost | `http://localhost:18998` | Bearer token |
| Claude Code | localhost | `http://localhost:18998` | Bearer token |
| Claude in Cowork | localhost (Desktop Commander) | `http://localhost:18998` | Bearer token |
| ChatGPT | internet (cloud) | `https://huper.org/api` | User JWT / API key |
| Claude.ai | internet (cloud) | `https://huper.org/api` | User JWT / API key |
Get the local API token
curl http://localhost:18998/token # → {"api_key": "ahk_xxxx..."}
What's Worth Capturing — Judgment Rules
Use these rules when deciding whether to call `/ingest` during a conversation:
| Signal | Example | confidence | review_required |
|---|---|---|---|
| save_request | "记住这个" / "save this" / "提醒我" | 1.0 | false |
| decision | "决定用 SQLite" / "we're going with plan B" / "用 FastAPI" | 0.9 | true |
| preference | "我更喜欢..." / "I prefer TypeScript" | 0.85 | true |
| personal_fact | 我的名字是... / 我住在... / 我在...工作 | 0.8 | true |
| summary | "总结一下..." / "key takeaways" / "tl;dr" | 0.7 | true |
| insight | "没想到..." / "discovered that" / "game changer" | 0.6 | true |
Proactive Hook (v1.2.13): `handler.js/ts` auto-detects these 6 signals from agent:response events and calls `/ingest` with `review_required: true`. All captured signals appear in the review queue before becoming permanent memories.

Default behavior: when in doubt, set `review_required: true`. The user reviews in the dashboard and accepts/rejects. The accept/reject history improves future judgment.
Never capture: conversation scaffolding ("can you help me"), ephemeral context ("right now I need"), common knowledge, task details that won't recur.
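The judgment table above can be sketched as a single helper (hypothetical code; the phrase lists are abbreviated from the table, and the real hook in handler.js uses richer detection):

```python
# Sketch of the capture-judgment table as data.
# Each entry: (signal, trigger phrases, confidence, review_required).
SIGNALS = [
    ("save_request",  ["记住这个", "save this", "提醒我"],                1.0,  False),
    ("decision",      ["决定", "going with"],                             0.9,  True),
    ("preference",    ["我更喜欢", "i prefer"],                           0.85, True),
    ("personal_fact", ["我的名字是", "我住在"],                           0.8,  True),
    ("summary",       ["总结一下", "tl;dr", "key takeaways"],             0.7,  True),
    ("insight",       ["没想到", "discovered that", "game changer"],      0.6,  True),
]

def judge(message: str):
    """Return (signal, confidence, review_required), or None when nothing
    in the message looks worth capturing (scaffolding, ephemeral context)."""
    lowered = message.lower()
    for signal, phrases, confidence, review in SIGNALS:
        if any(p in lowered for p in phrases):
            return signal, confidence, review
    return None
```

Explicit save requests are the only signal that skips review, matching the table's `review_required=false` row.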
API Endpoints (v1.2.9)
Core
| Endpoint | Method | Auth | Description |
|---|---|---|---|
| `/status` | GET | none | Service health + capsule_count + queue_pending + last_sync + semantic_model_loaded |
| `/` | GET | none | Root info + version |
| `/token` | GET | localhost only | Get local API key |
| `/memories` | GET | localhost only | Local memory snapshot (no auth required) |
Memory Retrieval
| Endpoint | Method | Auth | Description |
|---|---|---|---|
| `/recall` | GET | Bearer / ?token= | Retrieve relevant memories; hybrid keyword + semantic mode; supports prefix matching; when an insight cache exists for the matched path, a compressed summary is returned first (v1.2.17); returns category/source_type |
| `/rerank` | POST | Bearer / ?token= | LLM re-ranks candidate memories |
| | PATCH | Bearer / ?token= | Increment capsule access count (updates hotness) |
| `/classify` | GET | none | Topic classification; keyword primary, LLM fallback |
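Hybrid recall blends keyword overlap with embedding similarity at the `0.4×keyword + 0.6×semantic` weights stated in the v1.2.3 changelog. A minimal sketch (`keyword_score` and `hybrid_score` are hypothetical names for illustration):

```python
# Sketch of the hybrid recall score: 0.4 * keyword + 0.6 * semantic.
# Weights come from the v1.2.3 changelog; the real scorer also mixes
# in WAL signals and LanceDB vector retrieval in later versions.
def keyword_score(query: str, memo: str) -> float:
    """Fraction of query terms that appear in the memo text."""
    terms = query.lower().split()
    if not terms:
        return 0.0
    hits = sum(1 for t in terms if t in memo.lower())
    return hits / len(terms)

def hybrid_score(query: str, memo: str, semantic_sim: float) -> float:
    """Blend keyword overlap with a semantic-model cosine similarity
    assumed to lie in [0, 1]."""
    return 0.4 * keyword_score(query, memo) + 0.6 * semantic_sim
```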
Memory Writes
| Endpoint | Method | Auth | Description |
|---|---|---|---|
| `/capsules` | GET | Bearer | List local capsules; returns category/source_type |
| | POST | Bearer | Create capsule manually |
| | GET | Bearer | Get capsule by ID |
| | DELETE | Bearer | Delete capsule |
| `/ingest` | POST | Bearer / ?token= | AI pushes a memory: written directly to a capsule when confidence ≥ 0.95 and review_required=false, otherwise queued |
| | POST | Bearer / ?token= | Structured LLM extraction; returns extracted memories |
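The `/ingest` routing rule reduces to one predicate; a minimal sketch:

```python
# Sketch of the /ingest routing rule described above: write straight
# to a capsule only when confidence >= 0.95 AND review is not required;
# everything else is held in the review queue.
def route_ingest(confidence: float, review_required: bool) -> str:
    if confidence >= 0.95 and not review_required:
        return "capsule"   # direct write, no review
    return "queue"         # user accepts/rejects in the dashboard
```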
Queue Management
| Endpoint | Method | Auth | Description |
|---|---|---|---|
| `/queue` | GET | Bearer / ?token= | List pending memories awaiting review |
| | POST | Bearer / ?token= | Accept: writes to capsules |
| | POST | Bearer / ?token= | Dismiss: sets status=rejected |
| | POST | Bearer / ?token= | Edit then accept: writes the modified memory to capsules |
| | GET | Bearer / ?token= | Terminal-friendly queue list (v1.2.9) |
| | POST | Bearer / ?token= | Approve/reject a queue item from the CLI (v1.2.9) |
Session Context (proactive capture)
| Endpoint | Method | Auth | Description |
|---|---|---|---|
| | GET/POST | Bearer / ?token= | Capture current dev session context |
| | GET | Bearer | Get current session summary |
| | GET | Bearer | Get open files in current session |
| | GET | Bearer | Get preloaded memories for the current scene (v1.2.19) |
DID Identity (v1.2.20 — multi-device)
| Endpoint | Method | Auth | Description |
|---|---|---|---|
| | POST | Bearer | Generate mnemonic + derive device key, saved locally (mnemonic shown once) |
| | GET | Bearer | Check whether local DID identity is configured |
| | POST | Bearer | Register device public key with the cloud (cloud account must have DID set up) |
Sync & Config
| Endpoint | Method | Auth | Description |
|---|---|---|---|
| | GET | Bearer / ?token= | Sync to huper.org cloud |
| `/config` | GET/POST | Bearer / ?token= | Read/set config (auto_sync etc.) |
| `/config/llm` | GET/PUT | Bearer / ?token= | Read/set LLM provider (minimax/openai/claude/local) |
Localhost-only (security restricted)
| Endpoint | Method | Description |
|---|---|---|
| | POST | Set master_password (stored in OS keychain) |
| | POST | Update the huper.org API key in config |
/ingest Request Format

```
POST http://localhost:18998/ingest?token={api_key}
Content-Type: application/json

{
  "memo": "Anke prefers SQLite over Postgres for simpler deployment",
  "context": "During database selection discussion for amber project",
  "category": "decision",
  "tags": "decided,database",
  "source": "claude_cowork",
  "confidence": 0.9,
  "review_required": true,
  "agent_tag": "openclaw"
}
```

`agent_tag` (v2.0.0) is optional; it adds an `#agent:openclaw` tag used for color-coding in the UI.
Response:
```
// Goes to review queue:
{"queued": true, "queue_id": "abc123", "category": "decision", "source_type": "ingest"}

// Written directly (confidence ≥ 0.95 and review_required=false):
{"queued": false, "capsule_id": "xyz456", "category": "decision", "source_type": "ingest"}

// First ingest (capsule_count == 0, v2.0.0); message says "This is your first memory!":
{"queued": false, "capsule_id": "xyz456", "welcome": true, "message": "这是你的第一条记忆!...", "sample_count": 3}
```
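A client can branch on the `queued` flag to handle both response shapes; a minimal sketch (hypothetical helper, not part of the API):

```python
# Sketch of handling the two /ingest response shapes shown above:
# queued for review vs. written directly as a capsule.
def handle_ingest_response(resp: dict) -> str:
    if resp.get("queued"):
        return f"pending review (queue_id={resp['queue_id']})"
    return f"stored (capsule_id={resp['capsule_id']})"
```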
LLM Provider Configuration (v1.2.1+)
amber-hunter supports multiple LLM providers:
| Provider | Config key | Notes |
|---|---|---|
| MiniMax | `minimax` | Default; auto-detects API key from OpenClaw config |
| OpenAI | `openai` | GPT-4o mini etc. |
| Claude | `claude` | Claude 3.5 Haiku etc. |
| Local | `local` | Ollama / LM Studio |
```
# Set provider
curl -X PUT "http://localhost:18998/config/llm?token={api_key}" \
  -H "Content-Type: application/json" \
  -d '{"provider": "openai"}'

# Get current provider
curl "http://localhost:18998/config/llm?token={api_key}"
```
Usage Patterns
openclaw / Claude Code
```
# 1. At conversation start: retrieve relevant context
TOKEN=$(curl -s http://localhost:18998/token | python3 -c "import sys,json; print(json.load(sys.stdin)['api_key'])")
curl "http://localhost:18998/recall?token=$TOKEN&q=YOUR_QUERY&limit=3"

# 2. During conversation: push a memory when something worth keeping surfaces
curl -X POST "http://localhost:18998/ingest?token=$TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "memo": "User decided to use SQLite for simpler ops",
    "category": "decision",
    "tags": "decided",
    "source": "claude_code",
    "confidence": 0.9,
    "review_required": true
  }'

# 3. End of conversation: auto-extract 1-2 key takeaways (confidence=0.7)
curl -X POST "http://localhost:18998/ingest?token=$TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"memo":"Summary: ...", "source":"claude_code", "confidence":0.7, "review_required":true}'
```
ChatGPT (via GPT Action / cloud API)
```
curl -X POST https://huper.org/api/ingest \
  -H "Authorization: Bearer USER_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "memo": "User mentioned they prefer async Python patterns",
    "category": "dev",
    "tags": "python",
    "source": "chatgpt",
    "confidence": 0.8,
    "review_required": true
  }'
```
Platform Support
| Feature | macOS | Linux | Windows |
|---|---|---|---|
| amber-hunter service | ✅ LaunchAgent | ✅ systemd | ✅ Planned |
| Keychain storage | ✅ security CLI | ✅ secret-tool / config.json | ✅ cmdkey |
| Semantic search | ✅ | ✅ | ✅ |
| Proactive capture | ✅ | ✅ | ❌ |
| Session context capture | ✅ | ✅ | ❌ |
Troubleshooting
```
# Service not running
curl http://localhost:18998/status
tail -f ~/.amber-hunter/amber-hunter.log

# Linux: secret-tool not found
sudo apt install libsecret-tools   # Ubuntu/Debian
sudo dnf install libsecret         # Fedora

# Check pending memories
curl "http://localhost:18998/queue?token=$(curl -s localhost:18998/token | python3 -c 'import sys,json;print(json.load(sys.stdin)["api_key"])')"
```
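For scripted health checks, the `/status` fields (capsule_count, queue_pending, semantic_model_loaded) can be summarized programmatically; a sketch assuming the field names listed in the Core endpoints table:

```python
# Sketch: turn a /status JSON payload into a one-line health summary.
# Field names follow the Core endpoints table above.
def summarize_status(status: dict) -> str:
    parts = [
        f"capsules={status.get('capsule_count', 0)}",
        f"pending={status.get('queue_pending', 0)}",
        f"semantic_model={'up' if status.get('semantic_model_loaded') else 'down'}",
    ]
    return " ".join(parts)
```

A cron job could pipe `curl -s localhost:18998/status` into this and alert when the semantic model is down.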
FAQ & Known Issues
Q: amber-hunter runs on a VPS, not my local Mac. How do I configure the API key?
When amber-hunter runs on a different machine from your browser, the dashboard's "Generate API Key" button can't auto-bind (it POSTs to `127.0.0.1:18998`, which is your local machine, not the VPS).
Manual setup on the VPS:
```
# Option 1: environment variable (recommended for VPS)
echo 'export AMBER_TOKEN="your_api_key_here"' >> ~/.bashrc
source ~/.bashrc

# Option 2: write config.json directly (note: this replaces any existing file)
mkdir -p ~/.amber-hunter
cat > ~/.amber-hunter/config.json << 'EOF'
{"api_token": "your_api_key_here"}
EOF

# Then restart amber-hunter (macOS; on a Linux VPS restart the systemd service instead)
launchctl unload ~/Library/LaunchAgents/com.huper.amber-hunter.plist
launchctl load ~/Library/LaunchAgents/com.huper.amber-hunter.plist
```
How to get the API key:
- Go to huper.org → Dashboard → Account → API Key
- Click "Generate API Key" and copy the key immediately (it's only shown once)
Q: Dashboard shows "尚未生成API Key" ("No API key generated yet") even after I generated one
This is a UI bug in older versions. Update to the latest version, or refresh the dashboard page. The key is stored correctly in the database — the display just wasn't updating after generation.
Q: Sync shows "network unreachable" or "Token 无效" (invalid token) errors

If amber-hunter is on a VPS: the `api_token` in config.json may be empty or wrong. Verify:

```
grep api_token ~/.amber-hunter/config.json
```

If it is empty, the VPS can't reach huper.org cloud sync. Manually set the `api_token` as shown in the FAQ above.
If amber-hunter is on your local Mac: make sure `bind-apikey` completed successfully (it runs automatically after generating a key). Check:

```
curl -s http://localhost:18998/config | python3 -c "import sys,json; d=json.load(sys.stdin); print('api_token:', d.get('api_token','(not set)')[:10]+'...')"
```
Q: I generated a new API Key but amber-hunter on VPS stopped syncing
Each key can only be used by one amber-hunter instance at a time. If you generate a new key from the dashboard, the old key (still configured on VPS) becomes invalid. Either:
- Copy the new key to the VPS config and restart
- Or keep using the old key (don't click "Generate New Key" unless you mean to rotate it)
Q: "尚未生成API Key" ("No API key generated yet") never goes away on first use

This means you haven't generated an API key yet. Click the orange "生成 API Key" (Generate API Key) button on the Dashboard → Account → API Key page. The key is shown only once: copy it immediately and save it somewhere before leaving the page.
Version History
- v1.2.29 (2026-04-04): G1 Self-Correction Loop: `correction_log` SQLite table records every correction event; `_normalize_tag` applies user correction rules (5-minute cache); `record_tag_correction`/`record_category_correction` called on queue edit; `GET /corrections/stats` analyzes correction patterns; `POST /corrections/apply` applies replacement rules.
- v1.2.28 (2026-04-04): P2-1 Mem0 Auto-extraction: `core/extractor.py` auto-extracts facts/preferences/decisions from conversations; `POST /extract/auto` writes high-confidence items directly and queues medium-confidence ones; `GET /extract/status` shows extraction statistics; combines three mechanisms (WAL signals + preference extraction + LLM structured extraction).
- v1.2.27 (2026-04-04): P1-1 Structured User Profile: `user_profile` SQLite table; `core/profile.py` LLM extraction; `GET /profile` returns a four-section profile; `PUT /profile/{section}` manual updates; `POST /profile/build` builds from sessions; recall responses now inject a `profile` field.
- v1.2.26 (2026-04-04): P0-2 follow-up, WAL GC: `wal_gc(age_hours=24)` deletes processed entries; `get_wal_stats()` gains `processed_count`; lazy GC (auto-cleanup when more than 50 processed entries); `POST /wal/gc` endpoint for manual GC.
- v1.2.25 (2026-04-04): P0-3 Explainable recall: `_kw_score` returns `(score, matched_terms)`; adds `breakdown` + `matched_terms` + `wal_signal`; `reason` becomes a detailed natural-language explanation (including matched terms, semantic similarity %, and WAL signal type).
- v1.2.24 (2026-04-04): P0-2 WAL hot store: new `core/wal.py` Session State WAL module; `recall_memories` detects preference/decision/correction signals before returning and writes them to `~/.amber-hunter/session_wal.jsonl`; adds `/wal/status` + `/wal/entries` endpoints.
- v1.2.23 (2026-04-04): P0-1 LanceDB vector search: new `core/vector.py` LanceDB wrapper; vectors written synchronously on capsule insert; recall prefers LanceDB top_k retrieval (0.50 weight) with on-the-fly fallback; torch upgraded to 2.8.0.
- v1.2.22 (2026-04-04): Fix line 1570 bug (`stored_challenge` → `row[2]`) in `/api/did/auth/verify`; add `POST /did/auth/challenge` + `POST /did/auth/sign-challenge` in amber_hunter; fix `did_register_device` to use `get_api_token()`; fix `HOME` reference before definition.
- v1.2.21 (2026-04-04): D2 DID Challenge-Response Auth + capsule key derivation: `derive_capsule_key` wired into `create_capsule`/`get_capsule` (DID device key → AES-256-GCM, PBKDF2 fallback); `POST /api/did/auth/challenge` + `POST /api/did/auth/verify` endpoints; `require_auth` supports DID tokens; `device_priv` saved to `.did.json`.
- v1.2.8 (2026-04-01): Fix proactive-check.js: filter log lines from session transcript; memo truncation 60→80 chars.
- v1.2.4 (2026-04-01): Fix `category`/`source_type` missing in sync payload; `httpx.Client` reuse for sync; `/capsules` limit param; `/memories` new fields.
- v1.2.3 (2026-04-01): Fix `/recall` semantic search on the full corpus; hybrid mode `0.4×keyword + 0.6×semantic`; `/status` enhanced with capsule_count/queue_pending/last_sync.
- v1.2.1 (2026-03-31): LLM abstraction layer (`core/llm.py`); `/rerank` endpoint; `/classify` LLM fallback; proactive session selection by message count.
- v1.1.9 (2026-03-31): Universal memory taxonomy (8 life categories); `/ingest` + queue management; `source_type` + `category` fields; ChatGPT GPT Action.
- v0.9.5 (2026-03-28): amber-proactive V4: self-contained cron, LLM extraction.
- v0.8.4 (2026-03-22): E2E encryption, cross-platform keychain, `/memories` no-auth.