GB-Power-Market-JJ amber-hunter

Amber-Hunter Skill

Install

  • Source · Clone the upstream repo:
    git clone https://github.com/GeorgeDoors888/GB-Power-Market-JJ
  • Claude Code · Install into ~/.claude/skills/:
    T=$(mktemp -d) && git clone --depth=1 https://github.com/GeorgeDoors888/GB-Power-Market-JJ "$T" && mkdir -p ~/.claude/skills && cp -r "$T/openclaw-skills/skills/ankechenlab-node/amber-hunter" ~/.claude/skills/georgedoors888-gb-power-market-jj-amber-hunter && rm -rf "$T"
  • OpenClaw · Install into ~/.openclaw/skills/:
    T=$(mktemp -d) && git clone --depth=1 https://github.com/GeorgeDoors888/GB-Power-Market-JJ "$T" && mkdir -p ~/.openclaw/skills && cp -r "$T/openclaw-skills/skills/ankechenlab-node/amber-hunter" ~/.openclaw/skills/georgedoors888-gb-power-market-jj-amber-hunter && rm -rf "$T"

Manifest: openclaw-skills/skills/ankechenlab-node/amber-hunter/SKILL.md

Source content

Amber-Hunter Skill

Gives any AI client long-term memory: captures, encrypts, and recalls personal context across sessions.

Version: 1.2.30 | 2026-04-04


Tags: ai-memory | second-brain | local-encrypted | proactive-recall | cross-platform | context-management | RAG | long-term-memory | AI-personal-assistant | privacy-first


amber-hunter runs on the user's local machine (Mac / Linux / Windows). Local AI clients communicate via `localhost:18998`. External AI clients (ChatGPT, Claude.ai) use the cloud API at `huper.org/api`.


What It Does

Amber-Hunter is the capture and recall layer of Huper琥珀 — a personal memory protocol that works across any AI client and any platform.

  • AI long-term memory: gives ChatGPT, Claude, and any AI client persistent context across conversations
  • Proactive capture: AI-initiated writes via `/ingest`; the user reviews and approves before memories are stored
  • Instant recall: `/recall?q=<query>` retrieves relevant past memories before responding (hybrid semantic + keyword search)
  • Second brain: builds a personal knowledge base that survives context windows and session boundaries
  • E2E encrypted: AES-256-GCM, master_password in OS keychain, never uploaded in plaintext
  • Cross-platform: macOS / Windows / Linux (desktop + headless server)
  • Cloud sync: optional encrypted upload to huper.org for cross-device access
  • RAG-ready: the `/recall` endpoint returns structured context for Retrieval-Augmented Generation pipelines
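The E2E encryption described above (AES-256-GCM keyed from a keychain-held master_password) can be sketched as follows. This is a minimal illustration using the third-party `cryptography` package; the key-derivation parameters (salt size, iteration count) and nonce layout are assumptions for demonstration, not amber-hunter's actual scheme.

```python
# Illustrative AES-256-GCM round trip; KDF parameters are assumed, not
# taken from amber-hunter's implementation.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def derive_key(master_password: str, salt: bytes) -> bytes:
    """Stretch the master_password into a 256-bit AES key."""
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt,
                     iterations=480_000)
    return kdf.derive(master_password.encode())

def encrypt_memo(memo: str, key: bytes) -> bytes:
    """Encrypt with a fresh 12-byte nonce, prepended to the ciphertext."""
    nonce = os.urandom(12)
    return nonce + AESGCM(key).encrypt(nonce, memo.encode(), None)

def decrypt_memo(blob: bytes, key: bytes) -> str:
    nonce, ct = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, ct, None).decode()

salt = os.urandom(16)          # stored alongside the ciphertext in practice
key = derive_key("hunter2", salt)
blob = encrypt_memo("Anke prefers SQLite", key)
print(decrypt_memo(blob, key))  # → Anke prefers SQLite
```

GCM authenticates as well as encrypts, so a tampered capsule fails to decrypt rather than silently yielding garbage.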

Memory Category System (v1.1.9+)

Huper琥珀 uses a two-level taxonomy: category (8 fixed domains) + tags (specific labels).

The 8 Categories

| category | Emoji | Label | Covers |
|---|---|---|---|
| thought | 💭 | 想法 (Ideas) | Fleeting ideas, insights, eureka moments |
| learning | 📖 | 学习 (Learning) | Reading notes, courses, new knowledge |
| decision | 🎯 | 决策 (Decisions) | Choices made, directions set |
| reflection | 🌱 | 成长 (Growth) | Reflections, reviews, emotional records |
| people | 🤝 | 关系 (Relationships) | Conversations with others, notes about people |
| life | 🏃 | 生活 (Life) | Health, food, daily observations |
| creative | 🎨 | 创意 (Creativity) | Design ideas, things to build |
| dev | 💻 | 开发 (Development) | All developer-specific content (code, errors, APIs, etc.) |

Auto-detection Keywords

The system auto-tags based on content keywords. AI clients should also suggest a `category` when calling `/ingest`:

thought    → "想到", "突然想", "realize", "just thought"
learning   → "读了", "看了", "reading", "book says"
decision   → "决定", "选择了", "decided", "going with"
reflection → "反思", "复盘", "reflecting", "looking back"
people     → "和...聊", "talked to", "met with"
life       → "运动", "睡眠", "sleep", "exercise"
creative   → creative/design keywords
dev        → python/js/git/docker/api/sql/error keywords (all existing dev rules)
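The keyword mapping above can be sketched as a simple lookup. This is a minimal stand-in (naive substring matching over abbreviated keyword lists), not the service's actual rules, and the function name is illustrative:

```python
# Sketch of keyword-based category auto-detection. Keyword lists mirror
# the table above but are abbreviated; real rules (especially dev) are
# more extensive.
CATEGORY_KEYWORDS = {
    "thought":    ["想到", "突然想", "realize", "just thought"],
    "learning":   ["读了", "看了", "reading", "book says"],
    "decision":   ["决定", "选择了", "decided", "going with"],
    "reflection": ["反思", "复盘", "reflecting", "looking back"],
    "people":     ["聊", "talked to", "met with"],
    "life":       ["运动", "睡眠", "sleep", "exercise"],
    "creative":   ["creative", "design"],
    "dev":        ["python", "git", "docker", "api", "sql", "error"],
}

def detect_category(memo: str, default: str = "thought") -> str:
    """Return the first category whose keyword appears in the memo."""
    lowered = memo.lower()
    for category, keywords in CATEGORY_KEYWORDS.items():
        if any(kw in lowered for kw in keywords):
            return category
    return default

print(detect_category("We decided to go with SQLite"))  # → decision
```

In practice the suggested `category` from the AI client takes precedence; keyword detection is the fallback.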

Multi-Client Integration Guide

Which endpoint to use

| AI Client | Network | Endpoint | Auth |
|---|---|---|---|
| openclaw | localhost | POST /ingest | Bearer token |
| Claude Code | localhost | POST /ingest | Bearer token |
| Claude in Cowork | localhost (Desktop Commander) | POST /ingest | Bearer token |
| ChatGPT | internet (cloud) | POST https://huper.org/api/ingest | User JWT / API key |
| Claude.ai | internet (cloud) | POST https://huper.org/api/ingest | User JWT / API key |

Get the local API token

curl http://localhost:18998/token
# → {"api_key": "ahk_xxxx..."}

What's Worth Capturing — Judgment Rules

Use these rules when deciding whether to call `/ingest` during a conversation:

| Signal | Example | confidence | review_required |
|---|---|---|---|
| save_request | "记住这个" ("remember this") / "save this" / "提醒我" ("remind me") | 1.0 | false |
| decision | "决定用 SQLite" ("decided on SQLite") / "we're going with plan B" / "用 FastAPI" ("use FastAPI") | 0.9 | true |
| preference | "我更喜欢..." ("I prefer...") / "I prefer TypeScript" | 0.85 | true |
| personal_fact | "我的名字是..." ("my name is...") / "我住在..." ("I live in...") / "我在...工作" ("I work at...") | 0.8 | true |
| summary | "总结一下..." ("to summarize...") / "key takeaways" / "tl;dr" | 0.7 | true |
| insight | "没想到..." ("I didn't expect...") / "discovered that" / "game changer" | 0.6 | true |

Proactive Hook (v1.2.13): `handler.js/ts` auto-detects these six signals from `agent:response` events and calls `/ingest` with `review_required: true`. All captured signals appear in the review queue before becoming permanent memories.

Default behavior: when in doubt, set `review_required: true`. The user reviews in the dashboard and accepts or rejects; the accept/reject history improves future judgment.

Never capture: conversation scaffolding ("can you help me"), ephemeral context ("right now I need"), common knowledge, task details that won't recur.


API Endpoints (v1.2.9)

Core

| Endpoint | Method | Auth | Description |
|---|---|---|---|
| /status | GET | none | Service health + capsule_count + queue_pending + last_sync + semantic_model_loaded |
| / | GET | none | Root info + version |
| /token | GET | localhost only | Get local API key |
| /memories | GET | localhost only | Local memory snapshot (no auth required) |

Memory Retrieval

| Endpoint | Method | Auth | Description |
|---|---|---|---|
| /recall | GET | Bearer / ?token= | Retrieve relevant memories (`?q=<query>&limit=3&rerank=true&category_path=<path>&use_insights=true`); hybrid mode: 0.4×keyword + 0.6×semantic; `category_path` supports prefix matching (e.g. `knowledge` matches `knowledge/python`); with `use_insights=true`, returns the compressed insight summary first when a cached insight exists for that path (v1.2.17); returns category/source_type |
| /rerank | POST | Bearer / ?token= | LLM re-rank of candidates; body: `{query, memories[]}` → `{memories: [...]}` |
| /recall/{id}/hit | PATCH | Bearer / ?token= | Increment capsule access count (updates hotness) |
| /classify | GET | none | Topic classification; `?text=<text>` → `{"topics": "tag1,tag2"}`; keyword primary, LLM fallback |

Memory Writes

| Endpoint | Method | Auth | Description |
|---|---|---|---|
| /capsules | GET | Bearer | List local capsules (`?limit=1-300`); returns category/source_type |
| /capsules | POST | Bearer | Create capsule manually |
| /capsules/{id} | GET | Bearer | Get capsule by ID |
| /capsules/{id} | DELETE | Bearer | Delete capsule |
| /ingest | POST | Bearer / ?token= | AI pushes a memory → direct capsule if confidence ≥ 0.95 and review_required=false, else → review queue |
| /extract | POST | Bearer / ?token= | Structured LLM extraction; body: `{text, source}` → extracted memories |
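The `/ingest` routing rule (direct capsule vs. review queue) reduces to a single conditional. A minimal sketch, with an illustrative function name:

```python
# Sketch of the /ingest routing rule: direct write only when the memory
# is both high-confidence AND explicitly marked as not needing review.
def route_ingest(confidence: float, review_required: bool) -> str:
    if confidence >= 0.95 and not review_required:
        return "capsule"   # written directly as a permanent memory
    return "queue"         # awaits user approval in the dashboard

print(route_ingest(1.0, False))  # → capsule
print(route_ingest(0.9, True))   # → queue
```

Since only `save_request` signals carry confidence 1.0 with `review_required: false`, explicit "save this" requests are the one path that skips the queue.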

Queue Management

| Endpoint | Method | Auth | Description |
|---|---|---|---|
| /queue | GET | Bearer / ?token= | List pending memories awaiting review |
| /queue/{id}/approve | POST | Bearer / ?token= | Accept → writes to capsules |
| /queue/{id}/reject | POST | Bearer / ?token= | Dismiss → status=rejected |
| /queue/{id}/edit | POST | Bearer / ?token= | Edit then accept → writes the modified memory to capsules |
| /-review | GET | Bearer / ?token= | Terminal-friendly queue list (v1.2.9) |
| /-review/{qid} | POST | Bearer / ?token= | Approve/reject a queue item from the CLI (v1.2.9) |

Session Context (proactive capture)

| Endpoint | Method | Auth | Description |
|---|---|---|---|
| /freeze | GET/POST | Bearer / ?token= | Capture current dev session context |
| /session/summary | GET | Bearer | Get current session summary |
| /session/files | GET | Bearer | Get open files in current session |
| /session/preload | GET | Bearer | Get preloaded memories for current scene (v1.2.19) |

DID Identity (v1.2.20 — multi-device)

| Endpoint | Method | Auth | Description |
|---|---|---|---|
| /did/setup | POST | Bearer | Generate mnemonic + derive device key → save locally (mnemonic shown once) |
| /did/status | GET | Bearer | Check whether local DID identity is configured |
| /did/register-device | POST | Bearer | Register device public key to cloud (cloud account must have DID set up) |

Sync & Config

| Endpoint | Method | Auth | Description |
|---|---|---|---|
| /sync | GET | Bearer / ?token= | Sync to huper.org cloud |
| /config | GET/POST | Bearer / ?token= | Read/set config (auto_sync etc.) |
| /config/llm | GET/PUT | Bearer / ?token= | Read/set LLM provider (minimax/openai/claude/local) |

Localhost-only (security restricted)

| Endpoint | Method | Description |
|---|---|---|
| /master-password | POST | Set master_password (stored in OS keychain) |
| /bind-apikey | POST | Update huper.org API key in config |

`/ingest` Request Format

POST http://localhost:18998/ingest?token={api_key}
Content-Type: application/json

{
  "memo": "Anke prefers SQLite over Postgres for simpler deployment",
  "context": "During database selection discussion for amber project",
  "category": "decision",
  "tags": "decided,database",
  "source": "claude_cowork",
  "confidence": 0.9,
  "review_required": true,
  "agent_tag": "openclaw"    // v2.0.0: optional, adds #agent:openclaw tag for color-coding in UI
}

Response:

// Goes to review queue:
{"queued": true, "queue_id": "abc123", "category": "decision", "source_type": "ingest"}

// Written directly (confidence≥0.95 and review_required=false):
{"queued": false, "capsule_id": "xyz456", "category": "decision", "source_type": "ingest"}

// First ingest (capsule_count==0, v2.0.0):
{"queued": false, "capsule_id": "xyz456", "welcome": true, "message": "This is your first memory!...", "sample_count": 3}
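A client can branch on these three response shapes once the JSON is parsed. A minimal sketch; `describe_ingest_result` is an illustrative helper, not part of the amber-hunter API:

```python
# Sketch: handle the three /ingest response variants shown above.
def describe_ingest_result(resp: dict) -> str:
    if resp.get("queued"):
        # Memory is pending user review in the dashboard queue.
        return f"queued for review as {resp['queue_id']}"
    if resp.get("welcome"):
        # First-ever capsule (capsule_count was 0 before this ingest).
        return f"first memory stored as capsule {resp['capsule_id']}"
    # High-confidence, no-review memories are written directly.
    return f"stored directly as capsule {resp['capsule_id']}"

print(describe_ingest_result({"queued": True, "queue_id": "abc123"}))
# → queued for review as abc123
```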

LLM Provider Configuration (v1.2.1+)

amber-hunter supports multiple LLM providers:

| Provider | Config key | Notes |
|---|---|---|
| MiniMax | minimax | Default; auto-detects API key from OpenClaw config |
| OpenAI | openai | GPT-4o mini etc. |
| Claude | claude | Claude 3.5 Haiku etc. |
| Local | local | Ollama / LM Studio |

# Set provider (quote the URL so the shell doesn't glob-expand ? and {})
curl -X PUT "http://localhost:18998/config/llm?token={api_key}" \
  -H "Content-Type: application/json" \
  -d '{"provider": "openai"}'

# Get current provider
curl "http://localhost:18998/config/llm?token={api_key}"

Usage Patterns

openclaw / Claude Code

# 1. At conversation start — retrieve relevant context
TOKEN=$(curl -s http://localhost:18998/token | python3 -c "import sys,json; print(json.load(sys.stdin)['api_key'])")
curl "http://localhost:18998/recall?token=$TOKEN&q=YOUR_QUERY&limit=3"

# 2. During conversation — push a memory when something worth keeping surfaces
curl -X POST "http://localhost:18998/ingest?token=$TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "memo": "User decided to use SQLite for simpler ops",
    "category": "decision",
    "tags": "decided",
    "source": "claude_code",
    "confidence": 0.9,
    "review_required": true
  }'

# 3. End of conversation — auto-extract 1-2 key takeaways (confidence=0.7)
curl -X POST "http://localhost:18998/ingest?token=$TOKEN" \
  -d '{"memo":"Summary: ...", "source":"claude_code", "confidence":0.7, "review_required":true}'

ChatGPT (via GPT Action / cloud API)

curl -X POST https://huper.org/api/ingest \
  -H "Authorization: Bearer USER_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "memo": "User mentioned they prefer async Python patterns",
    "category": "dev",
    "tags": "python",
    "source": "chatgpt",
    "confidence": 0.8,
    "review_required": true
  }'

Platform Support

| Feature | macOS | Linux | Windows |
|---|---|---|---|
| amber-hunter service | ✅ LaunchAgent | ✅ systemd | Planned |
| Keychain storage | ✅ security CLI | ✅ secret-tool / config.json | ✅ cmdkey |
| Semantic search | | | |
| Proactive capture | | | |
| /freeze session | | | |

Troubleshooting

# Service not running
curl http://localhost:18998/status
tail -f ~/.amber-hunter/amber-hunter.log

# Linux: secret-tool not found
sudo apt install libsecret-tools        # Ubuntu/Debian
sudo dnf install libsecret             # Fedora

# Check pending memories
curl "http://localhost:18998/queue?token=$(curl -s localhost:18998/token | python3 -c 'import sys,json;print(json.load(sys.stdin)["api_key"])')"

FAQ & Known Issues

Q: amber-hunter runs on a VPS, not my local Mac. How do I configure the API key?

When amber-hunter is on a different machine than your browser, the dashboard's "Generate API Key" button can't auto-bind (it POSTs to `127.0.0.1:18998`, which is your local machine, not the VPS).

Manual setup on the VPS:

# Option 1: environment variable (recommended for VPS)
echo 'export AMBER_TOKEN="your_api_key_here"' >> ~/.bashrc
source ~/.bashrc

# Option 2: write directly to config.json (overwrites any existing file)
mkdir -p ~/.amber-hunter
cat > ~/.amber-hunter/config.json << 'EOF'
{"api_token": "your_api_key_here"}
EOF

# Then restart amber-hunter (launchctl is the macOS form; on a Linux VPS
# restart the service via your init system instead)
launchctl unload ~/Library/LaunchAgents/com.huper.amber-hunter.plist
launchctl load ~/Library/LaunchAgents/com.huper.amber-hunter.plist

How to get the API key:

  1. Go to huper.org → Dashboard → Account → API Key
  2. Click "Generate API Key" and copy the key immediately (it's only shown once)

Q: Dashboard shows "尚未生成API Key" ("API Key not yet generated") even after I generated one

This is a UI bug in older versions. Update to the latest version, or refresh the dashboard page. The key is stored correctly in the database — the display just wasn't updating after generation.

Q: Sync shows "network unreachable" or "Token 无效" ("invalid token") errors

If amber-hunter is on a VPS: the `api_token` in config.json may be empty or wrong. Verify:

cat ~/.amber-hunter/config.json | grep api_token

If it is empty, the VPS can't reach huper.org cloud sync. Manually set the `api_token` as shown in the FAQ above.

If amber-hunter is on your local Mac: make sure `bind-apikey` completed successfully (it runs automatically after generating a key). Check:

curl -s http://localhost:18998/config | python3 -c "import sys,json; d=json.load(sys.stdin); print('api_token:', d.get('api_token','(not set)')[:10]+'...')"

Q: I generated a new API Key but amber-hunter on VPS stopped syncing

Each key can only be used by one amber-hunter instance at a time. If you generate a new key from the dashboard, the old key (still configured on VPS) becomes invalid. Either:

  • Copy the new key to the VPS config and restart
  • Or keep using the old key (don't click "Generate New Key" unless you mean to rotate it)

Q: "尚未生成API Key" ("API Key not yet generated") never goes away on first use

This means you haven't generated an API key yet. Click the orange "生成 API Key" (Generate API Key) button on the Dashboard → Account → API Key page. The key is shown once; copy it immediately and save it somewhere before leaving the page.


Version History

  • v1.2.29 (2026-04-04): G1 Self-Correction Loop: `correction_log` SQLite table records every correction event; `_normalize_tag` applies user correction rules (5-minute cache); `record_tag_correction` / `record_category_correction` called on queue edit; `GET /corrections/stats` analyzes correction patterns; `POST /corrections/apply` adopts replacement rules.
  • v1.2.28 (2026-04-04): P2-1 Mem0 auto-extraction: `core/extractor.py` automatically extracts facts/preferences/decisions from conversations; `POST /extract/auto` writes high-confidence items directly and queues medium-confidence ones; `GET /extract/status` reports extraction statistics; combines three mechanisms: WAL signals + preference extraction + structured LLM extraction.
  • v1.2.27 (2026-04-04): P1-1 structured user profile: `user_profile` SQLite table; `core/profile.py` LLM extraction; `GET /profile` returns a four-section profile; `PUT /profile/{section}` for manual updates; `POST /profile/build` builds the profile from the session; recall responses now include a `profile` field.
  • v1.2.26 (2026-04-04): P0-2 follow-up, WAL GC: `wal_gc(age_hours=24)` deletes processed entries; `get_wal_stats()` gains `processed_count`; lazy GC (auto-cleans once more than 50 processed entries accumulate); `POST /wal/gc` endpoint for manual GC.
  • v1.2.25 (2026-04-04): P0-3 explainable recall: `_kw_score` returns `(score, matched_terms)`; the score breakdown gains `matched_terms` + `wal_signal`; `reason` is now a detailed natural-language explanation (including the specific matched terms, semantic similarity %, and WAL signal type).
  • v1.2.24 (2026-04-04): P0-2 WAL hot storage: new `core/wal.py` Session State WAL module; `recall_memories` detects preference/decision/correction signals before returning and writes them to `~/.amber-hunter/session_wal.jsonl`; new `/wal/status` + `/wal/entries` endpoints.
  • v1.2.23 (2026-04-04): P0-1 LanceDB vector search: new `core/vector.py` LanceDB wrapper; vectors are written when a capsule is stored; recall prefers LanceDB top_k retrieval (0.50 weight) with on-the-fly fallback; torch upgraded to 2.8.0.
  • v1.2.22 (2026-04-04): Fix line 1570 bug (`row[2]` vs `stored_challenge`) in `/api/did/auth/verify`; add `POST /did/auth/challenge` + `POST /did/auth/sign-challenge` in amber_hunter; fix `did_register_device` to use `get_api_token()`; fix `HOME` referenced before definition.
  • v1.2.21 (2026-04-04): D2 DID challenge-response auth + capsule key derivation: `derive_capsule_key` wired into `create_capsule` / `get_capsule` (DID device key → AES-256-GCM, PBKDF2 fallback); `POST /api/did/auth/challenge` + `POST /api/did/auth/verify` endpoints; `require_auth` supports DID tokens; `device_priv` saved to `did.json`.
  • v1.2.8 (2026-04-01): Fix proactive-check.js: filter log lines from the session transcript; memo truncation 60→80 chars.
  • v1.2.4 (2026-04-01): Fix `source_type` / `category` missing from the sync payload; reuse `httpx.Client` for sync; `/capsules` limit param; new `/memories` fields.
  • v1.2.3 (2026-04-01): Fix `/recall` semantic search to run over the full corpus; hybrid mode 0.4×keyword + 0.6×semantic; `/status` enhanced with capsule_count/queue_pending/last_sync.
  • v1.2.1 (2026-03-31): LLM abstraction layer (`core/llm.py`); `/rerank` endpoint; `/classify` LLM fallback; proactive session selection by message count.
  • v1.1.9 (2026-03-31): Universal memory taxonomy (8 life categories); `/ingest` + queue management; `source_type` + `category` fields; ChatGPT GPT Action.
  • v0.9.5 (2026-03-28): amber-proactive V4: self-contained cron, LLM extraction.
  • v0.8.4 (2026-03-22): E2E encryption, cross-platform keychain, `/memories` no-auth.

Built with 🔒 by Anke Chen for the Huper琥珀 ecosystem.