git clone https://github.com/titanwings/colleague-skill
git clone --depth=1 https://github.com/titanwings/colleague-skill ~/.claude/skills/titanwings-colleague-skill-colleague-skill
---
name: dot-skill
description: "Unified meta-skill engine for distilling colleague, relationship, or celebrity characters into reusable Skills. | 统一的 meta-skill 引擎,把 colleague、relationship、celebrity 三类对象蒸馏成可复用 Skill。"
argument-hint: "[character] [name-or-slug]"
version: "1.0.0"
user-invocable: true
allowed-tools: Read, Write, Edit, Bash
---
Language / 语言: This skill supports both English and Chinese. Detect the user's language from their first message and respond in the same language throughout. Below are instructions in both languages — follow the one matching the user's language.
本 Skill 支持中英文。根据用户第一条消息的语言,全程使用同一语言回复。下方提供了两种语言的指令,按用户语言选择对应版本执行。
Execution Root / 执行根目录: Run all Bash commands from the directory that contains this SKILL.md. All `tools/...` and `prompts/...` paths below are relative to the skill root.

Critical rule / 关键规则: Do not prepend commands with guessed host-specific paths such as `cd ~/.hermes/...`, `cd ~/.claude/...`, `cd ~/.openclaw/...`, `cd ~/.codex/...`, or hard-coded `/Users/.../dot-skill` paths. The current working directory is already the correct skill root. Run `python3 tools/...` directly.

所有命令都必须在当前 SKILL.md 所在目录执行。下文出现的 `tools/...` 和 `prompts/...` 均为相对于 skill 根目录的相对路径。
dot-skill 创建器(兼容宿主版)
触发条件
当用户说以下任意内容时启动:
- `/dot-skill`
- "帮我创建一个 skill"
- "我想蒸馏一个人"
- "新建一个 skill"
- "给我做一个 XX 的 skill"
兼容宿主:
- Claude Code
- OpenClaw
- Hermes
- Codex
统一主入口是 `dot-skill`。在支持 slash command 的宿主中,使用 `/dot-skill`。
对 Hermes 而言,只保证 /dot-skill 这一条 slash 入口稳定;colleague、relationship、celebrity 的兼容语义保留在工具层和 preset 层,但不保证每个兼容名称都能作为 Hermes slash command 被路由。
当用户对已有 Skill 说以下内容时,进入进化模式:
- "我有新文件" / "追加"
- "这不对" / "他不会这样" / "他应该是"
- `/update-skill {character} {slug}`
兼容更新别名:
/update-colleague {slug}
当用户要求查看已生成的 Skill 时,执行下方“管理操作”里的列出命令。
工具使用规则
本 Skill 运行在任意兼容宿主中,只要求宿主能够读取本地文件并执行 Bash / Python 命令。使用以下工具约定:
| 任务 | 使用工具 |
|---|---|
| 读取 PDF 文档 | `Read` 工具(原生支持 PDF) |
| 读取图片截图 | `Read` 工具(原生支持图片) |
| 读取 MD/TXT 文件 | `Read` 工具 |
| 解析飞书消息 JSON 导出 | `Bash` → `python3 tools/feishu_parser.py` |
| 飞书全自动采集(推荐) | `Bash` → `python3 tools/feishu_auto_collector.py` |
| 飞书文档(浏览器登录态) | `Bash` → `python3 tools/feishu_browser.py` |
| 飞书文档(MCP App Token) | `Bash` → `python3 tools/feishu_mcp_client.py` |
| 钉钉全自动采集 | `Bash` → `python3 tools/dingtalk_auto_collector.py` |
| 解析邮件 .eml/.mbox | `Bash` → `python3 tools/email_parser.py` |
| 写入/更新 Skill 文件 | `Bash` → `python3 tools/skill_writer.py` / `Write` 工具 |
| 版本管理 | `Bash` → `python3 tools/version_manager.py` |
| 列出已有 Skill | `Bash` → `python3 tools/skill_writer.py --action list` |
基础目录:
- `colleague` → `./skills/colleague/{slug}/`
- `relationship` → `./skills/relationship/{slug}/`
- `celebrity` → `./skills/celebrity/{slug}/`

如需改为全局路径,用 `--base-dir` 指向对应 character family 的根目录。
主流程:创建新 Skill
Step 0:确认 character family
如果用户使用的是 `/dot-skill`,先确认本次要蒸馏的是哪一类:
- `colleague`
- `relationship`
- `celebrity`

如果上层宿主已经显式把 family 传进来,则直接固定对应的 character family。

如果当前 family 是 `celebrity`,还必须确认 research profile:
- `budget-friendly`
- `budget-unfriendly`

默认使用 `budget-friendly`。只有当用户明确要求更深研究、更高置信度、或者愿意接受更慢更贵的蒸馏流程时,才切到 `budget-unfriendly`。
Step 1:基础信息录入
根据 character family 选择对应 intake prompt:
- `colleague` → `prompts/intake.md`
- `relationship` → `prompts/relationship/intake.md`
- `celebrity` → `prompts/celebrity/intake.md`
colleague 和 relationship 只问 3 个问题。
celebrity 按 prompts/celebrity/intake.md 问 4 个问题,其中第 4 个问题必须确认 research_profile。
默认的 3 个基础问题:
- 花名/代号(必填)
- 基本信息(一句话:公司、职级、职位、性别,想到什么写什么)
  - 示例:`字节 2-1 后端工程师 男`
- 性格画像(一句话:MBTI、星座、个性标签、企业文化、印象)
  - 示例:`INTJ 摩羯座 甩锅高手 字节范 CR很严格但从来不解释原因`

除姓名外均可跳过。收集完后汇总确认,再进入下一步。
Step 2:原材料导入
询问用户提供原材料,展示五种方式供选择:

```
原材料怎么提供?
[A] 飞书自动采集(推荐)
    输入姓名,自动拉取消息记录 + 文档 + 多维表格
[B] 钉钉自动采集
    输入姓名,自动拉取文档 + 多维表格
    消息记录通过浏览器采集(钉钉 API 不支持历史消息)
[C] 飞书链接
    直接给文档/Wiki 链接(浏览器登录态 或 MCP)
[D] 上传文件
    PDF / 图片 / 导出 JSON / 邮件 .eml
[E] 直接粘贴内容
    把文字复制进来
可以混用,也可以跳过(仅凭手动信息生成)。
```
方式 A:飞书自动采集(推荐)
首次使用需配置:
python3 tools/feishu_auto_collector.py --setup
群聊采集(使用 tenant_access_token,需 bot 在群内):
```
python3 tools/feishu_auto_collector.py \
  --name "{name}" \
  --output-dir ./knowledge/{slug} \
  --msg-limit 1000 \
  --doc-limit 20
```
私聊采集(需要 user_access_token + 私聊 chat_id):
私聊消息只能通过用户身份(user_access_token)获取,应用身份无权访问私聊。
前置条件:
用户需要提供以下信息:
- 飞书应用凭证:`app_id` 和 `app_secret`(在飞书开放平台创建自建应用获取)
- 用户权限:应用需开通以下用户权限(scope):
  - `im:message` — 以用户身份读取/发送消息
  - `im:chat` — 以用户身份读取会话列表
- OAuth 授权码(code):用户在浏览器中完成 OAuth 授权后,从回调 URL 中获取
如果用户缺少以上任何信息,引导他们完成配置。不要假设用户已经配好了。
获取 user_access_token 的完整流程:
当用户提供了 app_id、app_secret,并确认已开通用户权限后:
1. 帮用户生成 OAuth 授权链接:
   `https://open.feishu.cn/open-apis/authen/v1/authorize?app_id={APP_ID}&redirect_uri=http://www.example.com&scope=im:message%20im:chat`
   ⚠️ 注意:`redirect_uri` 需要在飞书应用的「安全设置 → 重定向 URL」中添加 `http://www.example.com`
2. 用户在浏览器打开链接,登录并授权
3. 页面会跳转到 `http://www.example.com?code=xxx`,用户复制 code 给你
4. 用 code 换取 token:
   `python3 tools/feishu_auto_collector.py --exchange-code {CODE}`
   或者你自己写 Python 脚本调飞书 API 换取:

   ```
   # 1. 获取 app_access_token
   POST https://open.feishu.cn/open-apis/auth/v3/app_access_token/internal
   Body: {"app_id": "xxx", "app_secret": "xxx"}

   # 2. 用 code 换 user_access_token
   POST https://open.feishu.cn/open-apis/authen/v1/oidc/access_token
   Header: Authorization: Bearer {app_access_token}
   Body: {"grant_type": "authorization_code", "code": "xxx"}
   ```
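上面的换取流程也可以用一段最小的 Python 草稿表示请求的构造方式(仅为示意:`build_authorize_url` 等函数名为本示例自拟,并非 `tools/` 下已有脚本;实际请求字段以飞书开放平台文档为准):

```python
import json
from urllib.parse import urlencode

FEISHU = "https://open.feishu.cn/open-apis"

def build_authorize_url(app_id: str, redirect_uri: str = "http://www.example.com") -> str:
    """第 1 步:生成用户在浏览器中打开的 OAuth 授权链接。"""
    query = urlencode({
        "app_id": app_id,
        "redirect_uri": redirect_uri,
        "scope": "im:message im:chat",
    })
    return f"{FEISHU}/authen/v1/authorize?{query}"

def app_token_request(app_id: str, app_secret: str):
    """第 2 步:构造获取 app_access_token 的请求(返回 URL 和 JSON body)。"""
    body = json.dumps({"app_id": app_id, "app_secret": app_secret})
    return f"{FEISHU}/auth/v3/app_access_token/internal", body

def code_exchange_request(app_access_token: str, code: str):
    """第 3 步:构造用 code 换 user_access_token 的请求(URL、Header、body)。"""
    headers = {"Authorization": f"Bearer {app_access_token}"}
    body = json.dumps({"grant_type": "authorization_code", "code": code})
    return f"{FEISHU}/authen/v1/oidc/access_token", headers, body
```

真正发请求时再配合任意 HTTP 客户端即可;这里只演示各步的 URL、Header 与 body 形状。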
获取私聊 chat_id:
用户通常不知道 chat_id。当用户有了 user_access_token 但没有 chat_id 时,你应该自己写 Python 脚本来获取:
- 方法:用 user_access_token 向对方的 open_id 发一条消息,返回值中会包含 chat_id

  ```
  POST https://open.feishu.cn/open-apis/im/v1/messages?receive_id_type=open_id
  Header: Authorization: Bearer {user_access_token}
  Body: {"receive_id": "{对方open_id}", "msg_type": "text", "content": "{\"text\":\"你好\"}"}
  # 返回值中的 chat_id 就是私聊会话 ID
  ```
- 注意:`GET /im/v1/chats` 不会返回私聊会话,这是飞书 API 的限制,不是权限问题,不要尝试用这个接口找私聊
- 如果用户不知道对方的 open_id,可以用 tenant_access_token 调通讯录 API 搜索:

  ```
  GET https://open.feishu.cn/open-apis/contact/v3/scopes
  # 返回应用可见范围内所有用户的 open_id
  ```
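「发消息拿 chat_id」的响应解析逻辑可以草拟如下(示意代码:`extract_chat_id` 为自拟函数名;示例响应按常见的 `data.chat_id` 结构假设,实际以飞书真实响应为准):

```python
import json

def extract_chat_id(response_text: str) -> str:
    """从发送消息接口的 JSON 响应中取出私聊 chat_id(字段结构为假设)。"""
    payload = json.loads(response_text)
    if payload.get("code") != 0:
        raise RuntimeError(f"飞书接口返回错误: {payload.get('msg')}")
    return payload["data"]["chat_id"]

# 假想的成功响应(仅示意字段结构)
sample = '{"code": 0, "msg": "success", "data": {"message_id": "om_x", "chat_id": "oc_123"}}'
```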
执行采集:
拿到 user_access_token 和 chat_id 后:
```
python3 tools/feishu_auto_collector.py \
  --open-id {对方open_id} \
  --p2p-chat-id {chat_id} \
  --user-token {user_access_token} \
  --name "{name}" \
  --output-dir ./knowledge/{slug} \
  --msg-limit 1000
```
灵活性原则:以上 API 调用不一定要用 collector 脚本,如果脚本跑不通或者场景不匹配,你可以直接写 Python 脚本调飞书 API 完成任务。核心 API 参考:
- 获取 token:`POST /auth/v3/app_access_token/internal`、`POST /authen/v1/oidc/access_token`
- 发消息(获取 chat_id):`POST /im/v1/messages?receive_id_type=open_id`
- 拉消息:`GET /im/v1/messages?container_id_type=chat&container_id={chat_id}`
- 查通讯录:`GET /contact/v3/scopes`、`GET /contact/v3/users/{user_id}`
自动采集内容:
- 群聊:所有与他共同群聊中他发出的消息(过滤系统消息、表情包)
- 私聊:与他的私聊完整对话(含双方消息,用于理解对话语境)
- 他创建/编辑的飞书文档和 Wiki
- 相关多维表格(如有权限)
采集完成后用 `Read` 读取输出目录下的文件:
- `knowledge/{slug}/messages.txt` → 消息记录(群聊 + 私聊)
- `knowledge/{slug}/docs.txt` → 文档内容
- `knowledge/{slug}/collection_summary.json` → 采集摘要
如果采集失败,根据报错自行判断原因并尝试修复,常见问题:
- 群聊采集:bot 未添加到群聊
- 私聊采集:user_access_token 过期(有效期 2 小时,可用 refresh_token 刷新)
- 权限不足:引导用户在飞书开放平台开通对应权限并重新授权
- 或改用方式 B/C
方式 B:钉钉自动采集
首次使用需配置:
python3 tools/dingtalk_auto_collector.py --setup
然后输入姓名,一键采集:
```
python3 tools/dingtalk_auto_collector.py \
  --name "{name}" \
  --output-dir ./knowledge/{slug} \
  --msg-limit 500 \
  --doc-limit 20 \
  --show-browser   # 首次使用加此参数,完成钉钉登录
```
采集内容:
- 他创建/编辑的钉钉文档和知识库
- 多维表格
- 消息记录(⚠️ 钉钉 API 不支持历史消息拉取,自动切换浏览器采集)
采集完成后用 `Read` 读取:
- `knowledge/{slug}/docs.txt`
- `knowledge/{slug}/bitables.txt`
- `knowledge/{slug}/messages.txt`
如消息采集失败,提示用户截图聊天记录后上传。
方式 D:上传文件
- PDF / 图片:`Read` 工具直接读取
- 飞书消息 JSON 导出:`python3 tools/feishu_parser.py --file {path} --target "{name}" --output /tmp/feishu_out.txt`,然后 `Read /tmp/feishu_out.txt`
- 邮件文件 .eml / .mbox:`python3 tools/email_parser.py --file {path} --target "{name}" --output /tmp/email_out.txt`,然后 `Read /tmp/email_out.txt`
- Markdown / TXT:`Read` 工具直接读取
方式 C:飞书链接
用户提供飞书文档/Wiki 链接时,询问读取方式:
```
检测到飞书链接,选择读取方式:
[1] 浏览器方案(推荐)
    复用你本机 Chrome 的登录状态
    ✅ 内部文档、需要权限的文档都能读
    ✅ 无需配置 token
    ⚠️ 需要本机安装 Chrome + playwright
[2] MCP 方案
    通过飞书 App Token 调用官方 API
    ✅ 稳定,不依赖浏览器
    ✅ 可以读消息记录(需要群聊 ID)
    ⚠️ 需要先配置 App ID / App Secret
    ⚠️ 内部文档需要管理员给应用授权
选择 [1/2]:
```
选 1(浏览器方案):
```
python3 tools/feishu_browser.py \
  --url "{feishu_url}" \
  --target "{name}" \
  --output /tmp/feishu_doc_out.txt
```
首次使用若未登录,会弹出浏览器窗口要求登录(一次性)。
选 2(MCP 方案):
首次使用需初始化配置:
python3 tools/feishu_mcp_client.py --setup
之后直接读取:
```
python3 tools/feishu_mcp_client.py \
  --url "{feishu_url}" \
  --output /tmp/feishu_doc_out.txt
```
读取消息记录(需要群聊 ID,格式 `oc_xxx`):

```
python3 tools/feishu_mcp_client.py \
  --chat-id "oc_xxx" \
  --target "{name}" \
  --limit 500 \
  --output /tmp/feishu_msg_out.txt
```
两种方式输出后均用
Read 读取结果文件,进入分析流程。
方式 E:直接粘贴
用户粘贴的内容直接作为文本原材料,无需调用任何工具。
如果用户说"没有文件"或"跳过",仅凭 Step 1 的手动信息生成 Skill。
Step 3:分析原材料
先根据 character family 解析本次的执行矩阵:
| character | intake | persona analyzer | persona builder | merger | storage root |
|---|---|---|---|---|---|
| colleague | prompts/intake.md | prompts/persona_analyzer.md | prompts/persona_builder.md | prompts/merger.md | ./skills/colleague |
| relationship | prompts/relationship/intake.md | prompts/relationship/persona_analyzer.md | prompts/relationship/persona_builder.md | prompts/relationship/merger.md | ./skills/relationship |
| celebrity | prompts/celebrity/intake.md | prompts/celebrity/persona_analyzer.md | prompts/celebrity/persona_builder.md | prompts/celebrity/merger.md | ./skills/celebrity |
所有 family 共用:
- Work analyzer:`prompts/work_analyzer.md`
- Work builder:`prompts/work_builder.md`
- Correction handler:`prompts/correction_handler.md`
如果当前是 `celebrity`,必须先走 research 子流程,再进入分析。
celebrity / budget-friendly
- 读取 `prompts/celebrity/research.md`,按其中的 6 维度并行采集策略做 research planning
- 先创建目录:

  ```
  mkdir -p "{skill_dir}/knowledge/research/raw" "{skill_dir}/knowledge/research/merged"
  ```
- 确认采集策略(在 intake 阶段已确定):
  - Local-first:先分析用户本地材料,标记覆盖了哪些维度,只对缺失维度做网络补充
  - Web + local:全量 6 维度网络研究,同时与本地材料合并,交叉验证
  - Web-only:标准 6 维度网络研究
- 如果用户明确提供了可处理的视频链接或字幕来源,而且处理结果不会作为长文本落盘:

  ```
  bash tools/research/download_subtitles.sh "{url}" "{skill_dir}/knowledge/subtitles"
  python3 tools/research/srt_to_transcript.py "{subtitle_file}" "{skill_dir}/knowledge/transcripts/{name}.txt"
  ```
- 按 6 维度研究,原始 research 笔记至少要拆成 3 个文件(每个文件覆盖 2 个维度),不能只写一个 `research_notes.md`:
  - `knowledge/research/raw/01_core_profile.md`(维度 1 著作 + 维度 6 时间线)
  - `knowledge/research/raw/02_conversations_and_material.md`(维度 2 对话 + 维度 4 决策)
  - `knowledge/research/raw/03_expression_and_reception.md`(维度 3 表达 DNA + 维度 5 他者视角)
- 研究过程中必须遵守品味原则(详见 research prompt):
  - 长文 > 金句,争议 > 共识,变化 > 固定,一手 > 二手
  - 遵守信源黑名单:永不引用知乎、微信公众号、百度百科、内容农场
  - 遵守信源优先级:用户本地材料 > 一手著作 > 长访谈 > 决策记录 > 社交媒体 > 外部分析 > 二手转述
- 合并 research:`python3 tools/research/merge_research.py "{skill_dir}"`,输出:`knowledge/research/merged/summary.md`
- 读取 `knowledge/research/merged/summary.md`,确认:
  - Files scanned >= 3
  - Unique URLs >= 2
  - Potential long quote lines = 0
  - research notes 里的 URL 必须是实际打开过的具体页面,不是平台首页、搜索页、话题页或占位路径

  如果不满足,继续补 research notes,直到满足或明确记录搜集受限原因。
- 质量关卡(Phase 1.5):在进入分析之前,必须向用户展示结构化采集摘要:

  ```
  ┌──────────────────────────────┬──────────┬─────────────────────────────┐
  │ 维度 │ 来源数 │ 关键发现 │
  ├──────────────────────────────┼──────────┼─────────────────────────────┤
  │ 1 著作 │ N │ [核心论点 / 缺失] │
  │ 2 对话 │ N │ [关键模式 / 缺失] │
  │ 3 表达 DNA │ N │ [风格标记 / 缺失] │
  │ 4 决策 │ N │ [决策模式 / 缺失] │
  │ 5 他者视角 │ N │ [外部观点 / 缺失] │
  │ 6 时间线 │ N │ [认知轨迹 / 缺失] │
  ├──────────────────────────────┼──────────┼─────────────────────────────┤
  │ 矛盾点 │ N │ [摘要] │
  │ 薄弱维度 │ [列表] │ 补充方案:[计划] │
  │ 冷门人物? │ 是/否 │ │
  └──────────────────────────────┴──────────┴─────────────────────────────┘
  ```

  等待用户确认后再继续。如果用户指出问题或需要某个维度更深入,先补充研究。
- 冷门人物检测:如果总来源 < 10 条,按冷门人物协议处理:
- 心智模型限制为 2–3 个
- 薄弱模型标注"基于有限信息"
- 扩大诚实边界章节
- 告知用户提供什么补充材料可以改善质量
- celebrity 的后续分析输入必须优先使用:
- 一手材料(信源权重 1-3)
- merged research summary
- 用户提供的补充描述
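上文「读取 merged summary 并确认最低门槛」一步,可以用一段 Python 草稿半自动化(假设性示意:这里假设 summary.md 以 `Files scanned: N` 这类行给出统计,实际字段名以 `merge_research.py` 的输出为准):

```python
import re

def check_summary(text: str) -> dict:
    """按 budget-friendly 最低门槛检查 merged summary 的统计行(字段名为假设)。"""
    def num(key: str) -> int:
        # 同时兼容半角冒号和全角冒号
        m = re.search(rf"{key}\s*[::]\s*(\d+)", text)
        return int(m.group(1)) if m else 0
    return {
        "files_scanned_ok": num("Files scanned") >= 3,
        "unique_urls_ok": num("Unique URLs") >= 2,
        "no_long_quotes": num("Potential long quote lines") == 0,
    }
```

所有键为 True 才算通过;任何一项为 False 就回到补 research notes 的循环。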
celebrity / budget-unfriendly
- 先读取:
  - `prompts/celebrity/budget_unfriendly/research.md`
  - `references/celebrity_budget_unfriendly_framework.md`
- 先创建目录:

  ```
  mkdir -p "{skill_dir}/knowledge/research/raw" "{skill_dir}/knowledge/research/merged" "{skill_dir}/knowledge/research/reviews"
  ```
- 确认采集策略(在 intake 阶段已确定):local-first / web+local / web-only
- 按 6-track 独立文件结构写 research notes(不可合并,不可克隆观察):
  - `knowledge/research/raw/01_writings.md`(维度 1:著作与系统思考)
  - `knowledge/research/raw/02_conversations.md`(维度 2:即兴对话与压力应对)
  - `knowledge/research/raw/03_expression_dna.md`(维度 3:语言指纹)
  - `knowledge/research/raw/04_decisions.md`(维度 4:行为与选择)
  - `knowledge/research/raw/05_external_views.md`(维度 5:他者视角与批评)
  - `knowledge/research/raw/06_timeline.md`(维度 6:认知轨迹)
- 研究过程必须遵守 品味原则 + 信源黑名单 + 信源优先级(见 research prompt),每条 evidence 必须标注 source weight (1-7)。
- 合并 research:`python3 tools/research/merge_research.py "{skill_dir}"`
- 读取 `knowledge/research/merged/summary.md`,确认最低门槛:
  - Files scanned >= 6
  - Unique URLs >= 8
  - Primary-source markers >= 3
  - Source metadata blocks >= 6
  - Contradiction bullets >= 6
  - Inference bullets >= 6
  - Potential long quote lines = 0
  - Track coverage count = 6
  - research notes 里的 URL 必须是实际打开过的具体页面,不是平台首页、搜索页、话题页或占位路径

  如果不满足,继续补对应 track,而不是直接进入后续 review。
- 质量关卡(Phase 1.5):在进入 audit 之前,向用户展示结构化采集摘要(含 primary 比例、矛盾数、候选 mental models、known-answer 候选、薄弱维度、冷门人物判定)。等待用户确认后再继续。
- 再读取:
  - `prompts/celebrity/budget_unfriendly/audit.md`
  - `prompts/celebrity/budget_unfriendly/synthesis.md`
  - `references/celebrity_budget_unfriendly_template.md`
- 先生成 `knowledge/research/reviews/research_audit.md`:
  - 审计必须明确给出 PASS / FAIL
  - audit 必须检查:信源层级合规(无黑名单)、primary 比例 > 50%、品味原则遵守、冷门人物评估
  - 如果 audit 是 FAIL,按 audit 给出的 Backfill Tasks 补齐,不要跳到 synthesis
- 提炼关卡(Phase 2.5):audit 通过后,向用户展示候选 mental models 摘要(含三重门判定、evidence anchors、failure modes)。确认合理性后再进入 synthesis。
- 再生成 `knowledge/research/reviews/synthesis.md`:
  - 必须对候选 mental models 做 triple-gate 判断:
    - cross-context recurrence
    - generative power
    - exclusivity
  - 同时提取智识谱系种子(influenced by / diverged from)和 Agentic Protocol 种子(该人物会如何分析新问题的维度列表)
- 再按 `prompts/celebrity/budget_unfriendly/validation.md` 生成 `knowledge/research/reviews/validation.md`:
  - validation 必须明确给出 PASS / FAIL
  - 必须做 known-answer check(至少 2 题)+ edge-case check(1 题)+ voice check(100 字盲测)+ copyright check + Agentic Protocol check
  - 如果 validation 是 FAIL,必须先修 draft 再继续
- budget-unfriendly 的后续分析输入必须优先使用:
- 6-track raw notes
- merged research summary
- research audit
- synthesis review(含智识谱系种子、Agentic Protocol 种子)
- validation review
- 用户补充材料
两种 celebrity profile 的共同约束:
- 如果外部搜集失败或被平台验证拦截:
- 明确告诉用户搜集受限的原因
- 保留已有 research 原始材料和 merged summary
- 继续生成,但把 `source_grounding` 视为未完成
- 不要为了通过质量检查而编造 URL、引用、书名、视频标题,或塞入泛化主页链接
- 不要把完整 transcript、完整字幕、长段原文抄进仓库
- 只允许保留结构化摘要、来源元信息和极短引用,避免版权风险
完成 family 解析后,再按两条线分析:
线路 A(Work Skill):
- 参考 `prompts/work_analyzer.md`
- 提取:负责系统、技术规范、工作流程、输出偏好、经验知识
- celebrity 场景下,`work` 更偏方法论、判断框架、决策习惯,不要机械套成"工作职责"
线路 B(Persona):
- 使用当前 family 对应的 persona analyzer
- 如果 `celebrity` 且 `research_profile=budget-unfriendly`,改用:`prompts/celebrity/budget_unfriendly/persona_analyzer.md`
- 将用户填写的标签翻译为具体行为规则
- 从原材料中提取:表达风格、决策模式、人际行为
- celebrity 场景下,必须保留:
- mental models
- decision heuristics
- expression DNA
- contradictions
- honest boundaries
Step 4:生成并预览
使用 `prompts/work_builder.md` 生成 Work 内容。
使用当前 family 对应的 persona builder 生成 Persona 内容。
具体映射:
- `colleague` → `prompts/persona_builder.md`
- `relationship` → `prompts/relationship/persona_builder.md`
- `celebrity` → `prompts/celebrity/persona_builder.md`
- `celebrity` + `budget-unfriendly` → `prompts/celebrity/budget_unfriendly/persona_builder.md`
向用户展示摘要(各 5-8 行),询问:
```
Work Skill 摘要:
- 负责:{xxx}
- 技术栈:{xxx}
- CR 重点:{xxx}
...

Persona 摘要:
- 核心性格:{xxx}
- 表达风格:{xxx}
- 决策模式:{xxx}
...

确认生成?还是需要调整?
```
Step 5:写入文件
用户确认后,不要手工拼接 `skills/colleague/{slug}` 这类文件树。统一走 writer:
- 先解析当前 storage root:
  - `colleague` → `./skills/colleague`
  - `relationship` → `./skills/relationship`
  - `celebrity` → `./skills/celebrity`
- 用 `Write` 工具写三个临时文件:
  - `/tmp/dot_skill_{slug}_meta.json`
  - `/tmp/dot_skill_{slug}_work.md`
  - `/tmp/dot_skill_{slug}_persona.md`
- `meta.json` 至少包含:
  - `name`
  - `display_name`
  - `character`
  - `research_profile`(当 character=`celebrity` 时必填)
  - `classification.language`(必须设置为用户当前语言,例如 `zh-CN` 或 `en`)
  - `profile`
  - `tags`
  - `knowledge_sources`
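一个满足上述字段要求的最小 meta.json 草稿(所有取值均为虚构占位,仅示意结构):

```json
{
  "name": "example-slug",
  "display_name": "示例人物",
  "character": "celebrity",
  "research_profile": "budget-friendly",
  "classification": { "language": "zh-CN" },
  "profile": "一句话画像占位",
  "tags": ["示例标签"],
  "knowledge_sources": ["knowledge/example-slug/messages.txt"]
}
```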
- 然后调用:

  ```
  python3 tools/skill_writer.py \
    --action create \
    --character {character} \
    --research-profile {research_profile} \
    --slug {slug} \
    --name "{name}" \
    --meta /tmp/dot_skill_{slug}_meta.json \
    --work /tmp/dot_skill_{slug}_work.md \
    --persona /tmp/dot_skill_{slug}_persona.md \
    --base-dir {resolved_base_dir}
  ```
- 该命令会统一生成:`SKILL.md`、`work.md`、`persona.md`、`work_skill.md`、`persona_skill.md`、`manifest.json`、`meta.json`
- 如需把生成后的角色 Skill 安装到宿主:
  - Claude Code:追加 `--install-claude-skill`
  - OpenClaw:追加 `--install-openclaw-skill`
  - Codex:追加 `--install-codex-skill`
  - Claude Code on Windows:可再追加 `--install-claude-command-shim`
- 如果当前是 `celebrity`,创建完成后必须再跑一次质量检查:

  ```
  python3 tools/research/quality_check.py "{resolved_base_dir}/{slug}/SKILL.md" --profile {research_profile}
  ```
- 如果 `celebrity` 的质量检查仍然提示 `source_grounding` 失败:
  - 可以补写诚实的来源说明和局限说明
  - 但只有在拿到真实、具体、可追溯的外部来源时,才能补充 URL
  - 不要用站点首页、topic 页、搜索页、个人空间首页等泛化链接来"刷过"检查
  - 如果没有真实来源,就保留 FAIL,并向用户说明后续需要补哪些材料
告知用户时,文件位置必须按当前 family 返回,不要默认写成 colleague。
进化模式:追加文件
用户提供新文件或文本时:
- 按 Step 2 的方式读取新内容
- 根据当前 family 解析 base dir
- 用 `Read` 读取现有 `{resolved_base_dir}/{slug}/work.md` 和 `persona.md`
- 使用当前 family 对应的 merger prompt 分析增量内容
- 存档当前版本(用 Bash):

  ```
  python3 tools/version_manager.py \
    --action backup \
    --character {character} \
    --slug {slug} \
    --base-dir {resolved_base_dir}
  ```
- 把 work/persona 增量分别写到临时 patch 文件
- 调用:

  ```
  python3 tools/skill_writer.py \
    --action update \
    --character {character} \
    --slug {slug} \
    --work-patch /tmp/dot_skill_{slug}_work_patch.md \
    --persona-patch /tmp/dot_skill_{slug}_persona_patch.md \
    --base-dir {resolved_base_dir}
  ```
- 如果当前是 `celebrity`,更新后再次执行 quality check
进化模式:对话纠正
用户表达"不对"/"应该是"时:
- 参考 `prompts/correction_handler.md` 识别纠正内容
- 判断属于 Work(技术/流程)还是 Persona(性格/沟通)
- 如果属于 Work:
  - 生成 `/tmp/dot_skill_{slug}_work_patch.md`
  - patch 必须是可替换的 `##` section,不要直接手改最终文件
  - 调用:

    ```
    python3 tools/skill_writer.py \
      --action update \
      --character {character} \
      --slug {slug} \
      --work-patch /tmp/dot_skill_{slug}_work_patch.md \
      --base-dir {resolved_base_dir}
    ```
- 如果属于 Persona:
  - 将 correction 写入 `/tmp/dot_skill_{slug}_correction.json`
  - 单条纠正可直接写成 `{scene, wrong, correct}`
  - 多条 persona 纠正可写成 `{"persona_corrections": [{...}, {...}]}`
  - 调用:

    ```
    python3 tools/skill_writer.py \
      --action update \
      --character {character} \
      --slug {slug} \
      --correction-json /tmp/dot_skill_{slug}_correction.json \
      --base-dir {resolved_base_dir}
    ```
- 如果当前是 `celebrity`,更新后再次执行 quality check
- 不要直接手改 `work.md`、`persona.md`、`SKILL.md`、`meta.json`;统一通过 writer 更新
管理操作
列出三类 Skill:
```
python3 tools/skill_writer.py --action list --character colleague --base-dir ./skills/colleague
python3 tools/skill_writer.py --action list --character relationship --base-dir ./skills/relationship
python3 tools/skill_writer.py --action list --character celebrity --base-dir ./skills/celebrity
```
回滚某个 Skill 版本:
```
# colleague
python3 tools/version_manager.py --action rollback --character colleague --slug {slug} --version {version} --base-dir ./skills/colleague
# relationship
python3 tools/version_manager.py --action rollback --character relationship --slug {slug} --version {version} --base-dir ./skills/relationship
# celebrity
python3 tools/version_manager.py --action rollback --character celebrity --slug {slug} --version {version} --base-dir ./skills/celebrity
```
删除某个 Skill:确认 character 后执行:

```
# colleague
rm -rf skills/colleague/{slug}
# relationship
rm -rf skills/relationship/{slug}
# celebrity
rm -rf skills/celebrity/{slug}
```
English Version
dot-skill Creator (Compatible Host Edition)
Trigger Conditions
Activate when the user says any of the following:
- `/dot-skill`
- "Help me create a skill"
- "I want to distill someone"
- "Create a new skill"
- "Make a skill for XX"
Compatible hosts:
- Claude Code
- OpenClaw
- Hermes
- Codex
The canonical entrypoint is `dot-skill`. In hosts that expose slash commands, use `/dot-skill`.
Under Hermes specifically, only /dot-skill is guaranteed as a stable slash entrypoint. Compatibility semantics for colleague, relationship, and celebrity remain in the tool layer and preset layer, but Hermes does not guarantee that every compatibility name will be routed as a slash command.
Enter evolution mode when the user says:
- "I have new files" / "append"
- "That's wrong" / "He wouldn't do that" / "He should be"
- `/update-skill {character} {slug}`
Compatibility update alias:
/update-colleague {slug}
When the user asks to see generated skills, use the list commands in "Management Operations" below.
Tool Usage Rules
This Skill runs in any compatible host that can read local files and execute Bash / Python commands. Use the following tool conventions:
| Task | Tool |
|---|---|
| Read PDF documents | `Read` tool (native PDF support) |
| Read image screenshots | `Read` tool (native image support) |
| Read MD/TXT files | `Read` tool |
| Parse Feishu message JSON export | `Bash` → `python3 tools/feishu_parser.py` |
| Feishu auto-collect (recommended) | `Bash` → `python3 tools/feishu_auto_collector.py` |
| Feishu docs (browser session) | `Bash` → `python3 tools/feishu_browser.py` |
| Feishu docs (MCP App Token) | `Bash` → `python3 tools/feishu_mcp_client.py` |
| DingTalk auto-collect | `Bash` → `python3 tools/dingtalk_auto_collector.py` |
| Parse email .eml/.mbox | `Bash` → `python3 tools/email_parser.py` |
| Write/update Skill files | `Bash` → `python3 tools/skill_writer.py` / `Write` tool |
| Version management | `Bash` → `python3 tools/version_manager.py` |
| List existing Skills | `Bash` → `python3 tools/skill_writer.py --action list` |
Base directories:
- `colleague` → `./skills/colleague/{slug}/`
- `relationship` → `./skills/relationship/{slug}/`
- `celebrity` → `./skills/celebrity/{slug}/`

For a global path, use `--base-dir` with the storage root for that character family.
Main Flow: Create a New Skill
Step 0: Confirm the character family
If the user entered `/dot-skill`, first confirm which family should be distilled:
- `colleague`
- `relationship`
- `celebrity`

If the host already passed an explicit family, lock the character family immediately.

If the current family is `celebrity`, also confirm the research profile:
- `budget-friendly`
- `budget-unfriendly`

Default to `budget-friendly`. Only switch to `budget-unfriendly` when the user explicitly wants deeper research, higher confidence, or accepts a slower and more expensive distillation pass.
Step 1: Basic Info Collection
Choose the intake prompt by character family:
- `colleague` → `prompts/intake.md`
- `relationship` → `prompts/relationship/intake.md`
- `celebrity` → `prompts/celebrity/intake.md`
For `colleague` and `relationship`, ask only 3 questions.
For `celebrity`, use the 4-question intake in `prompts/celebrity/intake.md`; the fourth question must confirm `research_profile`.
The default 3 base questions are:
- Alias / Codename (required)
- Basic info (one sentence: company, level, role, gender — say whatever comes to mind)
  - Example: `ByteDance L2-1 backend engineer male`
- Personality profile (one sentence: MBTI, zodiac, traits, corporate culture, impressions)
  - Example: `INTJ Capricorn blame-shifter ByteDance-style strict in CR but never explains why`
Everything except the alias can be skipped. Summarize and confirm before moving to the next step.
Step 2: Source Material Import
Ask the user how they'd like to provide materials:
```
How would you like to provide source materials?
[A] Feishu Auto-Collect (recommended)
    Enter name, auto-pull messages + docs + spreadsheets
[B] DingTalk Auto-Collect
    Enter name, auto-pull docs + spreadsheets
    Messages collected via browser (DingTalk API doesn't support message history)
[C] Feishu Link
    Provide doc/Wiki link (browser session or MCP)
[D] Upload Files
    PDF / images / exported JSON / email .eml
[E] Paste Text
    Copy-paste text directly
Can mix and match, or skip entirely (generate from manual info only).
```
Option A: Feishu Auto-Collect (Recommended)
First-time setup:
python3 tools/feishu_auto_collector.py --setup
Group chat collection (uses tenant_access_token, bot must be in the group):
```
python3 tools/feishu_auto_collector.py \
  --name "{name}" \
  --output-dir ./knowledge/{slug} \
  --msg-limit 1000 \
  --doc-limit 20
```
Private chat (P2P) collection (requires user_access_token + p2p chat_id):
Private messages can only be accessed via user identity (user_access_token). App identity cannot access private chats.
Prerequisites:
The user needs to provide:
- Feishu app credentials: `app_id` and `app_secret` (from Feishu Open Platform)
- User scopes: the app must have these user scopes enabled:
  - `im:message` — read/send messages as user
  - `im:chat` — read chat list as user
- OAuth authorization code: obtained after the user completes OAuth in the browser
If the user is missing any of these, guide them through setup. Don't assume anything is pre-configured.
Getting user_access_token:
Once the user provides app_id, app_secret, and confirms scopes are enabled:
1. Generate the OAuth URL for them:
   `https://open.feishu.cn/open-apis/authen/v1/authorize?app_id={APP_ID}&redirect_uri=http://www.example.com&scope=im:message%20im:chat`
   ⚠️ The `redirect_uri` must be added in the app's "Security Settings → Redirect URLs"
2. User opens the URL, logs in, and authorizes
3. The page redirects to `http://www.example.com?code=xxx`; the user copies the code to you
4. Exchange the code for a token:
   `python3 tools/feishu_auto_collector.py --exchange-code {CODE}`
   Or write a Python script to call the Feishu API directly:

   ```
   # 1. Get app_access_token
   POST https://open.feishu.cn/open-apis/auth/v3/app_access_token/internal
   Body: {"app_id": "xxx", "app_secret": "xxx"}

   # 2. Exchange code for user_access_token
   POST https://open.feishu.cn/open-apis/authen/v1/oidc/access_token
   Header: Authorization: Bearer {app_access_token}
   Body: {"grant_type": "authorization_code", "code": "xxx"}
   ```
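As a minimal sketch, the token exchange above can be expressed as request-building helpers in Python (illustrative only: helper names such as `build_authorize_url` are made up for this example and are not scripts under `tools/`; refer to the Feishu Open Platform docs for the authoritative request shapes):

```python
import json
from urllib.parse import urlencode

FEISHU = "https://open.feishu.cn/open-apis"

def build_authorize_url(app_id: str, redirect_uri: str = "http://www.example.com") -> str:
    """Step 1: build the OAuth authorize URL the user opens in a browser."""
    query = urlencode({
        "app_id": app_id,
        "redirect_uri": redirect_uri,
        "scope": "im:message im:chat",
    })
    return f"{FEISHU}/authen/v1/authorize?{query}"

def app_token_request(app_id: str, app_secret: str):
    """Step 2: build the app_access_token request (URL + JSON body)."""
    body = json.dumps({"app_id": app_id, "app_secret": app_secret})
    return f"{FEISHU}/auth/v3/app_access_token/internal", body

def code_exchange_request(app_access_token: str, code: str):
    """Step 3: build the code-for-user_access_token request (URL, headers, body)."""
    headers = {"Authorization": f"Bearer {app_access_token}"}
    body = json.dumps({"grant_type": "authorization_code", "code": code})
    return f"{FEISHU}/authen/v1/oidc/access_token", headers, body
```

Pair these with any HTTP client to actually send the requests; the sketch only demonstrates the URL, header, and body shapes of each step.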
Getting the p2p chat_id:
Users typically don't know their chat_id. When the user has a user_access_token but no chat_id, write a Python script yourself to obtain it:
- Method: send a message to the target's open_id with the user_access_token; the response includes the chat_id

  ```
  POST https://open.feishu.cn/open-apis/im/v1/messages?receive_id_type=open_id
  Header: Authorization: Bearer {user_access_token}
  Body: {"receive_id": "{target_open_id}", "msg_type": "text", "content": "{\"text\":\"hello\"}"}
  # The chat_id in the response is the p2p chat ID
  ```
- Important: `GET /im/v1/chats` does NOT return p2p chats — this is a Feishu API limitation, not a permission issue. Do not try to use it for finding private chats.
- If the user doesn't know the target's open_id, use the tenant_access_token to search contacts:

  ```
  GET https://open.feishu.cn/open-apis/contact/v3/scopes
  # Returns open_ids of all users visible to the app
  ```
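The chat_id extraction step above can be sketched as follows (illustrative: `extract_chat_id` is a made-up helper; the sample response assumes the common `data.chat_id` shape, so verify it against the real Feishu response):

```python
import json

def extract_chat_id(response_text: str) -> str:
    """Pull the p2p chat_id out of the send-message JSON response (shape assumed)."""
    payload = json.loads(response_text)
    if payload.get("code") != 0:
        raise RuntimeError(f"Feishu API error: {payload.get('msg')}")
    return payload["data"]["chat_id"]

# Hypothetical success response (field layout is illustrative)
sample = '{"code": 0, "msg": "success", "data": {"message_id": "om_x", "chat_id": "oc_123"}}'
```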
Running collection:
Once you have user_access_token and chat_id:
```
python3 tools/feishu_auto_collector.py \
  --open-id {target_open_id} \
  --p2p-chat-id {chat_id} \
  --user-token {user_access_token} \
  --name "{name}" \
  --output-dir ./knowledge/{slug} \
  --msg-limit 1000
```
Flexibility principle: The above API calls don't have to go through the collector script. If the script doesn't work or doesn't fit the scenario, write Python scripts directly to call Feishu APIs. Key API reference:
- Get token: `POST /auth/v3/app_access_token/internal`, `POST /authen/v1/oidc/access_token`
- Send message (get chat_id): `POST /im/v1/messages?receive_id_type=open_id`
- Fetch messages: `GET /im/v1/messages?container_id_type=chat&container_id={chat_id}`
- Search contacts: `GET /contact/v3/scopes`, `GET /contact/v3/users/{user_id}`
Auto-collected content:
- Group chats: messages sent by them (system messages and stickers filtered)
- Private chats: full conversation with both parties (for context understanding)
- Feishu docs and Wikis they created/edited
- Related spreadsheets (if accessible)
After collection, `Read` the output files:
- `knowledge/{slug}/messages.txt` → messages (group + private)
- `knowledge/{slug}/docs.txt` → document content
- `knowledge/{slug}/collection_summary.json` → collection summary
If collection fails, diagnose the error and attempt to fix it. Common issues:
- Group chat: bot not added to the group
- Private chat: user_access_token expired (2-hour TTL, refresh with refresh_token)
- Insufficient permissions: guide user to enable scopes and re-authorize
- Or switch to Option B/C
Option B: DingTalk Auto-Collect
First-time setup:
python3 tools/dingtalk_auto_collector.py --setup
Then enter the name:
```
python3 tools/dingtalk_auto_collector.py \
  --name "{name}" \
  --output-dir ./knowledge/{slug} \
  --msg-limit 500 \
  --doc-limit 20 \
  --show-browser   # add this flag on first use to complete DingTalk login
```
Collected content:
- DingTalk docs and knowledge bases they created/edited
- Spreadsheets
- Messages (⚠️ DingTalk API doesn't support message history — auto-switches to browser scraping)
After collection, `Read`:
- `knowledge/{slug}/docs.txt`
- `knowledge/{slug}/bitables.txt`
- `knowledge/{slug}/messages.txt`
If message collection fails, prompt user to upload chat screenshots.
Option D: Upload Files
- PDF / Images: `Read` tool directly
- Feishu message JSON export: `python3 tools/feishu_parser.py --file {path} --target "{name}" --output /tmp/feishu_out.txt`, then `Read /tmp/feishu_out.txt`
- Email files .eml / .mbox: `python3 tools/email_parser.py --file {path} --target "{name}" --output /tmp/email_out.txt`, then `Read /tmp/email_out.txt`
- Markdown / TXT: `Read` tool directly
Option C: Feishu Link
When the user provides a Feishu doc/Wiki link, ask which method to use:
```
Feishu link detected. Choose read method:
[1] Browser Method (recommended)
    Reuses your local Chrome login session
    ✅ Works with internal docs requiring permissions
    ✅ No token configuration needed
    ⚠️ Requires Chrome + playwright installed locally
[2] MCP Method
    Uses Feishu App Token via official API
    ✅ Stable, no browser dependency
    ✅ Can read messages (needs chat ID)
    ⚠️ Requires App ID / App Secret setup
    ⚠️ Internal docs need admin authorization for the app
Choose [1/2]:
```
Option 1 (Browser):
```
python3 tools/feishu_browser.py \
  --url "{feishu_url}" \
  --target "{name}" \
  --output /tmp/feishu_doc_out.txt
```
First use will open a browser window for login (one-time).
Option 2 (MCP):
First-time setup:
python3 tools/feishu_mcp_client.py --setup
Then read directly:
```
python3 tools/feishu_mcp_client.py \
  --url "{feishu_url}" \
  --output /tmp/feishu_doc_out.txt
```
Read messages (needs chat ID, format `oc_xxx`):

```
python3 tools/feishu_mcp_client.py \
  --chat-id "oc_xxx" \
  --target "{name}" \
  --limit 500 \
  --output /tmp/feishu_msg_out.txt
```
Both methods output to files, then use
Read to load results into analysis.
Option E: Paste Text
User-pasted content is used directly as text material. No tools needed.
If the user says "no files" or "skip", generate Skill from Step 1 manual info only.
Step 3: Analyze Source Material
First resolve the execution matrix for the selected character family:
| character | intake | persona analyzer | persona builder | merger | storage root |
|---|---|---|---|---|---|
| colleague | prompts/intake.md | prompts/persona_analyzer.md | prompts/persona_builder.md | prompts/merger.md | ./skills/colleague |
| relationship | prompts/relationship/intake.md | prompts/relationship/persona_analyzer.md | prompts/relationship/persona_builder.md | prompts/relationship/merger.md | ./skills/relationship |
| celebrity | prompts/celebrity/intake.md | prompts/celebrity/persona_analyzer.md | prompts/celebrity/persona_builder.md | prompts/celebrity/merger.md | ./skills/celebrity |
Shared across all families:
- Work analyzer: `prompts/work_analyzer.md`
- Work builder: `prompts/work_builder.md`
- Correction handler: `prompts/correction_handler.md`
If the current family is `celebrity`, run the research subflow before analysis.
celebrity / budget-friendly
- Read `prompts/celebrity/research.md` and follow its 6-dimension parallel collection strategy
- Create the research directories first:

  ```
  mkdir -p "{skill_dir}/knowledge/research/raw" "{skill_dir}/knowledge/research/merged"
  ```
- Confirm the collection strategy (determined during intake):
  - Local-first: analyze user-provided materials first, identify which dimensions are covered, only search the web for gaps
  - Web + local: full 6-dimension web research, then merge with local materials for cross-validation
  - Web-only: standard 6-dimension web research pass
- If the user explicitly provided a processable video URL or subtitle source, and the result will not be stored as a long transcript:

  ```
  bash tools/research/download_subtitles.sh "{url}" "{skill_dir}/knowledge/subtitles"
  python3 tools/research/srt_to_transcript.py "{subtitle_file}" "{skill_dir}/knowledge/transcripts/{name}.txt"
  ```
- Cover the 6 dimensions across at least 3 separate files (each file covers 2 dimensions), never one monolithic `research_notes.md`:
  - `knowledge/research/raw/01_core_profile.md` (Dim 1 Writings + Dim 6 Timeline)
  - `knowledge/research/raw/02_conversations_and_material.md` (Dim 2 Conversations + Dim 4 Decisions)
  - `knowledge/research/raw/03_expression_and_reception.md` (Dim 3 Expression DNA + Dim 5 External Views)
- Research must follow taste principles (see research prompt):
  - Long-form > snippets, controversy > consensus, change > fixity, firsthand > secondhand
  - Source blacklist — never cite: Zhihu, WeChat official accounts, Baidu Baike, content farms, AI-generated bios
  - Source hierarchy: user local materials > first-person works > long interviews > decision records > short-form firsthand > external analysis > secondhand summaries
- Merge the research notes: `python3 tools/research/merge_research.py "{skill_dir}"`; output: `knowledge/research/merged/summary.md`
- Read `knowledge/research/merged/summary.md` and confirm:
  - Files scanned >= 3
  - Unique URLs >= 2
  - Potential long quote lines = 0
  - URLs in notes are actual inspected pages, not platform roots, search/topic pages, or placeholder paths

  If these do not hold, extend the research notes before continuing, or explicitly record the collection limits.
- Quality checkpoint (Phase 1.5): before entering analysis, show the user a structured collection summary:

  ```
  ┌──────────────────────────────┬──────────┬─────────────────────────────┐
  │ Dimension │ Sources │ Key Finding │
  ├──────────────────────────────┼──────────┼─────────────────────────────┤
  │ 1 Writings │ N │ [core thesis / gap] │
  │ 2 Conversations │ N │ [key pattern / gap] │
  │ 3 Expression DNA │ N │ [style marker / gap] │
  │ 4 Decisions │ N │ [decision pattern / gap] │
  │ 5 External Views │ N │ [outside view / gap] │
  │ 6 Timeline │ N │ [trajectory / gap] │
  ├──────────────────────────────┼──────────┼─────────────────────────────┤
  │ Contradictions │ N │ [summary] │
  │ Thin dimensions │ [list] │ Backfill plan: [plan] │
  │ Cold figure? │ yes/no │ │
  └──────────────────────────────┴──────────┴─────────────────────────────┘
  ```

  Wait for user confirmation before continuing. If the user flags issues or wants more depth, extend research first.
- Cold figure detection: if total sources < 10, apply the cold figure protocol:
- Limit mental models to 2–3
- Mark thin models as "based on limited information"
- Expand the honest boundaries section
- Tell the user what additional material would improve quality
- Celebrity analysis must prioritize:
- primary materials (source weight 1-3)
- merged research summary
- explicit user notes
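The cold figure protocol above amounts to a simple decision rule. The sketch below is a hypothetical illustration; the field names and the way the 2–3 model cap is encoded are assumptions, not actual tool behavior:

```python
def cold_figure_plan(total_sources: int, model_count: int) -> dict:
    """Below 10 total sources, cap mental models and flag the thin-evidence caveats."""
    cold = total_sources < 10
    return {
        "cold_figure": cold,
        # Limit mental models to 2-3 when evidence is thin.
        "max_mental_models": min(model_count, 3) if cold else model_count,
        "mark_models_limited": cold,       # mark as "based on limited information"
        "expand_honest_boundaries": cold,  # widen the honest boundaries section
    }

print(cold_figure_plan(total_sources=6, model_count=5))
```

With 6 sources the figure is cold, so the plan caps five candidate models at three and turns on both caveat flags.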
celebrity / budget-unfriendly
- First read: `prompts/celebrity/budget_unfriendly/research.md` and `references/celebrity_budget_unfriendly_framework.md`
- Create the research directories first:
  `mkdir -p "{skill_dir}/knowledge/research/raw" "{skill_dir}/knowledge/research/merged" "{skill_dir}/knowledge/research/reviews"`
- Confirm the collection strategy (determined during intake): local-first / web+local / web-only
- Build the six-track research set as independent files (never merged, never clone observations):
  - `knowledge/research/raw/01_writings.md` (Dim 1: Writings / systematic thought)
  - `knowledge/research/raw/02_conversations.md` (Dim 2: Conversations under pressure)
  - `knowledge/research/raw/03_expression_dna.md` (Dim 3: Linguistic fingerprint)
  - `knowledge/research/raw/04_decisions.md` (Dim 4: Behavior and choices)
  - `knowledge/research/raw/05_external_views.md` (Dim 5: External views and criticism)
  - `knowledge/research/raw/06_timeline.md` (Dim 6: Cognitive trajectory)
- Research must follow taste principles + source blacklist + source hierarchy (see research prompt). Every evidence item must carry a source weight (1-7) annotation.
- Merge the research notes: `python3 tools/research/merge_research.py "{skill_dir}"`
- Read `knowledge/research/merged/summary.md` and confirm the minimum floor:
  - Files scanned >= 6
  - Unique URLs >= 8
  - Primary-source markers >= 3
  - Source metadata blocks >= 6
  - Contradiction bullets >= 6
  - Inference bullets >= 6
  - Potential long quote lines = 0
  - Track coverage count = 6
  - URLs in notes are actual inspected pages, not platform roots, search/topic pages, or placeholder paths
  If these do not hold, keep filling the weak tracks before continuing to any review stage.
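Before any review stage, "Track coverage count = 6" can be pre-checked by confirming each raw track file exists and is not a stub. This is a sketch only; the 500-character minimum per track is an invented illustration value, not a rule from the tooling:

```python
from pathlib import Path

TRACKS = [
    "01_writings.md", "02_conversations.md", "03_expression_dna.md",
    "04_decisions.md", "05_external_views.md", "06_timeline.md",
]

def weak_tracks(skill_dir: str, min_chars: int = 500) -> list:
    """Return the tracks that are missing or too thin to count as covered."""
    raw = Path(skill_dir) / "knowledge" / "research" / "raw"
    thin = []
    for name in TRACKS:
        f = raw / name
        if not f.exists() or len(f.read_text(encoding="utf-8")) < min_chars:
            thin.append(name)
    return thin
```

Full coverage then corresponds to `weak_tracks(skill_dir) == []`; anything returned is a weak track to keep filling.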
- Quality checkpoint (Phase 1.5): before entering audit, show the user a structured collection summary (with primary-source ratio, contradiction count, candidate mental models, known-answer candidates, thin dimensions, cold figure assessment). Wait for user confirmation before continuing.
- Then read: `prompts/celebrity/budget_unfriendly/audit.md`, `prompts/celebrity/budget_unfriendly/synthesis.md`, and `references/celebrity_budget_unfriendly_template.md`
- First write `knowledge/research/reviews/research_audit.md`
  - The audit must produce an explicit PASS / FAIL verdict
  - The audit must verify: source hierarchy compliance (no blacklisted sources), primary-source ratio > 50%, taste principle compliance, cold figure assessment
  - If the audit says FAIL, follow the Backfill Tasks before synthesis
- Extraction checkpoint (Phase 2.5): after audit PASS, show the user a summary of candidate mental models (with triple-gate verdict, evidence anchors, failure modes). Confirm reasonableness before synthesis.
- Then write `knowledge/research/reviews/synthesis.md`
  - Apply the triple gate to candidate mental models:
    - cross-context recurrence
    - generative power
    - exclusivity
  - Also extract intellectual genealogy seeds (influenced by / diverged from) and Agentic Protocol seeds (the dimensions this person would investigate when facing a novel question)
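The triple gate can be sketched as a conjunction of three predicates. The field names and the two-context threshold below are illustrative assumptions, not the synthesis prompt's literal schema:

```python
def passes_triple_gate(model: dict) -> bool:
    """Keep a candidate mental model only if all three gates hold."""
    return (
        model.get("contexts_observed", 0) >= 2   # cross-context recurrence
        and model.get("generative", False)       # generative power: predicts new cases
        and model.get("exclusive", False)        # exclusivity: not a generic pattern
    )

candidates = [
    {"name": "inversion-first", "contexts_observed": 3,
     "generative": True, "exclusive": True},
    {"name": "be-nice", "contexts_observed": 5,
     "generative": True, "exclusive": False},   # generic: fails exclusivity
]
kept = [m["name"] for m in candidates if passes_triple_gate(m)]
print(kept)  # → ['inversion-first']
```

Note the second candidate recurs widely and is generative, yet is dropped: a pattern almost anyone shares cannot identify this person.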
- Then use `prompts/celebrity/budget_unfriendly/validation.md` to write `knowledge/research/reviews/validation.md`
  - Validation must produce an explicit PASS / FAIL verdict
  - Validation must perform: known-answer check (≥2 questions) + edge-case check (1 question) + voice check (100-word blind test) + copyright check + Agentic Protocol check
  - If validation says FAIL, revise the draft before continuing
- Budget-unfriendly celebrity analysis must prioritize:
- six-track raw notes
- merged research summary
- research audit
- synthesis review (with genealogy + Agentic Protocol seeds)
- validation review
- explicit user notes
Shared rules for both celebrity profiles:
- If external collection fails or a platform blocks access:
- tell the user exactly what was blocked
- preserve the raw research notes and merged summary
- continue generation with the available materials
- treat `source_grounding` as incomplete
- never invent URLs, quotes, titles, or generic homepage links just to satisfy the checker
- Do not store full transcripts, full subtitles, or long verbatim source passages in the repository
- Keep the stored notes paraphrased, structured, and copyright-safe
Once the family is resolved, analyze along two tracks:
Track A (Work Skill):
- Refer to `prompts/work_analyzer.md`
- Extract: responsible systems, technical standards, workflow, output preferences, experience
- For `celebrity`, interpret "work" as methods, judgment frameworks, and decision patterns rather than literal job scope
Track B (Persona):
- Use the family-specific persona analyzer
- If `celebrity` with `research_profile=budget-unfriendly`, use: `prompts/celebrity/budget_unfriendly/persona_analyzer.md`
- Translate user-provided tags into concrete behavior rules
- Extract from materials: communication style, decision patterns, interpersonal behavior
- For `celebrity`, retain:
  - mental models
  - decision heuristics
  - expression DNA
  - contradictions
  - honest boundaries
Step 4: Generate and Preview
Use `prompts/work_builder.md` to generate Work content.
Use the family-specific persona builder to generate Persona content.
Mapping:
- colleague → `prompts/persona_builder.md`
- relationship → `prompts/relationship/persona_builder.md`
- celebrity → `prompts/celebrity/persona_builder.md`
- celebrity + budget-unfriendly → `prompts/celebrity/budget_unfriendly/persona_builder.md`
Show the user a summary (5-8 lines each), ask:

```
Work Skill Summary:
- Responsible for: {xxx}
- Tech stack: {xxx}
- CR focus: {xxx}
...

Persona Summary:
- Core personality: {xxx}
- Communication style: {xxx}
- Decision pattern: {xxx}
...

Confirm generation? Or need adjustments?
```
Step 5: Write Files
After user confirmation, do not hand-build a `skills/colleague/{slug}`-style tree. Always go through the writer:
- Resolve the current storage root:
  - colleague → `./skills/colleague`
  - relationship → `./skills/relationship`
  - celebrity → `./skills/celebrity`
- Use the `Write` tool to create three temporary files:
  - `/tmp/dot_skill_{slug}_meta.json`
  - `/tmp/dot_skill_{slug}_work.md`
  - `/tmp/dot_skill_{slug}_persona.md`
- The temporary meta file must include at least:
  - `name`
  - `display_name`
  - `character`
  - `research_profile` (required when `character=celebrity`)
  - `classification.language` (must match the user's language, for example `zh-CN` or `en`)
  - `profile`
  - `tags`
  - `knowledge_sources`
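A meta file satisfying the keys above might look like the following. The example values and the exact nesting of `classification.language` are assumptions for illustration; real values come from intake and analysis:

```python
import json

# Hypothetical example values; real values come from intake and analysis.
meta = {
    "name": "ada-lovelace",                   # hypothetical slug
    "display_name": "Ada Lovelace",
    "character": "celebrity",
    "research_profile": "budget-unfriendly",  # required when character=celebrity
    "classification": {"language": "en"},     # must match the user's language
    "profile": "Analytical pioneer distilled from letters and notes.",
    "tags": ["mathematics", "mental-models"],
    "knowledge_sources": ["published notes", "correspondence"],
}

with open("/tmp/dot_skill_ada-lovelace_meta.json", "w", encoding="utf-8") as f:
    json.dump(meta, f, ensure_ascii=False, indent=2)
```

`ensure_ascii=False` keeps non-ASCII display names (for example Chinese ones) readable in the JSON file.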
- Then call:

```bash
python3 tools/skill_writer.py \
  --action create \
  --character {character} \
  --research-profile {research_profile} \
  --slug {slug} \
  --name "{name}" \
  --meta /tmp/dot_skill_{slug}_meta.json \
  --work /tmp/dot_skill_{slug}_work.md \
  --persona /tmp/dot_skill_{slug}_persona.md \
  --base-dir {resolved_base_dir}
```

- This command will generate: `SKILL.md`, `work.md`, `persona.md`, `work_skill.md`, `persona_skill.md`, `manifest.json`, `meta.json`
- To install the generated role skill into a host, append the relevant flag:
- Claude Code:
--install-claude-skill - OpenClaw:
--install-openclaw-skill - Codex:
--install-codex-skill - Claude Code on Windows: optionally add
--install-claude-command-shim
- Claude Code:
- If the current family is `celebrity`, run a quality check after creation:
  `python3 tools/research/quality_check.py "{resolved_base_dir}/{slug}/SKILL.md" --profile {research_profile}`
- If `source_grounding` still fails for a `celebrity` skill:
  - you may add honest limitation notes and a grounded source summary
  - only add URLs when they are real, specific, and traceable sources
  - never use site roots, topic pages, search pages, or other generic links as fake grounding
  - if no verified external sources exist, keep the FAIL state and explain what source material is still missing
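The "real, specific, traceable" URL rule can be approximated with a heuristic like the one below. This is a sketch of the idea, not the quality checker's actual logic; the listing-page names in the blacklist are assumptions:

```python
from urllib.parse import urlparse

def looks_specific(url: str) -> bool:
    """Accept only URLs that point at a concrete page, not a platform
    root or a search/topic listing page."""
    parts = urlparse(url)
    if parts.scheme not in ("http", "https") or not parts.netloc:
        return False
    path = parts.path.rstrip("/")
    if path == "":                                       # bare site root
        return False
    last = path.rsplit("/", 1)[-1]
    if last in {"search", "explore", "topics", "tag"}:   # listing pages
        return False
    return True

print(looks_specific("https://example.com/"))                    # → False
print(looks_specific("https://example.com/2019/annual-letter"))  # → True
```

A URL that fails this kind of check should be replaced with the actual inspected page, or dropped rather than kept as fake grounding.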
When reporting success, return the correct family-specific location instead of assuming colleague storage.
Evolution Mode: Append Files
When user provides new files or text:
- Read new content using Step 2 methods
- Resolve the base dir for the current family
- Read existing `{resolved_base_dir}/{slug}/work.md` and `persona.md`
- Use the family-specific merger prompt for incremental analysis
- Archive current version (Bash):

```bash
python3 tools/version_manager.py \
  --action backup \
  --character {character} \
  --slug {slug} \
  --base-dir {resolved_base_dir}
```

- Write work/persona delta into temporary patch files
- Call:

```bash
python3 tools/skill_writer.py \
  --action update \
  --character {character} \
  --slug {slug} \
  --work-patch /tmp/dot_skill_{slug}_work_patch.md \
  --persona-patch /tmp/dot_skill_{slug}_persona_patch.md \
  --base-dir {resolved_base_dir}
```

- If the current family is `celebrity`, run the quality check again after the update
Evolution Mode: Conversation Correction
When user expresses "that's wrong" / "he should be":
- Refer to `prompts/correction_handler.md` to identify correction content
- Determine if it belongs to Work (technical/workflow) or Persona (personality/communication)
- If it belongs to Work:
  - Generate `/tmp/dot_skill_{slug}_work_patch.md`
  - The patch must be one or more replaceable `##` sections
  - Call:

```bash
python3 tools/skill_writer.py \
  --action update \
  --character {character} \
  --slug {slug} \
  --work-patch /tmp/dot_skill_{slug}_work_patch.md \
  --base-dir {resolved_base_dir}
```
- If it belongs to Persona:
  - Write the correction record to `/tmp/dot_skill_{slug}_correction.json`
  - For a single correction, write `{scene, wrong, correct}`
  - For multiple persona corrections, write `{"persona_corrections": [{...}, {...}]}`
  - Call:

```bash
python3 tools/skill_writer.py \
  --action update \
  --character {character} \
  --slug {slug} \
  --correction-json /tmp/dot_skill_{slug}_correction.json \
  --base-dir {resolved_base_dir}
```
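The two correction shapes can be written like this. The scene/wrong/correct values are invented examples; only the key structure follows the rules above:

```python
import json

# Single correction record: {scene, wrong, correct}.
single = {
    "scene": "code review disagreement",   # illustrative values only
    "wrong": "backs down immediately",
    "correct": "asks for a failing test before conceding",
}

# Multiple corrections: wrap the records in "persona_corrections".
batch = {"persona_corrections": [single]}

with open("/tmp/dot_skill_example_correction.json", "w", encoding="utf-8") as f:
    json.dump(batch, f, ensure_ascii=False, indent=2)
```

Either shape can be passed via `--correction-json`; the batch form simply carries several records in one update.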
- If the current family is `celebrity`, run the quality check again after the update
- Do not hand-edit `work.md`, `persona.md`, `SKILL.md`, or `meta.json`; always update through `skill_writer.py`
Management Operations
List skills across the three families:
```bash
python3 tools/skill_writer.py --action list --character colleague --base-dir ./skills/colleague
python3 tools/skill_writer.py --action list --character relationship --base-dir ./skills/relationship
python3 tools/skill_writer.py --action list --character celebrity --base-dir ./skills/celebrity
```
Roll back a specific skill version:
```bash
# colleague
python3 tools/version_manager.py --action rollback --character colleague --slug {slug} --version {version} --base-dir ./skills/colleague
# relationship
python3 tools/version_manager.py --action rollback --character relationship --slug {slug} --version {version} --base-dir ./skills/relationship
# celebrity
python3 tools/version_manager.py --action rollback --character celebrity --slug {slug} --version {version} --base-dir ./skills/celebrity
```
Delete a specific skill (after confirming the character family):

```bash
# colleague
rm -rf skills/colleague/{slug}
# relationship
rm -rf skills/relationship/{slug}
# celebrity
rm -rf skills/celebrity/{slug}
```