# Waza read
Invoke whenever the user's message contains any http(s) URL, web page link, or PDF path, even if the user only says "analyze", "summarize", "look at", or "what does X say". Always prefer this skill over WebFetch for any URL. WebFetch is not a substitute and fails on X/Twitter, paywalls, and auth-gated pages. Not for local text files or source code already in the repo.
```shell
git clone https://github.com/tw93/Waza
```

Or install only the `read` skill with a one-liner:

```shell
T=$(mktemp -d) && git clone --depth=1 https://github.com/tw93/Waza "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/read" ~/.claude/skills/tw93-waza-read && rm -rf "$T"
```
`skills/read/SKILL.md`

# Read: Fetch Any URL or PDF as Markdown
Prefix your first line with 🥷 inline, not as its own paragraph.
Convert any URL or local PDF to clean Markdown and save it. No analysis, no summary, no discussion of the content unless explicitly asked.
## Routing
| Input | Method |
| --- | --- |
| , | Feishu API script |
| | Proxy cascade first; built-in WeChat article script only if the proxies fail |
| URL or local PDF path | PDF extraction |
| GitHub URLs (, ) | Prefer raw content or first. Use the proxy cascade only as fallback. |
| , | Proxy cascade (r.jina.ai keeps image URLs). Do not try WebFetch; it 402s. |
| Everything else | Proxy cascade |
After routing, load `references/read-methods.md` and run the commands for the chosen method.
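The proxy cascade boils down to "try each source in order, keep the first non-empty result". A minimal sketch of that pattern, with stand-in functions instead of real proxy calls (`fetch_a`/`fetch_b` are illustrative, not part of the skill):

```shell
# "Try in order, keep the first non-empty result" pattern.
# fetch_a/fetch_b stand in for real reader proxies (e.g. curl via r.jina.ai).
fetch_a() { return 1; }                    # first proxy fails
fetch_b() { printf '# Title\n\nbody\n'; } # second proxy succeeds
out=""
for fetch in fetch_a fetch_b; do
  out=$("$fetch") && [ -n "$out" ] && break
done
printf '%s\n' "$out" | head -n 1
```

The `&& [ -n "$out" ]` guard matters: a proxy that exits 0 but returns an empty body should not stop the cascade.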
## Output Format
- Title: {title}
- Author: {author} (if available)
- Source: {platform}
- URL: {original url}
- Content: {full Markdown, truncated at 200 lines if long}
## Saving
Save to `~/Downloads/{title}.md` with YAML frontmatter by default. Skip saving only if the user says "just preview" or "don't save". Tell the user the saved path.
If `~/Downloads/{title}.md` already exists, append `-1`, `-2`, etc., to the filename. Never overwrite an existing file without explicit confirmation.
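The suffix rule can be sketched as a small helper (hypothetical, for illustration; the skill just applies the rule inline):

```shell
# Hypothetical helper illustrating the -1, -2, ... suffix rule.
next_free_path() {
  dir=$1 title=$2
  path="$dir/$title.md" n=1
  while [ -e "$path" ]; do
    path="$dir/$title-$n.md"
    n=$((n + 1))
  done
  printf '%s\n' "$path"
}

demo=$(mktemp -d)
touch "$demo/Post.md" "$demo/Post-1.md"
next_free_path "$demo" "Post"   # prints {demo}/Post-2.md
```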
## Images
By default only save Markdown. Download images only when the user explicitly asks: "download images", "save images", "带图", "下载图片", or similar.
When asked, after saving the Markdown:

- Extract image URLs: `grep -oE 'https?://[^ )"]+\.(jpg|jpeg|png|webp|gif)' {md_path} | sort -u`
- Create `~/Downloads/{title}-images/` and curl each URL in parallel (`&` + `wait`). Use the same proxy env vars as the fetch step.
- Report the count and folder path. If any download fails, list the failed URLs.
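The extraction step above can be exercised offline. This sketch runs the grep against a throwaway Markdown file and only echoes what it would download; the real step swaps `echo` for a backgrounded `curl` followed by `wait`:

```shell
# Exercise the image-URL extraction against a throwaway Markdown file.
md=$(mktemp)
cat > "$md" <<'EOF'
![a](https://example.com/a.png)
![b](https://example.com/b.jpg)
![a again](https://example.com/a.png)
EOF
grep -oE 'https?://[^ )"]+\.(jpg|jpeg|png|webp|gif)' "$md" | sort -u |
while read -r url; do
  echo "would download: $url"   # real step: curl -fsSLO "$url" &  ... then wait
done
```

Note that `sort -u` deduplicates, so the repeated `a.png` is fetched once.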
## Hard Rules
- Do not summarize or analyze the content. Your job is conversion and storage, not interpretation.
- Never overwrite without confirmation. If the target filename already exists, use an auto-incremented suffix.
- Stop after the save report. Do not suggest follow-up actions ("Would you like me to summarize?", "Next, you could...") unless the user asks.
## Gotchas
| What happened | Rule |
|---|---|
| Fetched a paywalled article and returned a login page as Markdown | Inspect the first 10 lines for paywall signals ("Subscribe", "Sign in", "Continue reading"). If found, stop and warn the user. Do not save the login page. |
| r.jina.ai or defuddle.md returned empty for a JS-heavy site | Try the local fallback ( or ) before giving up. |
| Network failures | Prepend local proxy env vars if available and retry once. |
| Long content | Truncate the displayed content at 200 lines, as in the output format above. |
| Local fallback tools returned JSON | Extract the Markdown-bearing field. Raw JSON is not a valid final output. |
| All methods failed | Stop and tell the user what was tried and what failed. Suggest opening the URL in a browser or providing an alternative. Do not silently return empty or partial results. |
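The paywall gotcha in the table above can be checked mechanically. A sketch, where the signal list mirrors the table and is not exhaustive:

```shell
# Scan the first 10 lines of fetched Markdown for paywall/login signals.
md=$(mktemp)
printf 'Subscribe to keep reading\nrest of the login page\n' > "$md"
if head -n 10 "$md" | grep -qiE 'subscribe|sign in|continue reading'; then
  echo "paywall detected: warn the user, do not save"
fi
```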