Claude-skill-registry open-web-unlocker
install
source · Clone the upstream repo
git clone https://github.com/majiayu000/claude-skill-registry
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/majiayu000/claude-skill-registry "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/other/other/open-web-unlocker" ~/.claude/skills/majiayu000-claude-skill-registry-open-web-unlocker && rm -rf "$T"
manifest: skills/other/other/open-web-unlocker/SKILL.md
Open Web Unlocker — Fetch Skill
Open Web Unlocker fetches web pages and returns clean markdown, raw HTML, or structured JSON. It handles static pages, JS-rendered pages, anti-bot challenges, and search engine results — all via a single CLI command with no installation required.
Quick start
bunx open-web-unlocker fetch <url>
Bun (bunx) is preferred. Node (npx) also works.
Format selection
bunx open-web-unlocker fetch <url> --format markdown   # default
bunx open-web-unlocker fetch <url> --format json
bunx open-web-unlocker fetch <url> --format html
Which format to use:
| Format | When to use |
|---|---|
| markdown | Default. Clean content with nav/footer/boilerplate stripped. Best for reading or summarizing pages. |
| json | When you need structured extracted fields (title, author, price, etc.). |
| html | When you need the full raw page including all elements. |
50+ site-specific parsers produce high-quality markdown and JSON for common domains. Pages without a parser fall back to generic extraction, which may include more noise.
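As a sketch of how format selection might be scripted, the helper below builds the fetch command for a given URL and format; the function name build_fetch_cmd is hypothetical, not part of the CLI:

```shell
# Hypothetical helper (not part of the CLI): build the fetch command
# for a URL and format, defaulting to markdown as the tool does.
build_fetch_cmd() {
  local url="$1" fmt="${2:-markdown}"
  printf 'bunx open-web-unlocker fetch %s --format %s\n' "$url" "$fmt"
}

build_fetch_cmd "https://example.com" json
```

Run the constructed command with `eval "$(build_fetch_cmd "https://example.com" json)"`, or call the tool directly as shown above.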
Search engines
Fetch search result pages directly — Open Web Unlocker parses them into structured results:
bunx open-web-unlocker fetch "https://search.brave.com/search?q=query"
bunx open-web-unlocker fetch "https://www.bing.com/search?q=query"
bunx open-web-unlocker fetch "https://duckduckgo.com/?q=query"
Google is not supported.
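A minimal sketch for building a search URL from a plain query before fetching it; the helper is hypothetical and only encodes spaces (full percent-encoding is out of scope):

```shell
# Hypothetical helper: turn a plain query into a Brave Search URL.
# Only spaces are encoded (as '+'); other special characters are left as-is.
brave_search_url() {
  local q="${1// /+}"
  printf 'https://search.brave.com/search?q=%s\n' "$q"
}

brave_search_url "open web unlocker"
```

The resulting URL can then be passed to `bunx open-web-unlocker fetch` as in the examples above.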
Timeout
bunx open-web-unlocker fetch <url> --timeout 45000
Default per-strategy timeouts: ~8s for fetch, ~15s for browser. Increase --timeout for slow or JS-heavy pages.
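One way to escalate the timeout automatically is a retry wrapper; the function below is a sketch, and it assumes the CLI exits non-zero when a fetch fails:

```shell
# Sketch: run a command, retrying with escalating --timeout values.
# Assumption: the wrapped command exits non-zero on failure.
retry_with_timeouts() {
  local t
  for t in 8000 15000 45000; do
    if "$@" --timeout "$t"; then
      return 0
    fi
  done
  return 1
}

# Usage:
# retry_with_timeouts bunx open-web-unlocker fetch "https://example.com"
```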