Learn-skills.dev katana-web-crawling
Guides use of ProjectDiscovery Katana for web crawling and spidering in security testing and recon workflows. Covers installation, standard vs headless mode, scope and rate limits, JSONL output, and piping from httpx or URL lists. Use when the user mentions Katana, projectdiscovery/katana, web crawling, spidering, endpoint discovery, attack surface mapping, or chaining crawlers in automation pipelines.
git clone https://github.com/NeverSight/learn-skills.dev
T=$(mktemp -d) && git clone --depth=1 https://github.com/NeverSight/learn-skills.dev "$T" && mkdir -p ~/.claude/skills && cp -r "$T/data/skills-md/agentic-reserve/blockint-skills/katana-web-crawling" ~/.claude/skills/neversight-learn-skills-dev-katana-web-crawling && rm -rf "$T"
data/skills-md/agentic-reserve/blockint-skills/katana-web-crawling/SKILL.md
Katana web crawling
Katana is a fast crawler/spider from ProjectDiscovery, aimed at automation pipelines (URLs in → discovered endpoints out). Official docs and flags: the repository README and `katana -h`.
Scope and ethics
Use only on systems you own or are explicitly authorized to test (contract, bug bounty program rules, internal env). Crawl gently: set concurrency, rate limits, and depth to reduce load. Misuse can violate law and terms of service—you are responsible for your actions (tool ships with that warning).
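For example, a conservative run against an authorized target might cap fetchers, request rate, and depth (the values here are illustrative starting points, not upstream recommendations):
katana -u https://example.com -c 5 -rl 10 -d 2 -silent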
Installation
Go (requires Go 1.25+ per upstream; verify current README if install fails):
CGO_ENABLED=1 go install github.com/projectdiscovery/katana/cmd/katana@latest
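If the install succeeds, the binary lands in your Go bin directory (typically ~/go/bin); a quick sanity check:
katana -version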
Docker:
docker pull projectdiscovery/katana:latest
docker run projectdiscovery/katana:latest -u https://example.com
Headless in Docker often needs `-system-chrome` and Chrome/Chromium available; see the upstream Docker section.
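A minimal sketch of such a run, assuming the image you pull ships a usable Chrome (fall back to the upstream Docker instructions if it errors):
docker run projectdiscovery/katana:latest -u https://example.com -headless -system-chrome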
Input
- Single/multiple URLs: `-u https://a.com` or comma-separated URLs
- File: `-list urls.txt`
- STDIN: `echo https://example.com | katana` or `cat domains | httpx | katana`
Modes
| Mode | When |
|---|---|
| Standard (default) | Fast; uses Go HTTP client; no full JS/DOM render—may miss post-render routes |
| Headless (`-headless`) | Browser context; better for JS-heavy apps; optional |
Enable JS file parsing for more endpoints: `-js-crawl` (`-jc`). `-jsluice` is heavier.
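For example, a crawl that also parses discovered JavaScript files for extra endpoints (flags as documented above; the depth value is illustrative):
katana -u https://example.com -jc -d 3 -silent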
Flags to know first
| Flag | Purpose |
|---|---|
| `-d`, `-depth` | Max crawl depth (default 3) |
| `-c`, `-concurrency` | Parallel fetchers |
| `-rl`, `-rate-limit` | Max requests per second |
| `-ct`, `-crawl-duration` | Cap total crawl time (e.g. `10m`) |
| `-cs` / `-cos` | In-scope / out-of-scope URL regex |
| `-ns` | Disable default host scope if you need cross-host (use carefully) |
| `-iqp` | Ignore same path with different query strings |
|  | Reduce near-duplicate paths |
| `-kf`, `-known-files` | `robotstxt` / `sitemapxml` etc. (min depth 3 for full coverage per docs) |
| `-jsonl`, `-j` | JSONL output for scripting |
| `-o`, `-output` | Write to file |
| `-sr`, `-store-response` | Store HTTP requests/responses for review (disk use) |
| `-proxy` | HTTP/SOCKS5 proxy |
| `-H` | Extra headers (auth, cookies) via `header: value` |
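Putting a few of these together, a sketch of an authenticated, scoped, rate-limited crawl (the cookie value, hostname, and scope regex are placeholders for your own authorized target):
katana -u https://app.example.com -H 'Cookie: session=REPLACE_ME' -cs '.*app\.example\.com.*' -rl 20 -d 3 -jsonl -o app-crawl.jsonl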
Run `katana -h` for the full list (filters, form fill, tech detect, TLS options, etc.).
Minimal examples
katana -u https://example.com -d 2 -silent
katana -u https://example.com -jsonl -o endpoints.jsonl
katana -list seeds.txt -d 3 -cs '.*\.example\.com.*' -rl 30 -jsonl
Headless (JS-heavy target):
katana -u https://example.com -headless -d 2
Pipelines
Common pattern: resolve live HTTP first, then crawl:
cat domains.txt | httpx -silent | katana -jsonl -o crawl.jsonl
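To consume that JSONL downstream, a minimal jq sketch, assuming each line stores the discovered URL under request.endpoint (check a sample line of your own output before relying on the field name):
jq -r '.request.endpoint' crawl.jsonl | sort -u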
Combine with other PD tools (naabu, nuclei, etc.) only in authorized assessments.
Troubleshooting
- `CGO_ENABLED=1` required for `go install` per README.
- Headless failures: try `-system-chrome`, ensure Chrome/Chromium is installed, or use the Docker image with the documented Chrome setup.
- Health check: `-health-check` / `-hc`.
References
- Source and releases: github.com/projectdiscovery/katana