Awesome-omni-skills ffuf-web-fuzzing-v2
FFUF (Fuzz Faster U Fool) workflow skill. Use this skill when the user needs expert guidance for ffuf web fuzzing during penetration testing, including authenticated fuzzing with raw requests, auto-calibration, and result analysis, and when the operator should preserve the upstream workflow, copied support files, and provenance before merging or handing off.
```bash
git clone https://github.com/diegosouzapw/awesome-omni-skills

T=$(mktemp -d) && git clone --depth=1 https://github.com/diegosouzapw/awesome-omni-skills "$T" \
  && mkdir -p ~/.claude/skills \
  && cp -r "$T/skills/ffuf-web-fuzzing-v2" ~/.claude/skills/diegosouzapw-awesome-omni-skills-ffuf-web-fuzzing-v2 \
  && rm -rf "$T"
```
skills/ffuf-web-fuzzing-v2/SKILL.md

FFUF (Fuzz Faster U Fool) Skill
Overview
This public intake copy packages plugins/antigravity-awesome-skills/skills/ffuf-web-fuzzing from https://github.com/sickn33/antigravity-awesome-skills into the native Omni Skills editorial shape without hiding its origin.
Use it when the operator needs the upstream workflow, support files, and repository context to stay intact while the public validator and private enhancer continue their normal downstream flow.
This intake keeps the copied upstream files intact and uses metadata.json plus ORIGIN.md as the provenance anchor for review.
FFUF (Fuzz Faster U Fool) Skill
Imported source sections that did not map cleanly to the public headings are still preserved below or in the support files. Notable imported sections: Core Concepts, Common Use Cases, Filtering and Matching, Rate Limiting and Timing, Output Options, Advanced Techniques.
When to Use This Skill
Use this section as the trigger filter. It should make the activation boundary explicit before the operator loads files, runs commands, or opens a pull request.
- You are fuzzing web targets with ffuf during authorized security testing or penetration testing.
- The task involves content discovery, subdomain enumeration, parameter fuzzing, or authenticated request fuzzing.
- You need guidance on wordlists, filtering, calibration, and interpreting ffuf results efficiently.
- Use when the request clearly matches the imported source intent: Expert guidance for ffuf web fuzzing during penetration testing, including authenticated fuzzing with raw requests, auto-calibration, and result analysis.
- Use when the operator should preserve upstream workflow detail instead of rewriting the process from scratch.
- Use when provenance needs to stay visible in the answer, PR, or review packet.
Operating Table
| Situation | Start here | Why it matters |
|---|---|---|
| First-time use | | Confirms repository, branch, commit, and imported path before touching the copied workflow |
| Provenance review | | Gives reviewers a plain-language audit trail for the imported source |
| Workflow execution | | Starts with the smallest copied file that materially changes execution |
| Supporting context | | Adds the next most relevant copied source file without loading the entire package |
| Handoff decision | | Helps the operator switch to a stronger native skill when the task drifts |
Workflow
This workflow is intentionally editorial and operational at the same time. It keeps the imported source useful to the operator while still satisfying the public intake standards that feed the downstream enhancer flow.
- Install ffuf first (via `go install github.com/ffuf/ffuf/v2@latest`, `brew install ffuf` on macOS, or a binary release from https://github.com/ffuf/ffuf/releases/latest).
- Confirm the user goal, the scope of the imported workflow, and whether this skill is still the right router for the task.
- Read the overview and provenance files before loading any copied upstream support files.
- Load only the references, examples, prompts, or scripts that materially change the outcome for the current request.
- Execute the upstream workflow while keeping provenance and source boundaries explicit in the working notes.
- Validate the result against the upstream expectations and the evidence you can point to in the copied files.
- Escalate or hand off to a related skill when the work moves out of this imported workflow's center of gravity.
Imported Workflow Notes
Imported: Installation
```bash
# Using Go
go install github.com/ffuf/ffuf/v2@latest

# Using Homebrew (macOS)
brew install ffuf

# Binary download
# Download from: https://github.com/ffuf/ffuf/releases/latest
```
Imported: Overview
FFUF is a fast web fuzzer written in Go, designed for discovering hidden content, directories, files, subdomains, and testing for vulnerabilities during penetration testing. It's significantly faster than traditional tools like dirb or dirbuster.
Imported: Core Concepts
The FUZZ Keyword
The `FUZZ` keyword is used as a placeholder that gets replaced with entries from your wordlist. You can place it anywhere:
- URLs: `https://target.com/FUZZ`
- Headers: `-H "Host: FUZZ"`
- POST data: `-d "username=admin&password=FUZZ"`
- Multiple locations with custom keywords: use `CUSTOM` instead of `FUZZ`, then `-w wordlist.txt:CUSTOM`
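The substitution mechanics can be sketched in a few lines of Python. This only illustrates the concept; it is not how ffuf itself is implemented:

```python
def expand(template: str, keyword: str, words: list[str]) -> list[str]:
    """Replace every occurrence of `keyword` in `template` with each word."""
    return [template.replace(keyword, w) for w in words]

# URL placement (ffuf: -u https://target.com/FUZZ)
urls = expand("https://target.com/FUZZ", "FUZZ", ["admin", "login"])

# Custom keyword placement (ffuf: -w wordlist.txt:CUSTOM)
headers = expand("Host: CUSTOM.target.com", "CUSTOM", ["dev", "staging"])
```

Each wordlist entry produces one concrete request; custom keywords behave identically to `FUZZ`, they just let you bind different wordlists to different positions.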
Multi-wordlist Modes
- clusterbomb: Tests all combinations (default) - cartesian product
- pitchfork: Iterates through wordlists in parallel (1-to-1 matching)
- sniper: Tests one position at a time (for multiple FUZZ positions)
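The difference between the two main modes is easy to show with two small wordlists (an illustration of the pairing logic, not ffuf code):

```python
from itertools import product

users = ["alice", "bob"]
roles = ["admin", "viewer", "guest"]

# clusterbomb: cartesian product -> 2 * 3 = 6 requests
clusterbomb = list(product(users, roles))

# pitchfork: parallel 1-to-1 iteration -> stops at the shorter list, 2 requests
pitchfork = list(zip(users, roles))
```

Clusterbomb request counts multiply quickly, so prefer pitchfork when the wordlists are already paired (e.g., leaked username:password combos).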
Examples
Example 1: Ask for the upstream workflow directly
Use @ffuf-web-fuzzing-v2 to handle <task>. Start from the copied upstream workflow, load only the files that change the outcome, and keep provenance visible in the answer.
Explanation: This is the safest starting point when the operator needs the imported workflow, but not the entire repository.
Example 2: Ask for a provenance-grounded review
Review @ffuf-web-fuzzing-v2 against metadata.json and ORIGIN.md, then explain which copied upstream files you would load first and why.
Explanation: Use this before review or troubleshooting when you need a precise, auditable explanation of origin and file selection.
Example 3: Narrow the copied support files before execution
Use @ffuf-web-fuzzing-v2 for <task>. Load only the copied references, examples, or scripts that change the outcome, and name the files explicitly before proceeding.
Explanation: This keeps the skill aligned with progressive disclosure instead of loading the whole copied package by default.
Example 4: Build a reviewer packet
Review @ffuf-web-fuzzing-v2 using the copied upstream files plus provenance, then summarize any gaps before merge.
Explanation: This is useful when the PR is waiting for human review and you want a repeatable audit packet.
Best Practices
Treat the generated public skill as a reviewable packaging layer around the upstream repository. The goal is to keep provenance explicit and load only the copied source material that materially improves execution.
Imported Operating Notes
Imported: Best Practices
1. ALWAYS Use Auto-Calibration
Use `-ac` by default for every scan. This is non-negotiable for productive pentesting:

```bash
ffuf -w wordlist.txt -u https://target.com/FUZZ -ac
```
2. Use Raw Requests for Authentication
Don't struggle with command-line flags for complex auth. Capture the full request and use `--request`:

```bash
# 1. Capture authenticated request from Burp/DevTools
# 2. Save to req.txt with FUZZ keyword in place
# 3. Run with -ac
ffuf --request req.txt -w wordlist.txt -ac -o results.json
```
3. Use Appropriate Wordlists
- Directory discovery: SecLists Discovery/Web-Content (raft-large-directories.txt, directory-list-2.3-medium.txt)
- Subdomains: SecLists Discovery/DNS (subdomains-top1million-5000.txt)
- Parameters: SecLists Discovery/Web-Content (burp-parameter-names.txt)
- Usernames: SecLists Usernames
- Passwords: SecLists Passwords
- Source: https://github.com/danielmiessler/SecLists
4. Rate Limiting for Stealth

Use `-rate` to avoid triggering WAF/IDS or overwhelming the server:

```bash
ffuf -w wordlist.txt -u https://target.com/FUZZ -rate 2 -t 10
```
5. Filter Strategically

- Check the default response first to identify common response sizes, status codes, or patterns
- Use `-fs` to filter by size or `-fc` to filter by status code
- Combine filters: `-fc 403,404 -fs 1234`
6. Save Results Appropriately

Always save results to a file for later analysis:

```bash
ffuf -w wordlist.txt -u https://target.com/FUZZ -o results.json -of json
```
7. Use Interactive Mode
Press ENTER during execution to drop into interactive mode where you can:
- Adjust filters on the fly
- Save current results
- Restart the scan
- Manage the queue
8. Recursion Depth

Be careful with recursion depth to avoid getting stuck in infinite loops or overwhelming the server:

```bash
ffuf -w wordlist.txt -u https://target.com/FUZZ -recursion -recursion-depth 2 -maxtime-job 120
```
Troubleshooting
Problem: The operator skipped the imported context and answered too generically
Symptoms: The result ignores the upstream workflow in plugins/antigravity-awesome-skills/skills/ffuf-web-fuzzing, fails to mention provenance, or does not use any copied source files at all.
Solution: Re-open metadata.json, ORIGIN.md, and the most relevant copied upstream files. Load only the files that materially change the answer, then restate the provenance before continuing.
Problem: The imported workflow feels incomplete during review
Symptoms: Reviewers can see the generated SKILL.md, but they cannot quickly tell which references, examples, or scripts matter for the current task.
Solution: Point at the exact copied references, examples, scripts, or assets that justify the path you took. If the gap is still real, record it in the PR instead of hiding it.
Problem: The task drifted into a different specialization
Symptoms: The imported skill starts in the right place, but the work turns into debugging, architecture, design, security, or release orchestration that a native skill handles better.

Solution: Use the related skills section to hand off deliberately. Keep the imported provenance visible so the next skill inherits the right context instead of starting blind.
Imported Troubleshooting Notes
Imported: Troubleshooting
Too Many False Positives
- Use `-ac` for auto-calibration
- Check default response and filter by size with `-fs`
- Use regex filtering with `-fr`
Too Slow
- Increase threads: `-t 100`
- Reduce wordlist size
- Use `-ignore-body` if you don't need response content
Getting Blocked
- Reduce rate: `-rate 2`
- Add delays: `-p 0.5-1.5`
- Reduce threads: `-t 10`
- Randomize User-Agent
- Use proxy rotation
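There is no ffuf flag I am aware of that rotates the User-Agent per request, so randomization is usually done when composing the `-H` value for each run. A minimal sketch with illustrative UA strings:

```python
import random

# Illustrative pool; swap in full, current browser strings for real engagements.
USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7)",
    "Mozilla/5.0 (X11; Linux x86_64)",
]

def random_ua_header() -> str:
    """Build a value for ffuf's -H flag with a randomly chosen User-Agent."""
    return f"User-Agent: {random.choice(USER_AGENTS)}"

# Usage idea: ffuf -w wordlist.txt -u https://target.com/FUZZ -H "$(this value)"
```

Combine this with `-rate` and lower `-t` when the target is actively blocking.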
Missing Results
- Check if you're filtering too aggressively
- Use `-mc all` to see all responses
- Disable auto-calibration temporarily
- Use verbose mode `-v` to see what's happening
Related Skills
- @error-debugging-multi-agent-review-v2: use when the work is better handled by that native specialization after this imported skill establishes context.
- @error-detective-v2: use when the work is better handled by that native specialization after this imported skill establishes context.
- @error-diagnostics-error-analysis-v2: use when the work is better handled by that native specialization after this imported skill establishes context.
- @error-diagnostics-error-trace-v2: use when the work is better handled by that native specialization after this imported skill establishes context.
Additional Resources
Use this support matrix and the linked files below as the operator packet for this imported skill. They should reflect real copied source material, not generic scaffolding.
| Resource family | What it gives the reviewer | Example path |
|---|---|---|
| references | copied reference notes, guides, or background material from upstream | |
| examples | worked examples or reusable prompts copied from upstream | |
| scripts | upstream helper scripts that change execution or validation | |
| agents | routing or delegation notes that are genuinely part of the imported package | |
| assets | supporting assets or schemas copied from the source package | |
Imported Reference Notes
Imported: Resources
- Official GitHub: https://github.com/ffuf/ffuf
- Wiki: https://github.com/ffuf/ffuf/wiki
- Codingo's Guide: https://codingo.io/tools/ffuf/bounty/2020/09/17/everything-you-need-to-know-about-ffuf.html
- Practice Lab: http://ffuf.me
- SecLists Wordlists: https://github.com/danielmiessler/SecLists
Imported: Quick Reference Card
| Task | Command Template |
|---|---|
| Directory Discovery | `ffuf -w wordlist.txt -u https://target.com/FUZZ -ac` |
| Subdomain Discovery | `ffuf -w subdomains.txt -u https://target.com -H "Host: FUZZ.target.com" -ac` |
| Parameter Fuzzing | `ffuf -w params.txt -u https://target.com/script.php?FUZZ=value -ac` |
| POST Data Fuzzing | `ffuf -w passwords.txt -X POST -d "username=admin&password=FUZZ" -u https://target.com/login -ac` |
| With Extensions | Add `-e .php,.html,.txt` |
| Filter Status | Add `-fc 403,404` |
| Filter Size | Add `-fs 4242` |
| Rate Limit | Add `-rate 2` |
| Save Output | Add `-o results.json -of json` |
| Verbose | Add `-v` |
| Recursion | Add `-recursion -recursion-depth 2` |
| Through Proxy | Add `-x http://127.0.0.1:8080` |
Imported: Additional Resources
This skill includes supplementary materials in the resources/ directory:
Resource Files
- WORDLISTS.md: Comprehensive guide to SecLists wordlists, recommended lists for different scenarios, file extensions, and quick reference patterns
- REQUEST_TEMPLATES.md: Pre-built req.txt templates for common authentication scenarios (JWT, OAuth, session cookies, API keys, etc.) with usage examples
Helper Script
- ffuf_helper.py: Python script to assist with:
- Analyzing ffuf JSON results for anomalies and interesting findings
- Creating req.txt template files from command-line arguments
- Generating number-based wordlists for IDOR testing
Helper Script Usage:
```bash
# Analyze results to find interesting anomalies
python3 ffuf_helper.py analyze results.json

# Create authenticated request template
python3 ffuf_helper.py create-req -o req.txt -m POST -u "https://api.target.com/users" \
  -H "Authorization: Bearer TOKEN" -d '{"action":"FUZZ"}'

# Generate IDOR testing wordlist
python3 ffuf_helper.py wordlist -o ids.txt -t numbers -s 1 -e 10000
```
When to use resources:
- Users need wordlist recommendations → Reference WORDLISTS.md
- Users need help with authenticated requests → Reference REQUEST_TEMPLATES.md
- Users want to analyze results → Use ffuf_helper.py analyze
- Users need to generate req.txt → Use ffuf_helper.py create-req
- Users need number ranges for IDOR → Use ffuf_helper.py wordlist
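If ffuf_helper.py is not available, the number-wordlist generation can be approximated in a few lines. The zero-padding option below is an assumption for illustration, not necessarily the helper's exact behavior:

```python
def number_wordlist(start: int, end: int, zero_pad: int = 0) -> list[str]:
    """Generate numeric IDs for IDOR testing, optionally zero-padded."""
    return [str(n).zfill(zero_pad) for n in range(start, end + 1)]

# Write IDs 1..10000 to a file for use as an ffuf wordlist (-w ids.txt)
with open("ids.txt", "w") as f:
    f.write("\n".join(number_wordlist(1, 10000)) + "\n")
```

Zero-padded variants (`0001`, `0002`, …) are worth a second pass, since some applications format IDs that way.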
Imported: Common Use Cases
1. Directory and File Discovery
```bash
# Basic directory fuzzing
ffuf -w /path/to/wordlist.txt -u https://target.com/FUZZ

# With file extensions
ffuf -w /path/to/wordlist.txt -u https://target.com/FUZZ -e .php,.html,.txt,.pdf

# Colored and verbose output
ffuf -w /path/to/wordlist.txt -u https://target.com/FUZZ -c -v

# With recursion (finds nested directories)
ffuf -w /path/to/wordlist.txt -u https://target.com/FUZZ -recursion -recursion-depth 2
```
2. Subdomain Enumeration
```bash
# Virtual host discovery
ffuf -w /path/to/subdomains.txt -u https://target.com -H "Host: FUZZ.target.com" -fs 4242

# Note: -fs 4242 filters out responses of size 4242 (adjust based on default response size)
```
3. Parameter Fuzzing
```bash
# GET parameter names
ffuf -w /path/to/params.txt -u https://target.com/script.php?FUZZ=test_value -fs 4242

# GET parameter values
ffuf -w /path/to/values.txt -u https://target.com/script.php?id=FUZZ -fc 401

# Multiple parameters
ffuf -w params.txt:PARAM -w values.txt:VAL -u https://target.com/?PARAM=VAL -mode clusterbomb
```
4. POST Data Fuzzing
```bash
# Basic POST fuzzing
ffuf -w /path/to/passwords.txt -X POST -d "username=admin&password=FUZZ" -u https://target.com/login.php -fc 401

# JSON POST data
ffuf -w entries.txt -u https://target.com/api -X POST -H "Content-Type: application/json" -d '{"name": "FUZZ", "key": "value"}' -fr "error"

# Fuzzing multiple POST fields
ffuf -w users.txt:USER -w passes.txt:PASS -X POST -d "username=USER&password=PASS" -u https://target.com/login -mode pitchfork
```
5. Header Fuzzing
```bash
# Custom headers
ffuf -w /path/to/wordlist.txt -u https://target.com -H "X-Custom-Header: FUZZ"

# Multiple headers
ffuf -w /path/to/wordlist.txt -u https://target.com -H "User-Agent: FUZZ" -H "X-Forwarded-For: 127.0.0.1"
```
Imported: Filtering and Matching
Matchers (Include Results)
- `-mc`: Match status codes (default: 200-299,301,302,307,401,403,405,500)
- `-ml`: Match line count
- `-mr`: Match regex
- `-ms`: Match response size
- `-mt`: Match response time (e.g., `>100` or `<100` milliseconds)
- `-mw`: Match word count
Filters (Exclude Results)
- `-fc`: Filter status codes (e.g., `-fc 404,403,401`)
- `-fl`: Filter line count
- `-fr`: Filter regex (e.g., `-fr "error"`)
- `-fs`: Filter response size (e.g., `-fs 42,4242`)
- `-ft`: Filter response time
- `-fw`: Filter word count
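The matcher/filter interaction can be modeled on synthetic responses. This illustrates the flag semantics only, not ffuf's internal logic:

```python
# Each tuple: (status_code, response_size)
responses = [(200, 512), (404, 4242), (403, 4242), (200, 4242), (500, 1024)]

MATCH_STATUS = {200, 301, 302, 401, 403, 405, 500}  # like a -mc default subset
FILTER_STATUS = {403, 404}                           # like -fc 403,404
FILTER_SIZE = {4242}                                 # like -fs 4242

kept = [
    (code, size) for code, size in responses
    if code in MATCH_STATUS            # matchers decide what is included...
    and code not in FILTER_STATUS      # ...then filters exclude from that set
    and size not in FILTER_SIZE
]
```

Here only `(200, 512)` and `(500, 1024)` survive: everything else is either unmatched, status-filtered, or size-filtered.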
Auto-Calibration (USE BY DEFAULT!)
CRITICAL: Always use `-ac` unless you have a specific reason not to. This is especially important when having Claude analyze results, as it dramatically reduces noise and false positives.

```bash
# Auto-calibration - ALWAYS USE THIS
ffuf -w /path/to/wordlist.txt -u https://target.com/FUZZ -ac

# Per-host auto-calibration (useful for multiple hosts)
ffuf -w /path/to/wordlist.txt -u https://target.com/FUZZ -ach

# Custom auto-calibration string (for specific patterns)
ffuf -w /path/to/wordlist.txt -u https://target.com/FUZZ -acc "404NotFound"
```
Why `-ac` is essential:
- Automatically detects and filters repetitive false positive responses
- Removes noise from dynamic websites with random content
- Makes results analysis much easier for both humans and Claude
- Prevents thousands of identical 404/403 responses from cluttering output
- Adapts to the target's specific behavior
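Conceptually, auto-calibration fires a few junk probes first and then filters anything resembling that baseline. A rough model of the idea (ffuf's actual algorithm also weighs word and line counts, among other signals):

```python
from collections import Counter

def calibrate(probe_sizes: list[int]) -> set[int]:
    """Sizes seen repeatedly across junk probes are treated as baseline noise."""
    counts = Counter(probe_sizes)
    return {size for size, n in counts.items() if n >= 2}

# Junk probes (random non-existent paths) all came back at size 4242
noise = calibrate([4242, 4242, 4242])

results = [("admin", 1337), ("xyz123", 4242), ("backup", 2048)]
interesting = [(word, size) for word, size in results if size not in noise]
```

After calibration only `admin` and `backup` remain: the baseline-sized `xyz123` response is discarded as the target's catch-all page.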
When Claude analyzes your ffuf results, `-ac` is MANDATORY: without it, Claude will waste time sifting through thousands of false positives instead of finding the interesting anomalies.
Imported: Rate Limiting and Timing
Rate Control
```bash
# Limit to 2 requests per second (stealth mode)
ffuf -w /path/to/wordlist.txt -u https://target.com/FUZZ -rate 2

# Add delay between requests (0.1 to 2 seconds random)
ffuf -w /path/to/wordlist.txt -u https://target.com/FUZZ -p 0.1-2.0

# Set number of concurrent threads (default: 40)
ffuf -w /path/to/wordlist.txt -u https://target.com/FUZZ -t 10
```
Time Limits
```bash
# Maximum total execution time (60 seconds)
ffuf -w /path/to/wordlist.txt -u https://target.com/FUZZ -maxtime 60

# Maximum time per job (useful with recursion)
ffuf -w /path/to/wordlist.txt -u https://target.com/FUZZ -maxtime-job 60 -recursion
```
Imported: Output Options
Output Formats
```bash
# JSON output
ffuf -w /path/to/wordlist.txt -u https://target.com/FUZZ -o results.json

# HTML output
ffuf -w /path/to/wordlist.txt -u https://target.com/FUZZ -of html -o results.html

# CSV output
ffuf -w /path/to/wordlist.txt -u https://target.com/FUZZ -of csv -o results.csv

# All formats
ffuf -w /path/to/wordlist.txt -u https://target.com/FUZZ -of all -o results

# Silent mode (no progress, only results)
ffuf -w /path/to/wordlist.txt -u https://target.com/FUZZ -s

# Pipe to file with tee
ffuf -w /path/to/wordlist.txt -u https://target.com/FUZZ -s | tee results.txt
```
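Once results are saved as JSON they are easy to post-process. In ffuf's JSON output the findings live in a top-level results array whose entries carry fields such as status, length, and url (verify the field names against your ffuf version). A minimal reader sketch:

```python
import json

def load_hits(path: str, statuses=frozenset({200, 301, 302})):
    """Return (status, length, url) tuples for matching entries in a ffuf JSON file."""
    with open(path) as f:
        data = json.load(f)
    return [
        (r["status"], r["length"], r["url"])
        for r in data.get("results", [])
        if r["status"] in statuses
    ]

# Usage: hits = load_hits("results.json"); sort by length to spot anomalies
```

Sorting the tuples by length is a quick way to surface responses that differ from the crowd.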
Imported: Advanced Techniques
Using Raw HTTP Requests (Critical for Authenticated Fuzzing)
This is one of the most powerful features of ffuf, especially for authenticated requests with complex headers, cookies, or tokens.
Workflow:
- Capture a full authenticated request (from Burp Suite, browser DevTools, etc.)
- Save it to a file (e.g., req.txt)
- Replace the value you want to fuzz with the FUZZ keyword
- Use the `--request` flag

```bash
# From a file containing raw HTTP request
ffuf --request req.txt -w /path/to/wordlist.txt -ac
```
Example req.txt file:
```
POST /api/v1/users/FUZZ HTTP/1.1
Host: target.com
User-Agent: Mozilla/5.0
Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9...
Cookie: session=abc123xyz; csrftoken=def456
Content-Type: application/json
Content-Length: 27

{"action":"view","id":"1"}
```
Use Cases:
- Fuzzing authenticated endpoints with complex auth headers
- Testing API endpoints with JWT tokens
- Fuzzing with CSRF tokens, session cookies, and custom headers
- Testing endpoints that require specific User-Agents or Accept headers
- POST/PUT/DELETE requests with authentication
Pro Tips:
- You can place FUZZ in multiple locations: URL path, headers, body
- Use `-request-proto https` if needed (default is https)
- Always use `-ac` to filter out authenticated "not found" or error responses
- Great for IDOR testing: fuzz user IDs, document IDs, etc. in authenticated contexts

```bash
# Common authenticated fuzzing patterns
ffuf --request req.txt -w user_ids.txt -ac -mc 200 -o results.json

# With multiple FUZZ positions using custom keywords
ffuf --request req.txt -w endpoints.txt:ENDPOINT -w ids.txt:ID -mode pitchfork -ac
```
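Writing req.txt files by hand is error-prone. The sketch below assembles one programmatically, similar in spirit to the ffuf_helper.py create-req mode; the function name and interface are illustrative, not the helper's actual API:

```python
def build_request(method: str, path: str, host: str,
                  headers: dict[str, str], body: str = "") -> str:
    """Assemble a raw HTTP/1.1 request suitable for saving as req.txt."""
    lines = [f"{method} {path} HTTP/1.1", f"Host: {host}"]
    lines += [f"{k}: {v}" for k, v in headers.items()]
    if body:
        lines.append(f"Content-Length: {len(body.encode())}")
    # CRLF line endings, blank line, then the body
    return "\r\n".join(lines) + "\r\n\r\n" + body

req = build_request(
    "POST", "/api/v1/users/FUZZ", "target.com",
    {"Authorization": "Bearer TOKEN", "Content-Type": "application/json"},
    '{"action":"view","id":"1"}',
)
with open("req.txt", "w", newline="") as f:
    f.write(req)
```

Computing Content-Length from the actual body avoids a common source of silently broken raw requests.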
Proxy Usage
```bash
# HTTP proxy (useful for Burp Suite)
ffuf -w /path/to/wordlist.txt -u https://target.com/FUZZ -x http://127.0.0.1:8080

# SOCKS5 proxy
ffuf -w /path/to/wordlist.txt -u https://target.com/FUZZ -x socks5://127.0.0.1:1080

# Replay matched requests through proxy
ffuf -w /path/to/wordlist.txt -u https://target.com/FUZZ -replay-proxy http://127.0.0.1:8080
```
Cookie and Authentication
```bash
# Using cookies
ffuf -w /path/to/wordlist.txt -u https://target.com/FUZZ -b "sessionid=abc123; token=xyz789"

# Client certificate authentication
ffuf -w /path/to/wordlist.txt -u https://target.com/FUZZ -cc client.crt -ck client.key
```
Encoding
```bash
# URL encoding
ffuf -w /path/to/wordlist.txt -u https://target.com/FUZZ -enc 'FUZZ:urlencode'

# Multiple encodings
ffuf -w /path/to/wordlist.txt -u https://target.com/FUZZ -enc 'FUZZ:urlencode b64encode'
```
Testing for Vulnerabilities
```bash
# SQL injection testing
ffuf -w sqli_payloads.txt -u https://target.com/page.php?id=FUZZ -fs 1234

# XSS testing
ffuf -w xss_payloads.txt -u https://target.com/search?q=FUZZ -mr "<script>"

# Command injection
ffuf -w cmdi_payloads.txt -u https://target.com/execute?cmd=FUZZ -fr "error"
```
Batch Processing Multiple Targets
```bash
# Process multiple URLs
cat targets.txt | xargs -I@ sh -c 'ffuf -w wordlist.txt -u @/FUZZ -ac'

# Loop through multiple targets with results
for url in $(cat targets.txt); do
  ffuf -w wordlist.txt -u $url/FUZZ -ac -o "results_$(echo $url | md5sum | cut -d' ' -f1).json"
done
```
Imported: Common Patterns and One-Liners
Quick Directory Scan
```bash
ffuf -w ~/wordlists/common.txt -u https://target.com/FUZZ -mc 200,301,302,403 -ac -c -v
```
Comprehensive Scan with Extensions
```bash
ffuf -w ~/wordlists/raft-large-directories.txt -u https://target.com/FUZZ -e .php,.html,.txt,.bak,.old -ac -c -v -o results.json
```
Authenticated Fuzzing (Raw Request)
```bash
# 1. Save your authenticated request to req.txt with FUZZ keyword
# 2. Run:
ffuf --request req.txt -w ~/wordlists/api-endpoints.txt -ac -o results.json -of json
```
API Endpoint Discovery
```bash
ffuf -w ~/wordlists/api-endpoints.txt -u https://api.target.com/v1/FUZZ -H "Authorization: Bearer TOKEN" -mc 200,201 -ac -c
```
Subdomain Discovery with Auto-Calibration
```bash
ffuf -w ~/wordlists/subdomains-top5000.txt -u https://FUZZ.target.com -ac -c -v
```
POST Login Brute Force
```bash
ffuf -w ~/wordlists/passwords.txt -X POST -d "username=admin&password=FUZZ" -u https://target.com/login -fc 401 -rate 5 -ac
```
IDOR Testing with Auth
```bash
# Use req.txt with authenticated headers and FUZZ in the ID parameter
ffuf --request req.txt -w numbers.txt -ac -mc 200 -fw 100-200
```
Imported: Configuration File
Create ~/.config/ffuf/ffufrc for default settings:

```
[http]
headers = ["User-Agent: Mozilla/5.0"]
timeout = 10

[general]
colors = true
threads = 40

[matcher]
status = "200-299,301,302,307,401,403,405,500"
```
Imported: Notes for Claude
When helping users with ffuf:
- ALWAYS include `-ac` in every command. This is mandatory for productive pentesting and result analysis.
- When users mention authenticated fuzzing or provide auth tokens/cookies:
  - Suggest creating a req.txt file with the full HTTP request
  - Show them how to insert FUZZ where they want to fuzz
  - Use `ffuf --request req.txt -w wordlist.txt -ac`
- Always recommend starting with `-ac` for auto-calibration
- Suggest appropriate wordlists from SecLists based on the task
- Remind users to use rate limiting (`-rate`) for production targets
- Encourage saving output to files for documentation: `-o results.json`
- Suggest filtering strategies based on initial reconnaissance
- Always use the FUZZ keyword (case-sensitive)
- Consider stealth: lower threads, rate limiting, and delays for sensitive targets
- For pentesting reports, use `-of html` or `-of csv` for client-friendly formats
- When analyzing ffuf results for users:
  - Assume they used `-ac` (if not, results will be too noisy)
  - Focus on anomalies: different status codes, response sizes, timing
  - Look for interesting endpoints: admin, api, backup, config, .git, etc.
  - Flag potential vulnerabilities: error messages, stack traces, version info
  - Suggest follow-up fuzzing on interesting findings
Imported: Limitations
- Use this skill only when the task clearly matches the scope described above.
- Do not treat the output as a substitute for environment-specific validation, testing, or expert review.
- Stop and ask for clarification if required inputs, permissions, safety boundaries, or success criteria are missing.