pdf-ocr-layout

A multimodal deep document analysis tool based on Zhipu GLM-OCR, GLM-4.7, and GLM-4.6V.

install
source · Clone the upstream repo
git clone https://github.com/openclaw/skills
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/openclaw/skills "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/baokui/pdf-ocr-layout" ~/.claude/skills/openclaw-skills-pdf-ocr-layout && rm -rf "$T"
OpenClaw · Install into ~/.openclaw/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/openclaw/skills "$T" && mkdir -p ~/.openclaw/skills && cp -r "$T/skills/baokui/pdf-ocr-layout" ~/.openclaw/skills/openclaw-skills-pdf-ocr-layout && rm -rf "$T"
manifest: skills/baokui/pdf-ocr-layout/SKILL.md
source content

GLM-OCR Multimodal Deep Analysis

This tool builds a high-precision document parsing pipeline: GLM-OCR extracts layout elements, GLM-4.7 provides logical interpretation of table data, and GLM-4.6V provides multimodal visual interpretation of images and charts.

Pipeline Implementation Architecture

This Skill consists of two core script stages, orchestrated by glm_ocr_pipeline.py:

1. Extraction Stage (scripts/glm_ocr_extract.py)

  • Core Model: GLM-OCR
  • Function: Performs physical layout analysis of the document
  • Output: Extracts table HTML and cleans it to Markdown, crops standalone chart images based on their bbox coordinates (see the cropping sketch after this list), and writes an intermediate JSON containing the full-page reading order

2. Understanding Stage (scripts/glm_understanding.py)

  • Core Model: GLM-4.7 (text) / GLM-4.6V (visual)
  • Function: Performs deep semantic reasoning over the content
  • Logic:
    • Tables: uses GLM-4.7 with the full-text context to analyze the business meaning of the Markdown table data
    • Charts: uses GLM-4.6V with the full-text context and the cropped chart images for multimodal visual analysis
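
The chart cropping in the Extraction Stage can be illustrated with a short Pillow sketch (pillow is a listed dependency). This is a minimal sketch, not the skill's actual code; the bbox format [x1, y1, x2, y2] in pixels matches the coordinates shown in the result JSON below, and the file naming is illustrative.

# Minimal sketch: crop one layout element from the rendered page image by bbox.
# Assumes bbox = [x1, y1, x2, y2] in pixel coordinates (as in the result JSON below).
from pathlib import Path
from PIL import Image

def crop_element(page_image_path, bbox, output_dir, index):
    out = Path(output_dir) / "images"
    out.mkdir(parents=True, exist_ok=True)
    crop_path = out / f"{Path(page_image_path).stem}_img_{index}.png"
    with Image.open(page_image_path) as page:
        page.crop(tuple(bbox)).save(crop_path)  # (left, upper, right, lower)
    return str(crop_path)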

Invocation Methods

Command Line Invocation

# Run the complete pipeline (extraction -> cropping -> understanding); supports .pdf, .jpg, .png and other input formats
python scripts/glm_ocr_pipeline.py \
  --file_path "/data/report_page.jpg" \
  --output_dir "/data/output"

API Parameter Description

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| file_path | string | Yes | Absolute path to the input file (supports .pdf, .png, .jpg) |
| output_dir | string | Yes | Result output directory (used to save cropped images and JSON reports) |

Return Result Structure (JSON)

The tool returns a list containing layout elements and their deep understanding:

[
  {
    "type": "table",
    "bbox": [100, 200, 500, 600],
    "content_info": "| Revenue | Q1 |\n|---|---|\n| 100M | ... |",
    "deep_understanding": "(Generated by GLM-4.7) This table shows Q1 2024 revenue data. Combined with the 'market expansion strategy' mentioned in paragraph 3 of the body text, it can be seen that..."
  },
  {
    "type": "image",
    "bbox": [100, 700, 500, 900],
    "content_info": "/data/output/images/report_page_img_2.png",
    "deep_understanding": "(Generated by GLM-4.6V) This is a system architecture diagram. Visually, it shows the flow of clients connecting to servers through a Load Balancer. Combined with the title 'Fig 3' and context, this diagram is mainly used to illustrate..."
  }
]
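
A downstream consumer can iterate over this list and branch on the type field. A minimal sketch, assuming the pipeline writes the list to a report.json file inside output_dir (the exact file name is an assumption):

# Branch on element type when consuming the result list (report.json name is an assumption).
import json

with open("/data/output/report.json", encoding="utf-8") as f:
    elements = json.load(f)

for el in elements:
    if el["type"] == "table":
        print("Table (Markdown):", el["content_info"][:80])
    elif el["type"] == "image":
        print("Cropped image path:", el["content_info"])
    print("Analysis:", el["deep_understanding"])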

Environment Requirements

  • Environment variable ZHIPU_API_KEY must be configured
  • Python 3.8+
  • Dependencies: zhipuai, pillow, beautifulsoup4
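
To fail fast before running the pipeline, the key can be checked up front; a minimal sketch:

# Abort early if the required API key is missing.
import os, sys

if not os.environ.get("ZHIPU_API_KEY"):
    sys.exit("ZHIPU_API_KEY is not set; export it before running the pipeline.")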

Notes

1. Model Routing Strategy

  • Tables: the Markdown table content is passed to GLM-4.7 together with the full-text Markdown context for logical reasoning
  • Images: the cropped image is Base64-encoded and passed to GLM-4.6V together with OCR-extracted captions and the full-text context for multimodal understanding (sketched below)
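
The routing can be illustrated with the zhipuai SDK's chat-completions interface. This is a minimal sketch under assumptions, not the skill's actual code: the model identifier strings simply mirror the names used in this document, and the prompt wording and Base64 image handling are illustrative.

# Hedged sketch of the routing above; model names mirror this document and are assumptions.
import base64, os
from zhipuai import ZhipuAI

client = ZhipuAI(api_key=os.environ["ZHIPU_API_KEY"])

def analyze_table(table_md, context_md):
    # Tables: Markdown table plus full-text context goes to the text model.
    resp = client.chat.completions.create(
        model="glm-4.7",
        messages=[{"role": "user",
                   "content": f"Document context:\n{context_md}\n\nExplain this table:\n{table_md}"}],
    )
    return resp.choices[0].message.content

def analyze_chart(image_path, context_md):
    # Charts: the cropped image is Base64-encoded and sent to the vision model with the context.
    with open(image_path, "rb") as f:
        img_b64 = base64.b64encode(f.read()).decode()
    resp = client.chat.completions.create(
        model="glm-4.6v",
        messages=[{"role": "user", "content": [
            {"type": "image_url", "image_url": {"url": img_b64}},
            {"type": "text", "text": f"Document context:\n{context_md}\n\nDescribe this chart."},
        ]}],
    )
    return resp.choices[0].message.content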

2. Context Association

All understanding is grounded in the document's complete layout (the Markdown context) rather than in analysis of isolated fragments.

3. PDF Processing

Multi-page PDFs are processed first page only by default. To process every page, extend the loop logic at the script level, for example as sketched below.
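
One way to batch over pages is to split the PDF and invoke the pipeline once per page. The sketch below uses pypdf, which is not among the listed dependencies, so treat it as an assumption; paths are illustrative.

# Hedged sketch: split a multi-page PDF and run the pipeline per page (pypdf is an assumption).
import subprocess
from pathlib import Path
from pypdf import PdfReader, PdfWriter

src = Path("/data/report.pdf")
out_root = Path("/data/output")

reader = PdfReader(src)
for i, page in enumerate(reader.pages, start=1):
    writer = PdfWriter()
    writer.add_page(page)
    page_pdf = out_root / f"{src.stem}_page_{i}.pdf"
    page_pdf.parent.mkdir(parents=True, exist_ok=True)
    with open(page_pdf, "wb") as f:
        writer.write(f)
    subprocess.run(
        ["python", "scripts/glm_ocr_pipeline.py",
         "--file_path", str(page_pdf),
         "--output_dir", str(out_root / f"page_{i}")],
        check=True,
    )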