llava-vision
Install
source · Clone the upstream repo
git clone https://github.com/openclaw/skills
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/openclaw/skills "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/447992399/llava-vision" ~/.claude/skills/openclaw-skills-llava-vision && rm -rf "$T"
OpenClaw · Install into ~/.openclaw/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/openclaw/skills "$T" && mkdir -p ~/.openclaw/skills && cp -r "$T/skills/447992399/llava-vision" ~/.openclaw/skills/openclaw-skills-llava-vision && rm -rf "$T"
Manifest
skills/447992399/llava-vision/SKILL.md
LLaVA Vision Skill
This skill forwards an image to a locally running llama.cpp server that hosts a LLaVA model and returns the model’s text description of the image. It accepts either a local file path or a remote image URL.
Usage
clawhub llava-vision --image /path/to/photo.jpg
# or
clawhub llava-vision --image https://example.com/photo.jpg
The skill uses the built-in vision_analyze tool, which expects an image file path. If the image cannot be read or the server is unreachable, the skill returns an error message.
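Because vision_analyze takes a file path while the CLI also accepts remote URLs, the skill presumably has to distinguish the two before invoking the tool. A minimal sketch of that check, with a hypothetical helper name not taken from the skill's source:

```javascript
// Hypothetical helper: decide whether the --image argument is a
// remote URL or a local file path.
function isRemoteUrl(image) {
  try {
    const { protocol } = new URL(image);
    return protocol === "http:" || protocol === "https:";
  } catch {
    // new URL() throws on plain paths like ./cat.png or /tmp/a.jpg
    return false;
  }
}
```

A remote image would then be downloaded to a temporary file first, since vision_analyze expects a path on disk.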
Dependencies
- Node.js (the skill itself)
- A local llama.cpp server with the LLaVA model exposed at the default endpoint.
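If "the default endpoint" refers to llama-server's OpenAI-compatible API (port 8080 by default), the request the skill sends might look like the sketch below. The endpoint, payload shape, and function name are assumptions for illustration, not taken from the skill's source.

```javascript
// Sketch of an OpenAI-style chat request to a local llama.cpp server.
// Image content is passed inline as a base64 data URI.
function buildVisionRequest(imageBase64, mimeType = "image/png") {
  return {
    messages: [
      {
        role: "user",
        content: [
          { type: "text", text: "Describe this image." },
          {
            type: "image_url",
            image_url: { url: `data:${mimeType};base64,${imageBase64}` },
          },
        ],
      },
    ],
  };
}

// Usage (assumes llama-server was started with a LLaVA model and its
// matching --mmproj projector file):
// const fs = require("node:fs");
// const body = buildVisionRequest(fs.readFileSync("cat.png").toString("base64"));
// fetch("http://localhost:8080/v1/chat/completions", {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body: JSON.stringify(body),
// });
```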
Example
$ clawhub run llava-vision --image ./cat.png
The image contains a cat sitting on a windowsill, looking out at a sunny garden.