DeepCamera yolo-detection-2026-openvino

OpenVINO — real-time object detection via Docker (NCS2, Intel GPU, CPU)

install
source · Clone the upstream repo
git clone https://github.com/SharpAI/DeepCamera
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/SharpAI/DeepCamera "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/detection/yolo-detection-2026-openvino" ~/.claude/skills/sharpai-deepcamera-yolo-detection-2026-openvino && rm -rf "$T"
manifest: skills/detection/yolo-detection-2026-openvino/SKILL.md
source content

OpenVINO Object Detection

Real-time object detection using Intel OpenVINO runtime. Runs inside Docker for cross-platform support. Supports Intel NCS2 USB stick, Intel integrated GPU, Intel Arc discrete GPU, and any x86_64 CPU.

Requirements

  • Docker Desktop 4.35+ (all platforms)
  • Optional hardware: Intel NCS2 USB, Intel iGPU, Intel Arc GPU
  • Falls back to CPU if no accelerator present

How It Works

┌─────────────────────────────────────────────────────┐
│ Host (Aegis-AI)                                     │
│   frame.jpg → /tmp/aegis_detection/                 │
│   stdin  ──→ ┌────────────────────────────────┐     │
│              │ Docker Container               │     │
│              │   detect.py                    │     │
│              │   ├─ loads OpenVINO IR model   │     │
│              │   ├─ reads frame from volume   │     │
│              │   └─ runs inference on device  │     │
│   stdout ←── │   → JSONL detections           │     │
│              └────────────────────────────────┘     │
│   USB ──→ /dev/bus/usb (NCS2)                       │
│   DRI ──→ /dev/dri (Intel GPU)                      │
└─────────────────────────────────────────────────────┘
  1. Aegis writes the camera frame JPEG to the shared volume /tmp/aegis_detection/
  2. Sends a frame event via stdin JSONL to the Docker container
  3. detect.py reads the frame and runs inference via OpenVINO
  4. Returns a detections event via stdout JSONL
  5. Same protocol as yolo-detection-2026 — Aegis sees no difference
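Step 2 above can be sketched from the host side. A minimal sketch: the frame-event field names here are assumptions (this document only specifies the stdout events verbatim), so treat the schema as illustrative.

```python
import json
import sys

# Sketch of step 2: one JSONL line per frame on the container's stdin.
# The frame-event field names are assumptions; only the stdout events
# (ready, detections, perf_stats) are specified verbatim in this document.
def frame_event(frame_id: int, camera_id: str, path: str) -> str:
    return json.dumps({"event": "frame", "frame_id": frame_id,
                       "camera_id": camera_id, "path": path})

line = frame_event(42, "front_door", "/tmp/aegis_detection/frame_42.jpg")
sys.stdout.write(line + "\n")  # in Aegis, this goes to the container's stdin
```

One line in, one line out keeps the container stateless between frames, which is what lets this skill swap in for yolo-detection-2026 unchanged.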

Platform Setup

Linux

# Intel GPU and NCS2 auto-detected via /dev/dri and /dev/bus/usb
# Docker uses --device flags for direct device access
./deploy.sh

macOS (Docker Desktop 4.35+)

# Docker Desktop USB/IP handles NCS2 passthrough
# CPU fallback always available
./deploy.sh

Windows

# Docker Desktop 4.35+ with USB/IP support
# Or WSL2 backend with usbipd-win for NCS2
.\deploy.bat

Model

Ships without a pre-compiled model by default. On first run, detect.py will auto-download yolo26n.pt and export it to OpenVINO IR format. To pre-export:

# Runs on any platform (unlike Edge TPU compilation)
python scripts/compile_model.py --model yolo26n --size 640 --precision FP16

Supported Devices

Device       Flag     Precision   ~Speed
Intel NCS2   MYRIAD   FP16        ~15ms
Intel iGPU   GPU      FP16/INT8   ~8ms
Intel Arc    GPU      FP16/INT8   ~4ms
Any CPU      CPU      FP32/INT8   ~25ms
Auto         AUTO     Best        Auto
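The AUTO row amounts to a preference order over whatever devices enumerate. A hypothetical sketch of that choice (the function and ranking list are illustrative; OpenVINO's real AUTO plugin performs its own selection internally):

```python
# Hypothetical device picker mirroring the table's speed ranking.
# OpenVINO's real AUTO plugin does its own selection; this is a sketch.
PREFERENCE = ["GPU", "MYRIAD", "CPU"]  # Arc and iGPU both enumerate as GPU

def pick_device(available: list[str]) -> str:
    """Return the fastest available device flag, falling back to CPU."""
    for dev in PREFERENCE:
        if any(name.startswith(dev) for name in available):
            return dev
    return "CPU"

print(pick_device(["CPU", "GPU.0"]))  # → GPU
```

Devices can enumerate with indices (e.g. GPU.0, GPU.1), hence the prefix match rather than equality.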

Protocol

Same JSONL as yolo-detection-2026:

Skill → Aegis (stdout)

{"event": "ready", "model": "yolo26n_openvino", "device": "GPU", "format": "openvino_ir", "classes": 80}
{"event": "detections", "frame_id": 42, "camera_id": "front_door", "objects": [{"class": "person", "confidence": 0.85, "bbox": [100, 50, 300, 400]}]}
{"event": "perf_stats", "total_frames": 50, "timings_ms": {"inference": {"avg": 8.1, "p50": 7.9, "p95": 10.2}}}

Bounding Box Format

[x_min, y_min, x_max, y_max] — pixel coordinates (xyxy).
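Two small helpers for working with this format (hypothetical names, not part of detect.py):

```python
# Helpers for the [x_min, y_min, x_max, y_max] pixel format
# (hypothetical names; not part of the skill itself).
def bbox_area(box):
    x1, y1, x2, y2 = box
    return max(0, x2 - x1) * max(0, y2 - y1)

def iou(a, b):
    """Intersection-over-union of two xyxy boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = bbox_area((ix1, iy1, ix2, iy2))
    union = bbox_area(a) + bbox_area(b) - inter
    return inter / union if union else 0.0

print(iou([100, 50, 300, 400], [100, 50, 300, 400]))  # → 1.0
```

Clamping the intersection width and height to zero in bbox_area handles disjoint boxes without a special case.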

Installation

./deploy.sh

The deployer builds the Docker image locally, probes for OpenVINO devices, and sets the runtime command accordingly. No packages are pulled from external registries beyond the Docker base image and pip dependencies.
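A rough sketch of the command such a deployer might assemble. The image name, mount point, and detect.py flags here are assumptions; the real logic lives in deploy.sh:

```python
import os

# Hypothetical assembly of the container run command; image name and
# detect.py flags are assumptions, not the real deployer's output.
def docker_cmd(image: str = "sharpai/openvino-detect") -> list[str]:
    """Build a docker run command exposing whatever devices are present."""
    cmd = ["docker", "run", "--rm", "-i",
           "-v", "/tmp/aegis_detection:/tmp/aegis_detection"]
    if os.path.isdir("/dev/dri"):        # Intel iGPU / Arc render nodes
        cmd += ["--device", "/dev/dri"]
    if os.path.isdir("/dev/bus/usb"):    # NCS2 over USB
        cmd += ["--device", "/dev/bus/usb"]
    return cmd + [image, "python", "detect.py", "--device", "AUTO"]

print(" ".join(docker_cmd()))
```

Probing for the device nodes on the host, rather than inside the container, is what lets the same image fall back to CPU on machines without an accelerator.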