# Marketplace: routeros-container

RouterOS /container subsystem for running OCI containers on MikroTik devices. Use when: enabling containers on RouterOS, setting up VETH/bridge networking for containers, managing container lifecycle via CLI or REST API, building OCI images for RouterOS, configuring container environment variables, troubleshooting container issues, or when the user mentions RouterOS container, /container, VETH, device-mode container, or MikroTik Docker.

Install:

```shell
# Option 1: clone the whole marketplace repo
git clone https://github.com/aiskillstore/marketplace
# Option 2: install just this skill into ~/.claude/skills
T=$(mktemp -d) && git clone --depth=1 https://github.com/aiskillstore/marketplace "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/tikoci/routeros-container" ~/.claude/skills/aiskillstore-marketplace-routeros-container && rm -rf "$T"
```

`skills/tikoci/routeros-container/SKILL.md`

# RouterOS Container Subsystem
## Overview
RouterOS 7.x includes a container subsystem (`/container`) that runs OCI-compatible container images directly on MikroTik hardware. It is NOT Docker — it's MikroTik's own implementation with significant differences.
Requirements:
- RouterOS 7.x with the `container` extra package installed
- Device-mode must be enabled (requires physical access for initial setup)
- Sufficient storage (external USB disk recommended, 100+ MB/s, 10K+ random IOPS)
- ARM, ARM64, or x86 architecture (MIPS not supported for containers)
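The checklist above can be automated in the style of this page's later REST examples. A hedged sketch: `checkPrereqs` is my own helper, and the field names assume that `GET /rest/system/resource` and `GET /rest/system/package` mirror the CLI print output.

```javascript
// Sketch: evaluate container prerequisites from REST data.
// Inputs mirror GET /rest/system/resource and GET /rest/system/package
// responses (field names assumed from RouterOS REST conventions).
function checkPrereqs(resource, packages) {
  const problems = [];
  const arch = resource["architecture-name"] || "";
  // Containers need ARM, ARM64, or x86; MIPS is not supported
  if (!["arm", "arm64", "x86"].some(a => arch.startsWith(a))) {
    problems.push(`unsupported architecture: ${arch}`);
  }
  if (!resource.version?.startsWith("7.")) {
    problems.push(`RouterOS 7.x required, found ${resource.version}`);
  }
  // The 'container' extra package must be present
  if (!packages.some(p => p.name === "container")) {
    problems.push("container package not installed");
  }
  return problems; // empty array = ready for /container
}
```

An empty return means the device passes the static checks; storage speed and device-mode still need checking separately.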
## Device-Mode — Physical Access Required
Container support is gated behind device-mode, which requires physical confirmation (reset button press or power cycle) to enable:
```
# Enable container mode
/system/device-mode/update mode=advanced container=yes
# After executing: physically confirm within activation-timeout
# - Press reset button, OR
# - Power cycle the device
```
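Before scheduling a site visit, the current flag can be read remotely. A sketch, assuming `GET /rest/system/device-mode` mirrors the CLI path and returns string-typed values (both the endpoint and the value format are assumptions to verify on your version):

```javascript
// Sketch: check whether container support is enabled in device-mode.
// Expects the object returned by GET /rest/system/device-mode
// (endpoint assumed from the usual CLI-path-to-REST mapping).
function containerAllowed(deviceMode) {
  // REST returns strings, not booleans; container=yes is assumed to
  // appear as "yes" (some boolean-ish fields use "true")
  return deviceMode.container === "yes" || deviceMode.container === "true";
}
```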
Device-mode is a general RouterOS security feature — not container-specific. It gates many features (scheduler, fetch, sniffer, etc.) across four modes (`home`, `basic`, `advanced`, `rose`) with device-dependent factory defaults.
For the full feature matrix, modes, update properties, and physical confirmation details: see the Device-mode reference in the `routeros-fundamentals` skill.
Mode script bypass (7.22+): During netinstall, a mode script (`-sm`) can set device-mode on first boot, automatically triggering a reboot. See the `routeros-netinstall` skill.
## Installing the Container Package
```
# Check if container package is already installed
/system/package/print where name=container
```
### Method 1: Upload .npk file + apply-changes (offline)
```
# Upload via SCP (or Winbox drag-and-drop, or WebFig file upload)
scp container-7.22-arm64.npk admin@router:/

# Apply changes (triggers reboot AND activates — /system/reboot does NOT work!)
/system/package/apply-changes
```
⚠️ Critical: `/system/package/apply-changes` was added in RouterOS 7.18. On 7.18+, always use it — a plain `/system/reboot` discards uploaded packages. On versions <7.18, `/system/reboot` IS the correct (and only) method. (Lab-verified: 7.22.1 uses apply-changes, 7.10 requires reboot. Version check via rosetta command tree.)
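For scripts that target mixed fleets, the 7.18 boundary can be encoded once. A minimal sketch (the `activationCommand` helper is my own; version strings come from `/system/resource`):

```javascript
// Sketch: choose the package-activation command for a RouterOS version.
// Encodes the rule above: apply-changes on 7.18+, plain reboot before that.
function activationCommand(version) {
  const [major, minor] = version.split(".").map(n => parseInt(n, 10));
  if (major > 7 || (major === 7 && minor >= 18)) {
    return "/system/package/apply-changes";
  }
  return "/system/reboot"; // pre-7.18: reboot is the only method
}
```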
### Method 2: Online package update (requires internet)

```
/system/package/update check-for-updates
/system/package/update install
```
This downloads and installs all available updates including extra packages. To enable a specific package already uploaded but not active, use `/system/package/enable container` then `/system/package/apply-changes`.
## Networking Setup

### VETH (Virtual Ethernet)
Containers connect to RouterOS networking via VETH interfaces:
```
# Create VETH pair
/interface/veth/add name=veth-myapp address=172.17.0.2/24 gateway=172.17.0.1
# The VETH name IS the container's interface name (RouterOS 7.21+)
```
### Bridge Setup

```
# Create a bridge for containers
/interface/bridge/add name=containers
# Add VETH to the bridge
/interface/bridge/port/add bridge=containers interface=veth-myapp
# Assign IP to bridge (acts as gateway for containers)
/ip/address/add address=172.17.0.1/24 interface=containers
```
### NAT / Firewall

```
# Masquerade container traffic for internet access
/ip/firewall/nat/add chain=srcnat action=masquerade src-address=172.17.0.0/24
# Port forwarding from host to container
/ip/firewall/nat/add chain=dstnat action=dst-nat \
  dst-port=8080 protocol=tcp to-addresses=172.17.0.2 to-ports=80
# Allow container bridge in interface list (if firewall restricts)
/interface/list/member/add list=LAN interface=containers
```
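The same port-forward rule can be pushed over REST, in the style of the lifecycle examples later on this page. A sketch under two assumptions: that a `PUT` to the collection path creates a rule, and that the REST property names match the CLI ones:

```javascript
// Sketch: the dst-nat port-forward rule as a REST payload.
// Field names assumed to match the CLI properties above.
const natRule = {
  chain: "dstnat",
  action: "dst-nat",
  protocol: "tcp",
  "dst-port": "8080",
  "to-addresses": "172.17.0.2",
  "to-ports": "80",
};

// Assumption: PUT to the collection endpoint creates a new record.
async function addPortForward(base, auth) {
  const res = await fetch(`${base}/ip/firewall/nat`, {
    method: "PUT",
    ...auth,
    headers: { ...auth.headers, "Content-Type": "application/json" },
    body: JSON.stringify(natRule),
  });
  if (!res.ok) throw new Error(`NAT rule failed: ${res.status}`);
  return res.json();
}
```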
### Layer 2 Networking (Bridge Mode)
For containers that need to be on the same L2 network as physical interfaces (e.g., netinstall):
```
# Add both physical port and VETH to the same bridge
/interface/bridge/port/add bridge=mybridge interface=ether5
/interface/bridge/port/add bridge=mybridge interface=veth-netinstall
```
This gives the container direct L2 access to devices on ether5.
## Environment Variables and Mounts
From 7.21 onward there are two ways to attach env vars and mounts to a container:
### Inline (preferred for 7.21+)
Set `env=` and `mount=` directly on `/container/add` — keeps the container self-contained:
```
# Inline env vars and mount (7.21+)
/container/add remote-image=pihole/pihole:latest interface=veth1 \
  env="TZ=Europe/Riga,WEBPASSWORD=secret" \
  mount="src=disk1/pihole,dst=/etc/pihole" \
  root-dir=disk1/images/pihole logging=yes
```
This is also how `/app` YAML works under the hood — inline is the modern pattern and easier for automation (no separate linked objects to manage).
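For scripted deployments, the comma-separated `env=` and `mount=` strings can be assembled from plain objects. A small sketch (helper names are my own; escaping of commas or equals signs inside values is not handled):

```javascript
// Sketch: build the inline env= and mount= argument strings (7.21+ syntax).
function envString(vars) {
  // {TZ: "Europe/Riga", WEBPASSWORD: "secret"} -> "TZ=Europe/Riga,WEBPASSWORD=secret"
  // note: values containing ',' or '=' would need escaping; not handled here
  return Object.entries(vars).map(([k, v]) => `${k}=${v}`).join(",");
}

function mountString(src, dst) {
  // ("disk1/pihole", "/etc/pihole") -> "src=disk1/pihole,dst=/etc/pihole"
  return `src=${src},dst=${dst}`;
}
```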
### Named Lists (works across all versions)
Create env vars and mounts as separate objects, then reference by name:
```
# Create named env list (7.20+ — the 'list=' property groups envs together)
/container/envs/add list=MYAPP key=TZ value="Europe/Riga"
/container/envs/add list=MYAPP key=WEBPASSWORD value="secret"
# Create named mount
/container/mounts/add name=appdata src=disk1/appdata dst=/data
# Reference from container (7.20+ uses 'envlists=', pre-7.20 used 'envlist=')
/container/add file=myimage.tar interface=veth1 \
  envlists=MYAPP mountlists=appdata root-dir=disk1/myapp
```
Best practice: Always place container volumes on external disk (`disk1/`), never on internal flash storage.
## Property Name History
The naming of env/mount reference properties changed at version boundaries:
| Version | Env list grouping (`list=`) | Container env reference (`envlists=`) | Container mount reference (`mountlists=`) |
|---|---|---|---|
| Pre-7.20 | `key=`, `value=` only (no grouping property) | (no env reference property) | (not available) |
| 7.20 | `list=` added | `envlists=` (plural) added | (not available) |
| 7.21+ | `list=` | `envlists=` + inline `env=` | `mountlists=` + inline `mount=` |
Version note: Property names for 7.20+ are confirmed against `/console/inspect` command tree data. Pre-7.20, `/container/envs/add` had only `key` and `value` with no grouping mechanism; `/container/add` had no env reference property. Inline `env=` and `mount=` were added at 7.21.
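For tooling that spans versions, this history can be folded into a single lookup. A sketch that encodes the version boundaries described above (the helper name is my own):

```javascript
// Sketch: pick env/mount property names for a given RouterOS 7.x minor version.
function envMountProps(minor) {
  if (minor >= 21) {
    // named lists plus inline env=/mount=
    return { envRef: "envlists", mountRef: "mountlists", inline: true };
  }
  if (minor >= 20) {
    // envlists= exists, but no mount list reference and no inline syntax
    return { envRef: "envlists", mountRef: null, inline: false };
  }
  // pre-7.20: no grouping property and no mount list reference
  return { envRef: null, mountRef: null, inline: false };
}
```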
## Container Image Formats
RouterOS accepts container images in these formats:
### Option A: Pull from Registry

```
/container/config/set registry-url=https://registry-1.docker.io tmpdir=disk1/pull
/container/add remote-image=library/alpine:latest interface=veth-myapp
```
### Option B: Import Local Tar File
Upload a Docker v1 tar to the router, then:
```
/container/add file=myimage.tar interface=veth-myapp
```
### OCI Image Requirements for Local Import
RouterOS's container loader has specific requirements for local tar files:
- Single layer only — multi-layer images are not supported
- No gzip compression — layers must be uncompressed tar
- Docker v1 manifest format — `manifest.json` + `config.json` + `layer.tar`

```
myimage.tar
├── manifest.json   # [{"Config":"config.json","RepoTags":["name:tag"],"Layers":["layer.tar"]}]
├── config.json     # {"architecture":"arm64","os":"linux","config":{...},"rootfs":{...}}
└── layer.tar       # Uncompressed tar of the full filesystem
```
These constraints are the key difference from standard OCI images — most base images from public registries already meet requirements 1 and 2 when pulled via registry; local tar builds must satisfy all three.
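A structural pre-check of `manifest.json` can catch the common import failures before uploading. A sketch that encodes the three requirements (it inspects the parsed manifest only, not the tar bytes, so it cannot detect a gzipped layer hiding behind a `.tar` name):

```javascript
// Sketch: validate a parsed manifest.json against the three RouterOS
// local-import requirements (single layer, uncompressed, Docker v1 layout).
function validateManifest(manifest) {
  const errors = [];
  if (!Array.isArray(manifest) || manifest.length !== 1) {
    errors.push("manifest.json must contain exactly one image entry");
    return errors;
  }
  const [entry] = manifest;
  if (!entry.Config) errors.push("missing Config reference");
  if (!Array.isArray(entry.Layers) || entry.Layers.length !== 1) {
    errors.push("exactly one layer required (multi-layer not supported)");
  } else if (/\.(gz|tgz)$/.test(entry.Layers[0])) {
    errors.push("layer must be uncompressed tar, not gzip");
  }
  return errors; // empty = structurally OK for RouterOS import
}
```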
## Container Lifecycle

### CLI

```
# Create container (7.21+ inline syntax)
/container/add file=myimage.tar interface=veth-myapp \
  env="MY_VAR=hello" mount="src=disk1/appdata,dst=/data" \
  root-dir=disk1/myapp logging=yes

# Start
/container/start [find tag~"myapp"]

# Stop
/container/stop [find tag~"myapp"]

# View status
/container/print

# View logs (if logging=yes)
/log/print where topics~"container"

# Remove (must be fully stopped first)
/container/remove [find tag~"myapp"]
```
### REST API

```javascript
const base = "http://192.168.1.1/rest";
const auth = { headers: { Authorization: `Basic ${btoa("admin:")}` } };

// List containers
const containers = await fetch(`${base}/container`, auth).then(r => r.json());

// Start container by ID
await fetch(`${base}/container/start`, {
  method: "POST",
  ...auth,
  headers: { ...auth.headers, "Content-Type": "application/json" },
  body: JSON.stringify({ ".id": "*1" }),
});

// Check status — .running field is "true"/"false" (strings!)
const status = await fetch(`${base}/container/*1`, auth).then(r => r.json());
if (status.running === "true") { /* container is running */ }

// Stop container
await fetch(`${base}/container/stop`, {
  method: "POST",
  ...auth,
  body: JSON.stringify({ ".id": "*1" }),
});

// Delete — must be fully stopped. Poll .running and retry.
async function deleteContainer(id) {
  for (let i = 0; i < 5; i++) {
    const c = await fetch(`${base}/container/${id}`, auth).then(r => r.json());
    if (c.running === "false") {
      await fetch(`${base}/container/${id}`, { method: "DELETE", ...auth });
      return;
    }
    await new Promise(r => setTimeout(r, 3000));
  }
  throw new Error("Container did not stop in time");
}
```
### REST API Gotchas for Containers
- `.running` field is the status indicator — values are strings `"true"`/`"false"`, not booleans
- No `.stopped` field exists — only check `.running`
- Delete while stopping = HTTP 400 — must poll `.running` until `"false"` before DELETE
- `file=` for local tar, `remote-image=` for registry pull
- Container `envlists=` (plural, 7.20+) references the env list name — note the plural. Pre-7.20 used `envlist=` (singular). See env/mount version history above.
## Container Properties (from 7.22)

Selected properties from `/container/add`. This is not exhaustive — use rosetta MCP tools (`routeros_command_tree` at `/container/add`) for the full list on a specific version.
| Property | Description |
|---|---|
| `interface=` | VETH interface |
| `env=` | Inline environment variables (7.21+). Comma-separated pairs |
| `envlists=` | Named env list reference (7.20+). See env/mount section above |
| `mount=` | Inline volume mount (7.21+) |
| `mountlists=` | Named mount list reference (7.21+). See env/mount section above |
| `root-dir=` | Storage location for container filesystem |
| `file=` | Container tar file (local import) |
| `remote-image=` | Container image name (registry pull) |
| `cmd=` | Override container CMD |
| `entrypoint=` | Override container ENTRYPOINT |
| `hostname=` | Container hostname |
| `dns=` | DNS server for container |
| `logging=` | Enable container stdout/stderr to RouterOS log (yes/no) |
| `start-on-boot=` | Auto-start container on device boot (yes/no) |
| `workdir=` | Override working directory |
| `name=` | Container name |
| … | Pass through physical devices (7.20+) |
| … | CPU core affinity |
| … | RAM usage limit in bytes |
## Architecture Mapping
When pulling from registries or building images, map RouterOS architecture to Docker platform:
| RouterOS | Docker Platform |
|---|---|
| `arm` | `linux/arm/v7` |
| `arm64` | `linux/arm64` |
| `x86` | `linux/amd64` |
Query the router's architecture:
```javascript
const resource = await fetch(`${base}/system/resource`, auth).then(r => r.json());
const arch = resource["architecture-name"]; // "arm64", "arm", "x86"
```
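Turning that value into a Docker `--platform` string is a one-line lookup. A sketch: the `arm` to `linux/arm/v7` choice is an assumption that holds for ARMv7 devices and would need adjusting for older ARM cores:

```javascript
// Sketch: map RouterOS architecture-name to a Docker --platform value.
// The arm -> linux/arm/v7 row assumes an ARMv7 device.
function dockerPlatform(arch) {
  const map = { arm: "linux/arm/v7", arm64: "linux/arm64", x86: "linux/amd64" };
  const platform = map[arch];
  if (!platform) throw new Error(`unsupported architecture: ${arch}`);
  return platform;
}
```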
## /app System (7.21+/7.22+)

RouterOS 7.21 introduced the `/app` path (built-in app listing). Full YAML app creation (`/app/add`) was added in 7.22. See the `routeros-app-yaml` skill for the full YAML specification.
```
# List available apps
/app/print
# Add app from URL
/app/add yaml-url=https://example.com/myapp.tikapp.yaml
```
### /app vs Manual Container Setup

| Concern | Manual (this page) | /app YAML |
|---|---|---|
| Networking | Full control — any bridge/VETH/L2 topology | Docker-style: subnet with port forwarding (NAT) |
| L2 bridge access | Yes — add VETH + physical port to same bridge | Not directly — but can assign a bridge post-creation via … |
| Multi-container | Manual per-container setup | Declarative YAML, multiple services |
| Use case | Raw L2 access (netinstall, DHCP relay, etc.) | Standard app deployment with port forwarding |
Netinstall specifically requires L2 bridge access for BOOTP/TFTP, which is why the manual VETH+bridge approach is used rather than /app. For typical containers that only need port-forwarded TCP/UDP services, `/app` is simpler.
## Additional Resources
Related skills:
- For netinstall and device-mode automation: see the `routeros-netinstall` skill
- For the /app YAML format: see the `routeros-app-yaml` skill
- For general RouterOS fundamentals (CLI, REST, scripting): see the `routeros-fundamentals` skill
MCP tools:
- For RouterOS documentation and property lookups: use the `rosetta` MCP server tools (`routeros_search`, `routeros_get_page`, `routeros_search_properties`)
External docs:
- MikroTik official docs: https://help.mikrotik.com/docs/spaces/ROS/pages/84901929/Container