# agent-setup-survey
Detect and report the AI agent's execution environment (sandboxed or bare metal, optionally including installed agent skills) to an open research survey. Use when the user wants to contribute anonymous telemetry about their agent's runtime environment to the research project at `https://internetwarte.eu/agentsetup`.
Install:

```sh
T=$(mktemp -d) && git clone --depth=1 https://github.com/openclaw/skills "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/agent-deployments/agent-setup-survey" ~/.claude/skills/clawdbot-skills-agent-setup-survey && rm -rf "$T"
```
`skills/agent-deployments/agent-setup-survey/SKILL.md`

# Agent Environment Survey
## Overview
This skill runs a local detection script, shows the user the collected data, then submits it to the open research survey at internetwarte.eu/agentsetup. No personally identifying information is collected or transmitted.
## What is collected
| Field | Description |
|---|---|
| `os` | OS family (e.g. linux / windows / macOS) |
| `arch` | CPU architecture string, as reported by the survey script |
| `container.is_container` | Whether a container runtime was detected |
| `container.type` | Container runtime type, if detected |
| `vm.is_vm` | Whether a hypervisor was detected |
| `vm.type` | Hypervisor type, if detected |
| `signals` | Five boolean detection indicators (see Privacy section) |
The backend additionally records the submitting IP address, anonymized at the /24 network level. The User-Agent header is set by the agent to identify itself, including both the agent name (e.g. Claude Code, OpenAI Codex) and the model version (e.g. Sonnet 4.6, GPT-5.3) that submitted the survey.
## Steps
1. Run detection

```sh
mkdir -p outbox
RESULT_FILE="$(mktemp outbox/agent-setup-survey-result.XXXXXX)"
python3 scripts/environment-survey.py > "$RESULT_FILE"
```

Show the JSON output to the user and ask about the skill metadata (step 2) before submitting.
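For reference, a hypothetical sketch of the checks `scripts/environment-survey.py` performs, based on the signal names listed in this document; the real script ships with the skill and may differ:

```python
#!/usr/bin/env python3
# Hypothetical sketch of the environment detection: each signal maps to a
# cheap filesystem or /proc probe. Paths follow the Privacy section below.
import json
import os
import platform


def read_text(path):
    """Return file contents, or an empty string if the file is unreadable."""
    try:
        with open(path) as f:
            return f.read()
    except OSError:
        return ""


def collect():
    cgroup = read_text("/proc/self/cgroup")
    cpuinfo = read_text("/proc/cpuinfo")
    dmi = read_text("/sys/class/dmi/id/product_name").lower()
    signals = {
        "has_dockerenv": os.path.exists("/.dockerenv"),
        "has_containerenv": os.path.exists("/run/.containerenv"),
        "cgroup_hints": any(k in cgroup for k in ("docker", "kubepods", "lxc")),
        "cpuinfo_hypervisor_flag": "hypervisor" in cpuinfo,
        "dmi_vm_hints": any(k in dmi for k in ("vmware", "virtualbox", "kvm", "qemu")),
    }
    return {
        "os": platform.system().lower(),
        "arch": platform.machine(),
        "container": {
            "is_container": signals["has_dockerenv"]
            or signals["has_containerenv"]
            or signals["cgroup_hints"],
            "type": None,  # the real script may classify the runtime
        },
        "vm": {
            "is_vm": signals["cpuinfo_hypervisor_flag"] or signals["dmi_vm_hints"],
            "type": None,  # the real script may classify the hypervisor
        },
        "signals": signals,
    }


if __name__ == "__main__":
    print(json.dumps(collect(), indent=2))
```

Note that only booleans and coarse type strings leave the machine; raw file contents (cgroup paths, DMI strings) never appear in the output.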
2. Collect skill metadata

Ask the user whether they want to share more information about installed skills:

- "How many skills does your agent have available?" (`0`, `1-10`, `>10`; default `0`)
- "Which skill names would you like to share?" (present a comma-separated list of installed skills, excluding this survey skill, and let the user decide which ones to submit)
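The answers above can be assembled into the `skill_disclosure` block of the payload. A minimal sketch, assuming the agent can enumerate its installed skill names (the helper name is hypothetical):

```python
# Hypothetical helper: bucket the skill count and filter the shared names.
# The survey skill itself is always excluded, per step 2.
def build_skill_disclosure(installed, shared):
    candidates = [s for s in installed if s != "agent-setup-survey"]
    names = [s for s in shared if s in candidates]
    n = len(candidates)
    bucket = "0" if n == 0 else ("1-10" if n <= 10 else ">10")
    return {"skills_added": bucket, "skills_names": names}
```

Only names the user explicitly approved end up in `skills_names`; the count bucket is derived from all installed skills except this one.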
3. Submit

POST the payload, including the metadata extension, as JSON to `https://internetwarte.eu/submit`. Set the User-Agent header to the name of the executing agent (e.g. Claude Code, OpenAI Codex, Gemini CLI, ...).

Default payload:

```json
{
  "os": "linux/windows/macOS",
  "arch": "<from_survey>",
  "container": { "is_container": true/false, "type": "<from_survey>" },
  "vm": { "is_vm": true/false, "type": "<from_survey>" },
  "signals": {
    "has_dockerenv": true/false,
    "has_containerenv": true/false,
    "cgroup_hints": true/false,
    "cpuinfo_hypervisor_flag": true/false,
    "dmi_vm_hints": true/false
  },
  "skill_disclosure": {
    "skills_added": "1-10",
    "skills_names": ["<skill_name1>", "<skill_name2>"]
  }
}
```
On failure, save the payload to `outbox/` for manual upload.
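The submit-with-fallback step can be sketched with the standard library alone; the endpoint and header conventions follow the text above, while the function name and the default agent name are illustrative assumptions:

```python
# Hypothetical sketch of step 3: POST the payload as JSON; on any network
# failure, write it to outbox/ for manual upload later.
import json
import os
import time
import urllib.request


def submit(payload, agent_name="Claude Code", url="https://internetwarte.eu/submit"):
    body = json.dumps(payload).encode("utf-8")
    req = urllib.request.Request(
        url,
        data=body,
        headers={"Content-Type": "application/json", "User-Agent": agent_name},
        method="POST",
    )
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            return resp.status  # HTTP status on success
    except OSError:
        # URLError subclasses OSError: keep the payload for manual upload.
        os.makedirs("outbox", exist_ok=True)
        fallback = os.path.join("outbox", f"agent-setup-survey-{int(time.time())}.json")
        with open(fallback, "w") as f:
            json.dump(payload, f, indent=2)
        return None
```

A `None` return signals that the payload was parked in `outbox/` instead of being delivered.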
## Privacy disclosure

Signals collected:

- `has_dockerenv`: `/.dockerenv` file is present
- `has_containerenv`: `/run/.containerenv` file is present
- `cgroup_hints`: cgroup paths mention docker/kubepods/lxc/…
- `cpuinfo_hypervisor_flag`: `/proc/cpuinfo` contains the `hypervisor` flag
- `dmi_vm_hints`: DMI strings match VM vendor keywords (raw strings are NOT sent)
## View results
Dashboard: https://internetwarte.eu/agentsetup