# Kurtosis k8s-clean-cluster

Force-clean all Kurtosis resources from a Kubernetes cluster when `kurtosis clean` hangs or fails. Removes all Kurtosis namespaces, pods, daemonsets, cluster roles, and cluster role bindings. Use when `kurtosis clean -a` hangs or leaves behind orphaned resources.
## Install

From source, clone the upstream repo:

```shell
git clone https://github.com/kurtosis-tech/kurtosis
```

For Claude Code, install into `~/.claude/skills/`:

```shell
T=$(mktemp -d) && git clone --depth=1 https://github.com/kurtosis-tech/kurtosis "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/k8s-clean-cluster" ~/.claude/skills/kurtosis-tech-kurtosis-k8s-clean-cluster && rm -rf "$T"
```

Manifest: `skills/k8s-clean-cluster/SKILL.md`
## K8s Clean Cluster

Force-clean all Kurtosis resources from a Kubernetes cluster when the normal `kurtosis clean -a` command hangs or fails.
## When to use

- `kurtosis clean -a` hangs for more than a few minutes
- Orphaned kurtosis namespaces remain after a failed clean
- `remove-dir-pod-*` pods are stuck in Pending state
- Engine start fails because old resources exist
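Before force-cleaning, it can help to confirm what is actually left behind. A minimal diagnostic sketch (the `kurtosis_leftovers` name and the `KUBECTL` override are illustrative conveniences, not part of the skill):

```shell
# Overridable kubectl path, so the check can be pointed at a wrapper or stub.
KUBECTL=${KUBECTL:-kubectl}

# Print every namespace, pod, and daemonset that still mentions kurtosis.
kurtosis_leftovers() {
  {
    "$KUBECTL" get ns --no-headers 2>/dev/null
    "$KUBECTL" get pods -A --no-headers 2>/dev/null
    "$KUBECTL" get ds -A --no-headers 2>/dev/null
  } | grep kurtosis
}
```

Non-empty output means the manual steps below apply.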
## Steps

1. Kill any running kurtosis processes

```shell
pkill -f "kurtosis gateway" 2>/dev/null
pkill -f "kurtosis clean" 2>/dev/null
```
2. Stop the engine gracefully (if possible)

```shell
kurtosis engine stop || true
```
3. Delete all kurtosis namespaces

```shell
# List them first
kubectl get ns | grep kurtosis

# Delete all kurtosis namespaces (engine, enclaves, logs)
kubectl get ns | grep kurtosis | awk '{print $1}' | xargs -r kubectl delete ns --force --grace-period=0
```
4. Clean up cluster-scoped resources

```shell
# Delete kurtosis cluster roles
kubectl get clusterrole | grep kurtosis | awk '{print $1}' | xargs -r kubectl delete clusterrole

# Delete kurtosis cluster role bindings
kubectl get clusterrolebinding | grep kurtosis | awk '{print $1}' | xargs -r kubectl delete clusterrolebinding
```
5. Clean up stuck pods

```shell
# Force-delete any remaining kurtosis pods
kubectl get pods -A | grep kurtosis | awk '{print $2 " -n " $1}' | xargs -L1 kubectl delete pod --force --grace-period=0

# Clean up evicted pods
kubectl get pods -A | grep Evicted | awk '{print $2 " -n " $1}' | xargs -L1 kubectl delete pod --force --grace-period=0
```
6. Verify clean state

```shell
kubectl get ns | grep kurtosis
kubectl get pods -A | grep kurtosis
kubectl get ds -A | grep kurtosis
```

All three commands should return empty results.
7. Restart

```shell
kurtosis engine start
kurtosis gateway &
```
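Taken together, steps 1–5 can be sketched as a single best-effort function. This is an illustrative sketch, not the skill's own tooling; the `KUBECTL`/`KURTOSIS` overrides and the `force_clean_kurtosis` name are assumptions added so the flow can be dry-run against stubs:

```shell
KUBECTL=${KUBECTL:-kubectl}      # overridable for dry runs / testing
KURTOSIS=${KURTOSIS:-kurtosis}

force_clean_kurtosis() {
  # Steps 1-2: kill stray CLI processes, then try a graceful engine stop.
  pkill -f "kurtosis gateway" 2>/dev/null
  pkill -f "kurtosis clean" 2>/dev/null
  "$KURTOSIS" engine stop || true

  # Step 3: delete all kurtosis namespaces (engine, enclaves, logs).
  "$KUBECTL" get ns --no-headers 2>/dev/null | awk '/kurtosis/ {print $1}' \
    | xargs -r "$KUBECTL" delete ns --force --grace-period=0

  # Step 4: delete kurtosis cluster roles and bindings.
  for kind in clusterrole clusterrolebinding; do
    "$KUBECTL" get "$kind" --no-headers 2>/dev/null | awk '/kurtosis/ {print $1}' \
      | xargs -r "$KUBECTL" delete "$kind"
  done

  # Step 5: force-delete remaining kurtosis pods and any evicted pods.
  "$KUBECTL" get pods -A --no-headers 2>/dev/null \
    | awk '/kurtosis|Evicted/ {print $2 " -n " $1}' \
    | xargs -r -L1 "$KUBECTL" delete pod --force --grace-period=0
}
```

After running it, re-run the verification commands from step 6 before restarting the engine.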
## Why clean hangs

The most common cause is the fluentbit logs collector `Clean` method, which:

- Evicts all DaemonSet pods by adding a non-existent node selector
- Waits for each pod to terminate (up to 5 min per pod, sequentially)
- Creates `remove-dir-pod` cleanup pods targeted at each node
- Cleanup pods on tainted/unhealthy nodes get stuck in Pending
The fix in the codebase makes these operations best-effort with timeouts and detects unschedulable pods early; if you are running an unfixed version, manual cleanup is needed.
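When running an unfixed version, one practical mitigation is to bound the clean attempt yourself and fall back to the manual steps when the bound is hit. A sketch using coreutils `timeout` (the wrapper name, the `KURTOSIS` override, and the 300 s default are assumptions, not upstream behavior):

```shell
KURTOSIS=${KURTOSIS:-kurtosis}   # overridable for testing

# Run `kurtosis clean -a` with a hard time limit instead of letting it
# hang on stuck fluentbit/remove-dir pods.
clean_with_timeout() {
  limit=${1:-300}   # seconds; roughly one per-pod eviction wait
  if timeout "$limit" "$KURTOSIS" clean -a; then
    echo "clean finished"
  else
    echo "clean timed out or failed after ${limit}s; fall back to manual cleanup" >&2
    return 1
  fi
}
```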