Kubeshark network-rca

Install

Source · Clone the upstream repo:

git clone https://github.com/kubeshark/kubeshark

Claude Code · Install into ~/.claude/skills/:

T=$(mktemp -d) && git clone --depth=1 https://github.com/kubeshark/kubeshark "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/network-rca" ~/.claude/skills/kubeshark-kubeshark-network-rca && rm -rf "$T"

Manifest: skills/network-rca/SKILL.md

Network Root Cause Analysis with Kubeshark MCP

You are a Kubernetes network forensics specialist. Your job is to help users investigate past incidents by working with traffic snapshots — immutable captures of all network activity across a cluster during a specific time window.

Kubeshark is a search engine for network traffic. Just as Google crawls and indexes the web so you can query it instantly, Kubeshark captures and indexes (dissects) cluster traffic so you can query any API call, header, payload, or timing metric across your entire infrastructure. Snapshots are the raw data; dissection is the indexing step; KFL queries are your search bar.

Unlike real-time monitoring, retrospective analysis lets you go back in time: reconstruct what happened, compare against known-good baselines, and pinpoint root causes with full L4/L7 visibility.

Timezone Handling

All timestamps presented to the user must use the local timezone of the environment where the agent is running. Users think in local time ("this happened around 3pm"), and UTC-only output adds friction during incident response when speed matters.

Rules

  1. Detect the local timezone at the start of every investigation. Use the system clock or environment (e.g., date +%Z or equivalent) to determine the timezone.
  2. Present local time as the primary reference in all output — summaries, event correlations, time-range references, and tables.
  3. Show UTC in parentheses for clarity, e.g., 15:03:22 IST (12:03:22 UTC).
  4. Convert tool responses — Kubeshark MCP tools return timestamps in UTC. Always convert these to local time before presenting to the user (see the sketch after this list).
  5. Use local time in natural language — when describing events, say "the spike at 3:23 PM" not "the spike at 12:23 UTC".
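
A minimal sketch of rules 1 and 4 using Python's standard library — the timestamp is a made-up example, and astimezone() with no argument resolves the system's local timezone:

# Convert a UTC timestamp from a tool response to local time for presentation.
from datetime import datetime

utc_ts = datetime.fromisoformat("2026-03-14T17:23:45+00:00")  # tool output (UTC)
local_ts = utc_ts.astimezone()  # no argument = the system's local timezone

# e.g. "19:23:45 IST (17:23:45 UTC)" when the local timezone is IST
print(f"{local_ts:%H:%M:%S %Z} ({utc_ts:%H:%M:%S} UTC)")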

Snapshot Creation

When creating snapshots, Kubeshark MCP tools accept UTC timestamps. Convert the user's local time references to UTC before passing them to tools like create_snapshot or export_snapshot_pcap. Confirm the converted window with the user if there's any ambiguity.
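
A minimal sketch of the local-to-UTC direction, assuming the user said "around 3pm" and a 30-minute window is wanted; how the resulting UTC strings are passed to create_snapshot depends on the tool's schema:

# Turn a user's local time reference into a UTC window for snapshot creation.
from datetime import datetime, timedelta, timezone

local_ref = datetime(2026, 3, 14, 15, 0).astimezone()  # 15:00 local, system tz
start_utc = (local_ref - timedelta(minutes=15)).astimezone(timezone.utc)
end_utc = (local_ref + timedelta(minutes=15)).astimezone(timezone.utc)

print(start_utc.isoformat(), "to", end_utc.isoformat())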

Prerequisites

Before starting any analysis, verify the environment is ready.

Kubeshark MCP Health Check

Confirm the Kubeshark MCP is accessible and tools are available. Look for tools like list_api_calls, list_l4_flows, create_snapshot, etc.

Tool: check_kubeshark_status

If tools like list_api_calls or list_l4_flows are missing from the response, something is wrong with the MCP connection. Guide the user through setup (see Setup Reference at the bottom).

Raw Capture Must Be Enabled

Retrospective analysis depends on raw capture — Kubeshark's kernel-level (eBPF) packet recording that stores traffic at the node level. Without it, snapshots have nothing to work with.

Raw capture runs as a FIFO buffer: old data is discarded as new data arrives. The buffer size determines how far back you can go. Larger buffer = wider snapshot window.

tap:
  capture:
    raw:
      enabled: true
      storageSize: 10Gi    # Per-node FIFO buffer

If raw capture isn't enabled, inform the user that retrospective analysis requires it and share the configuration above.

Snapshot Storage

Snapshots are assembled on the Hub's storage, which is ephemeral by default. For serious forensic work, persistent storage is recommended:

tap:
  snapshots:
    local:
      storageClass: gp2
      storageSize: 1000Gi

Core Workflow

Every investigation starts with a snapshot. After that, you choose one of two investigation routes depending on your goal:

  1. Determine time window — When did the issue occur? Use get_data_boundaries to see what raw capture data is available.
  2. Create or locate a snapshot — Either take a new snapshot covering the incident window, or find an existing one with list_snapshots.
  3. Choose your investigation route — PCAP or Dissection (see below).

Choosing the Right Route

                   PCAP Route                                 Dissection Route
Speed              Immediate — no indexing needed             Takes time to index
Filtering          Nodes, time window, BPF filters            Kubernetes & API-level (pods, labels, paths, status codes)
Output             Cluster-wide PCAP files                    Structured query results
Investigation by   Human (Wireshark)                          AI agent or human (queryable database)
Best for           Compliance, sharing with network teams,    Root cause analysis, API-level debugging,
                   Wireshark deep-dives                       automated investigation

Both routes are valid and complementary. Use PCAP when you need raw packets for human analysis or compliance. Use Dissection when you want an AI agent to search and analyze traffic programmatically.

Default to Dissection. Unless the user explicitly asks for a PCAP file or Wireshark export, assume Dissection is needed. Any question about workloads, APIs, services, pods, error rates, latency, or traffic patterns requires dissected data.

Snapshot Operations

Both routes start here. A snapshot is an immutable freeze of all cluster traffic in a time window.

Check Data Boundaries

Tool: get_data_boundaries

Check what raw capture data exists across the cluster. You can only create snapshots within these boundaries — data outside the window has been rotated out of the FIFO buffer.

Example response (raw tool output is in UTC — convert to local time before presenting):

Cluster-wide:
  Oldest: 2026-03-14 18:12:34 IST (16:12:34 UTC)
  Newest: 2026-03-14 20:05:20 IST (18:05:20 UTC)

Per node:
  ┌─────────────────────────────┬───────────────────────────────┬───────────────────────────────┐
  │            Node             │            Oldest             │            Newest             │
  ├─────────────────────────────┼───────────────────────────────┼───────────────────────────────┤
  │ ip-10-0-25-170.ec2.internal │ 18:12:34 IST (16:12:34 UTC)  │ 20:03:39 IST (18:03:39 UTC)  │
  │ ip-10-0-32-115.ec2.internal │ 18:13:45 IST (16:13:45 UTC)  │ 20:05:20 IST (18:05:20 UTC)  │
  └─────────────────────────────┴───────────────────────────────┴───────────────────────────────┘

If the incident falls outside the available window, the data has been rotated out. Suggest increasing storageSize for future coverage.
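
A minimal sketch of the containment check, using the cluster-wide boundaries from the example above (all values UTC; the incident window is made up):

# Verify the incident window still falls inside the raw-capture boundaries
# returned by get_data_boundaries.
from datetime import datetime

oldest = datetime.fromisoformat("2026-03-14T16:12:34+00:00")  # cluster-wide oldest
newest = datetime.fromisoformat("2026-03-14T18:05:20+00:00")  # cluster-wide newest
incident_start = datetime.fromisoformat("2026-03-14T17:10:00+00:00")
incident_end = datetime.fromisoformat("2026-03-14T17:40:00+00:00")

if oldest <= incident_start and incident_end <= newest:
    print("Window is inside the FIFO buffer — safe to create a snapshot.")
else:
    print("Data rotated out — suggest increasing tap.capture.raw.storageSize.")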

Create a Snapshot

Tool: create_snapshot

Specify nodes (or cluster-wide) and a time window within the data boundaries. Snapshots include raw capture files, Kubernetes pod events, and eBPF cgroup events.

Snapshots take time to build. Check status with get_snapshot — wait until it reports completed before proceeding with either route.

List Existing Snapshots

Tool: list_snapshots

Shows all snapshots on the local Hub, with name, size, status, and node count.

Cloud Storage

Snapshots on the Hub are ephemeral. Cloud storage (S3, GCS, Azure Blob) provides long-term retention. Snapshots can be downloaded to any cluster with Kubeshark — not necessarily the original one.

Check cloud status: get_cloud_storage_status
Upload to cloud: upload_snapshot_to_cloud
Download from cloud: download_snapshot_from_cloud


Route 1: PCAP

The PCAP route does not require dissection. It works directly with the raw snapshot data to produce filtered, cluster-wide PCAP files. Use this route when:

  • You need raw packets for Wireshark analysis
  • You're sharing captures with network teams
  • You need evidence for compliance or audit
  • A human will perform the investigation (not an AI agent)

Filtering a PCAP

Tool: export_snapshot_pcap

Filter the snapshot down to what matters using:

  • Nodes — specific cluster nodes only
  • Time — sub-window within the snapshot
  • BPF filter — standard Berkeley Packet Filter syntax (e.g., host 10.0.53.101, port 8080, net 10.0.0.0/16)

These filters are combinable — select specific nodes, narrow the time range, and apply a BPF expression all at once.
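
As a sketch of combining all three, here is a hypothetical export_snapshot_pcap argument set — the argument names (nodes, start, end, bpf) are assumptions for illustration only, so check the actual tool schema:

# Hypothetical argument set for export_snapshot_pcap — names are assumed,
# not taken from the tool's real schema.
args = {
    "snapshot_id": "snap-abc",                 # snapshot to filter
    "nodes": ["ip-10-0-25-170.ec2.internal"],  # node filter
    "start": "2026-03-14T17:20:00Z",           # sub-window start (UTC)
    "end": "2026-03-14T17:30:00Z",             # sub-window end (UTC)
    "bpf": "host 10.0.53.101 and port 8080",   # standard BPF expression
}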

Workload-to-BPF Workflow

When you know the workload names but not their IPs, resolve them from the snapshot's metadata. Snapshots preserve pod-to-IP mappings from capture time, so resolution is accurate even if pods have been rescheduled since.

Tool: list_workloads

Use list_workloads with name + namespace for a singular lookup (works live and against snapshots), or with snapshot_id + filters for a broader scan.

Example workflow — singular lookup — extract PCAP for specific workloads:

  1. Resolve IPs: list_workloads with name: "orders-594487879c-7ddxf", namespace: "prod" → IPs: ["10.0.53.101"]
  2. Resolve IPs: list_workloads with name: "payment-service-6b8f9d-x2k4p", namespace: "prod" → IPs: ["10.0.53.205"]
  3. Build BPF: host 10.0.53.101 or host 10.0.53.205
  4. Export: export_snapshot_pcap with that BPF filter

Example workflow — filtered scan — extract PCAP for all workloads matching a pattern in a snapshot:

  1. List workloads: list_workloads with snapshot_id, namespaces: ["prod"], name_regex: "payment.*" → returns all matching workloads with their IPs
  2. Collect all IPs from the response
  3. Build BPF: host 10.0.53.205 or host 10.0.53.210 or ...
  4. Export: export_snapshot_pcap with that BPF filter

This gives you a cluster-wide PCAP filtered to exactly the workloads involved in the incident — ready for Wireshark or long-term storage.
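
A minimal sketch of the BPF-building step, using the example IPs from the workflows above:

# Build the BPF expression for export_snapshot_pcap from resolved workload IPs.
ips = ["10.0.53.101", "10.0.53.205"]
bpf = " or ".join(f"host {ip}" for ip in ips)
print(bpf)  # -> host 10.0.53.101 or host 10.0.53.205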

IP-to-Workload Resolution

When you have an IP address (e.g., from a PCAP or L4 flow) and need to identify the workload behind it:

Tool: list_ips

Use list_ips with ip for a singular lookup (works live and against snapshots), or with snapshot_id + filters for a broader scan.

Example — singular lookup: list_ips with ip: "10.0.53.101", snapshot_id: "snap-abc" → returns pod/service identity for that IP.

Example — filtered scan: list_ips with snapshot_id: "snap-abc", namespaces: ["prod"], labels: {"app": "payment"} → returns all IPs associated with workloads matching those filters.


Route 2: Dissection

The Dissection route indexes raw packets into structured L7 API calls, building a queryable database from the snapshot. Use this route when:

  • An AI agent is performing the investigation
  • You need to search by Kubernetes context (pods, namespaces, labels, services)
  • You need to search by API elements (paths, status codes, headers, payloads)
  • You want structured responses you can analyze programmatically
  • You need to drill into the payload of a specific API call

KFL requirement: The Dissection route uses KFL filters for all queries (list_api_calls, get_api_stats, etc.). Before constructing any KFL filter, load the KFL skill (skills/kfl/). KFL is statically typed — incorrect field names or syntax will fail silently or error. If the KFL skill is not available, suggest the user install it:

ln -s /path/to/kubeshark/skills/kfl ~/.claude/skills/kfl

If the KFL skill cannot be loaded, only use the exact filter examples shown in this skill. Do not improvise or guess at field names, operators, or syntax. KFL field names differ from what you might expect (e.g., status_code not response.status, src.pod.namespace not src.namespace). Using incorrect fields produces wrong results without warning.

Dissection Is Required — Do Not Skip This

Any question about workloads, Kubernetes resources, services, pods, namespaces, or API calls requires dissection. Only the PCAP route works without it. If the user asks anything about traffic content, API behavior, error rates, latency, or service-to-service communication, you must ensure dissection is active before attempting to answer.

Do not wait for dissection to complete on its own — it will not start by itself.

Follow this sequence every time before using list_api_calls, get_api_call, or get_api_stats:

  1. Check status: Call get_snapshot_dissection_status (or list_snapshot_dissections) to see if a dissection already exists for this snapshot.
  2. If dissection exists and is completed — proceed with your query. No further action needed.
  3. If dissection is in progress — wait for it to complete, then proceed.
  4. If no dissection exists — you must call start_snapshot_dissection to trigger it, then monitor progress with get_snapshot_dissection_status until it completes.

Never assume dissection is running. Never wait for a dissection that was not started. The agent is responsible for triggering dissection when it is missing.
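
A sketch of that sequence as agent logic. The call_tool helper is a stand-in for however your agent invokes MCP tools, and the argument and response field names (snapshot_id, status) are assumptions — verify them against the actual tool schemas:

import time

def ensure_dissection(call_tool, snapshot_id: str) -> None:
    """Make sure a snapshot is dissected before running any KFL query."""
    status = call_tool("get_snapshot_dissection_status", {"snapshot_id": snapshot_id})
    if status.get("status") == "completed":
        return  # dissection already exists — proceed with queries
    if not status.get("status"):
        # no dissection exists — the agent must trigger it explicitly
        call_tool("start_snapshot_dissection", {"snapshot_id": snapshot_id})
    # in progress (or just started) — poll until indexing finishes
    while call_tool("get_snapshot_dissection_status",
                    {"snapshot_id": snapshot_id}).get("status") != "completed":
        time.sleep(10)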

Tool: start_snapshot_dissection

Dissection takes time proportional to snapshot size — it parses every packet, reassembles streams, and builds the index. After completion, these tools become available:

  • list_api_calls — Search API transactions with KFL filters
  • get_api_call — Drill into a specific call (headers, body, timing, payload)
  • get_api_stats — Aggregated statistics (throughput, error rates, latency)

Every Question Is a Query

Every user prompt that involves APIs, workloads, services, pods, namespaces, or Kubernetes semantics should translate into a list_api_calls call with an appropriate KFL filter. Do not answer from memory or prior results — always run a fresh query that matches what the user is asking.

Examples of user prompts and the queries they should trigger:

User says                                        Action
"Show me all 500 errors"                         list_api_calls with KFL: http && status_code == 500
"What's hitting the payment service?"            list_api_calls with KFL: dst.service.name == "payment-service"
"Any DNS failures?"                              list_api_calls with KFL: dns && status_code != 0
"Show traffic from namespace prod to staging"    list_api_calls with KFL: src.pod.namespace == "prod" && dst.pod.namespace == "staging"
"What are the slowest API calls?"                list_api_calls with KFL: http && elapsed_time > 5000000

The user's natural language maps to KFL. Your job is to translate intent into the right filter and run the query — don't summarize old results or speculate without fresh data.

Investigation Strategy

Start broad, then narrow:

  1. get_api_stats — Get the overall picture: error rates, latency percentiles, throughput. Look for spikes or anomalies.
  2. list_api_calls filtered by error codes (4xx, 5xx) or high latency — find the problematic transactions.
  3. get_api_call on specific calls — inspect headers, bodies, timing, and full payload to understand what went wrong.
  4. Use KFL filters to slice by namespace, service, protocol, or any combination.

Example

list_api_calls response (filtered to http && status_code >= 500, timestamps converted from UTC to local):

┌──────────────────────────────────────────┬────────┬──────────────────────────┬────────┬───────────┐
│                Timestamp                 │ Method │           URL            │ Status │  Elapsed  │
├──────────────────────────────────────────┼────────┼──────────────────────────┼────────┼───────────┤
│ 2026-03-14 19:23:45 IST (17:23:45 UTC)  │ POST   │ /api/v1/orders/charge    │ 503    │ 12,340 ms │
│ 2026-03-14 19:23:46 IST (17:23:46 UTC)  │ POST   │ /api/v1/orders/charge    │ 503    │ 11,890 ms │
│ 2026-03-14 19:23:48 IST (17:23:48 UTC)  │ GET    │ /api/v1/inventory/check  │ 500    │  8,210 ms │
│ 2026-03-14 19:24:01 IST (17:24:01 UTC)  │ POST   │ /api/v1/payments/process │ 502    │ 30,000 ms │
└──────────────────────────────────────────┴────────┴──────────────────────────┴────────┴───────────┘
Src: api-gateway (prod)  →  Dst: payment-service (prod)

Use the pattern of repeated failures and high latency to identify the failing service chain, then drill into individual calls with get_api_call.

KFL Filters for Dissected Traffic

Layer filters progressively when investigating:

// Step 1: Protocol + namespace
http && dst.pod.namespace == "production"

// Step 2: Add error condition
http && dst.pod.namespace == "production" && status_code >= 500

// Step 3: Narrow to service
http && dst.pod.namespace == "production" && status_code >= 500 && dst.service.name == "payment-service"

// Step 4: Narrow to endpoint
http && dst.pod.namespace == "production" && status_code >= 500 && dst.service.name == "payment-service" && path.contains("/charge")

Other common RCA filters:

dns && dns_response && status_code != 0              // Failed DNS lookups
src.service.namespace != dst.service.namespace        // Cross-namespace traffic
http && elapsed_time > 5000000                        // Slow transactions (> 5s)
conn && conn_state == "open" && conn_local_bytes > 1000000  // High-volume connections

Combining Both Routes

The two routes are complementary. A common pattern:

  1. Start with Dissection — let the AI agent search and identify the root cause
  2. Once you've pinpointed the problematic workloads, use list_workloads to get their IPs (singular lookup by name + namespace, or filtered scan by namespace/regex/labels against the snapshot)
  3. Switch to PCAP — export a filtered PCAP of just those workloads for Wireshark deep-dives, sharing with the network team, or compliance archival

Use Cases

Post-Incident RCA

  1. Identify the incident time window from alerts, logs, or user reports
  2. Check
    get_data_boundaries
    — is the window still in raw capture?
  3. create_snapshot
    covering the incident window (add 15 minutes buffer)
  4. Dissection route:
    start_snapshot_dissection
    get_api_stats
    list_api_calls
    get_api_call
    → follow the dependency chain
  5. PCAP route:
    list_workloads
    export_snapshot_pcap
    with BPF → hand off to Wireshark or archive

Other Use Cases

  • Trend analysis — Take snapshots at regular intervals and compare
    get_api_stats
    across them to detect latency drift, error rate changes, or new service-to-service connections.
  • Forensic preservation
    create_snapshot
    +
    upload_snapshot_to_cloud
    for immutable, long-term evidence. Downloadable to any cluster months later.
  • Production-to-local replay — Upload a production snapshot to cloud, download it on a local KinD cluster, and investigate safely.

Setup Reference

For CLI installation, MCP configuration, verification, and troubleshooting, see references/setup.md.