Skills-for-fabric eventhouse-consumption-cli

install
source · Clone the upstream repo
git clone https://github.com/microsoft/skills-for-fabric
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/microsoft/skills-for-fabric "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/eventhouse-consumption-cli" ~/.claude/skills/microsoft-skills-for-fabric-eventhouse-consumption-cli && rm -rf "$T"
manifest: skills/eventhouse-consumption-cli/SKILL.md
source content

Update Check — ONCE PER SESSION (mandatory)

The first time this skill is used in a session, run the check-updates skill before proceeding.

  • GitHub Copilot CLI / VS Code: invoke the check-updates skill.
  • Claude Code / Cowork / Cursor / Windsurf / Codex: compare the local vs remote package.json version (see the sketch below).
  • Skip if the check was already performed earlier in this session.
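
For the Claude Code / Codex path, a minimal version-comparison sketch; the local install path and the raw GitHub URL below are assumptions based on the install command above (adjust them to wherever the skill is actually installed):

# Compare local vs remote package.json version (path and URL are assumptions)
LOCAL=~/.claude/skills/microsoft-skills-for-fabric-eventhouse-consumption-cli/package.json
REMOTE="https://raw.githubusercontent.com/microsoft/skills-for-fabric/main/skills/eventhouse-consumption-cli/package.json"

local_ver=$(jq -r '.version' "$LOCAL" 2>/dev/null)
remote_ver=$(curl -fsSL "$REMOTE" | jq -r '.version')

if [ "$local_ver" != "$remote_ver" ]; then
  echo "Update available: local=$local_ver remote=$remote_ver"
else
  echo "Skill is up to date ($local_ver)"
fi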

CRITICAL NOTES

  1. To find workspace details (including its ID) from a workspace name: list all workspaces, then filter with JMESPath.
  2. To find item details (including its ID) from a workspace ID, item type, and item name: list all items of that type in that workspace, then filter with JMESPath (see the sketch below).
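
Both lookups can be done with az rest's built-in JMESPath --query flag, as sketched below (the workspace and database names are placeholders):

# 1. Workspace ID from workspace name
WS_ID=$(az rest --method GET \
  --url "https://api.fabric.microsoft.com/v1/workspaces" \
  --resource "https://api.fabric.microsoft.com" \
  --query "value[?displayName=='MyWorkspace'].id | [0]" -o tsv)

# 2. Item ID from workspace ID, item type (kqlDatabases), and item name
DB_ID=$(az rest --method GET \
  --url "https://api.fabric.microsoft.com/v1/workspaces/${WS_ID}/kqlDatabases" \
  --resource "https://api.fabric.microsoft.com" \
  --query "value[?displayName=='MyKqlDatabase'].id | [0]" -o tsv)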

eventhouse-consumption-cli — Read-Only KQL Queries via CLI

Table of Contents

| Task | Reference | Notes |
| --- | --- | --- |
| Finding Workspaces and Items in Fabric | COMMON-CLI.md § Finding Workspaces and Items in Fabric | Mandatory: read this link first (needed for finding a workspace ID by name, or an item ID by name, item type, and workspace ID) |
| Fabric Topology & Key Concepts | COMMON-CORE.md § Fabric Topology & Key Concepts | |
| Environment URLs | COMMON-CORE.md § Environment URLs | KQL Cluster URI is per-item |
| Authentication & Token Acquisition | COMMON-CORE.md § Authentication & Token Acquisition | Wrong audience = 401; read before any auth issue |
| Core Control-Plane REST APIs | COMMON-CORE.md § Core Control-Plane REST APIs | |
| Pagination | COMMON-CORE.md § Pagination | |
| Long-Running Operations (LRO) | COMMON-CORE.md § Long-Running Operations (LRO) | |
| Rate Limiting & Throttling | COMMON-CORE.md § Rate Limiting & Throttling | |
| OneLake Data Access | COMMON-CORE.md § OneLake Data Access | Requires storage.azure.com token, not Fabric token |
| Job Execution | COMMON-CORE.md § Job Execution | |
| Capacity Management | COMMON-CORE.md § Capacity Management | |
| Gotchas & Troubleshooting | COMMON-CORE.md § Gotchas & Troubleshooting | |
| Best Practices | COMMON-CORE.md § Best Practices | |
| Tool Selection Rationale | COMMON-CLI.md § Tool Selection Rationale | |
| Authentication Recipes | COMMON-CLI.md § Authentication Recipes | az login flows and token acquisition |
| Fabric Control-Plane API via az rest | COMMON-CLI.md § Fabric Control-Plane API via az rest | Always pass --resource https://api.fabric.microsoft.com or az rest fails |
| Pagination Pattern | COMMON-CLI.md § Pagination Pattern | |
| Long-Running Operations (LRO) Pattern | COMMON-CLI.md § Long-Running Operations (LRO) Pattern | |
| OneLake Data Access via curl | COMMON-CLI.md § OneLake Data Access via curl | Use curl, not az rest (different token audience) |
| Job Execution (CLI) | COMMON-CLI.md § Job Execution | |
| OneLake Shortcuts | COMMON-CLI.md § OneLake Shortcuts | |
| Capacity Management (CLI) | COMMON-CLI.md § Capacity Management | |
| Composite Recipes | COMMON-CLI.md § Composite Recipes | |
| Gotchas & Troubleshooting (CLI-Specific) | COMMON-CLI.md § Gotchas & Troubleshooting (CLI-Specific) | az rest audience, shell escaping, token expiry |
| Quick Reference: az rest Template | COMMON-CLI.md § Quick Reference: az rest Template | |
| Quick Reference: Token Audience / CLI Tool Matrix | COMMON-CLI.md § Quick Reference: Token Audience ↔ CLI Tool Matrix | Which --resource + tool for each service |
| Connection Fundamentals | EVENTHOUSE-CONSUMPTION-CORE.md § Connection Fundamentals | Cluster URI discovery, az rest, REST API |
| Schema Discovery and Security | EVENTHOUSE-CONSUMPTION-CORE.md § Schema Discovery and Security | Schema Discovery; Security (workspace roles + KQL DB roles) |
| Monitoring and Diagnostics | EVENTHOUSE-CONSUMPTION-CORE.md § Monitoring and Diagnostics | |
| Performance Best Practices | EVENTHOUSE-CONSUMPTION-CORE.md § Performance Best Practices | Read before writing KQL: time filters, has vs contains |
| Common Consumption Patterns | EVENTHOUSE-CONSUMPTION-CORE.md § Common Consumption Patterns | Time-series, Top-N, percentile, dynamic fields |
| Gotchas, Troubleshooting, and Quick Reference | EVENTHOUSE-CONSUMPTION-CORE.md § Gotchas, Troubleshooting, and Quick Reference | Gotchas and Troubleshooting (12 issues); Quick Reference: Consumption Capabilities by Scenario |
| Table and Column Discovery | discovery-queries.md § Table and Column Discovery | Table Discovery, Column Statistics |
| Function and View Discovery | discovery-queries.md § Function and View Discovery | Function Discovery, Materialized View Discovery |
| Policy Discovery | discovery-queries.md § Policy Discovery | |
| External Tables and Ingestion Mappings | discovery-queries.md § External Tables and Ingestion Mappings | External Table Discovery, Ingestion Mapping Discovery |
| Security Discovery | discovery-queries.md § Security Discovery | |
| Database Overview Script | discovery-queries.md § Database Overview Script | |
| Tool Stack | SKILL.md § Tool Stack | |
| Connection | SKILL.md § Connection | Eventhouse-specific az rest connection steps |
| Agentic Exploration ("Chat With My Data") | SKILL.md § Agentic Exploration | Start here for data exploration |
| Running Queries | SKILL.md § Running Queries | az rest, output formatting, export |
| Monitoring | SKILL.md § Monitoring | |
| Must / Prefer / Avoid / Troubleshooting | SKILL.md § Must / Prefer / Avoid / Troubleshooting | MUST DO / AVOID / PREFER checklists |
| Examples | SKILL.md § Examples | |
| Agent Integration Notes | SKILL.md § Agent Integration Notes | |

Tool Stack

| Tool | Purpose | Install |
| --- | --- | --- |
| az cli | KQL queries and management commands via the Kusto REST API; Fabric control-plane discovery | winget install Microsoft.AzureCLI |
| jq | JSON processing and output formatting | winget install jqlang.jq |
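
A quick sanity check that both tools are installed and an Azure CLI login exists (a minimal sketch):

# Verify the tool stack and an active login
az version --output none && echo "az: OK"
jq --version
az account show --output none 2>/dev/null || az login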

Connection

Step 1 — Discover KQL Database Query URI

# Get workspace ID (if not known)
WS_ID=$(az rest --method GET \
  --url "https://api.fabric.microsoft.com/v1/workspaces" \
  --resource "https://api.fabric.microsoft.com" \
  | jq -r '.value[] | select(.displayName=="MyWorkspace") | .id')

# List KQL Databases and get connection properties
az rest --method GET \
  --url "https://api.fabric.microsoft.com/v1/workspaces/${WS_ID}/kqlDatabases" \
  --resource "https://api.fabric.microsoft.com" \
  | jq '.value[] | {name: .displayName, id: .id, queryUri: .properties.queryServiceUri, dbName: .properties.databaseName}'

Step 2 — Set Connection Variables

CLUSTER_URI="https://<cluster>.kusto.fabric.microsoft.com"
DB_NAME="MyKqlDatabase"
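
The two variables can also be filled directly from the Step 1 listing instead of being pasted by hand; a sketch, with the database display name as a placeholder:

# Derive connection variables from the kqlDatabases listing
read -r CLUSTER_URI DB_NAME < <(az rest --method GET \
  --url "https://api.fabric.microsoft.com/v1/workspaces/${WS_ID}/kqlDatabases" \
  --resource "https://api.fabric.microsoft.com" \
  | jq -r '.value[] | select(.displayName=="MyKqlDatabase") | "\(.properties.queryServiceUri) \(.properties.databaseName)"')
echo "Cluster: ${CLUSTER_URI}  Database: ${DB_NAME}"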

Step 3 — Verify Connection

Important — body file pattern: KQL queries contain | (pipe) characters, which break shell escaping in both bash and PowerShell. Always write the JSON body to a temp file and reference it with --body @<file>. This is the recommended approach for all az rest KQL calls. On PowerShell, use @{db="X";csl="..."} | ConvertTo-Json -Compress | Out-File $env:TEMP\kql_body.json -Encoding utf8NoBOM, then --body "@$env:TEMP\kql_body.json".

# Write body to temp file (avoids pipe escaping issues)
cat > /tmp/kql_body.json << 'EOF'
{"db":"MyKqlDatabase","csl":"print Message = 'Connected successfully', Cluster = current_cluster_endpoint(), Timestamp = now()"}
EOF

az rest --method POST \
  --url "${CLUSTER_URI}/v1/rest/query" \
  --resource "https://kusto.kusto.windows.net" \
  --headers "Content-Type=application/json" \
  --body @/tmp/kql_body.json \
  | jq '.Tables[0].Rows'
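
To avoid repeating the temp-file boilerplate, the pattern can be wrapped in a small shell function. This is a hypothetical helper (not part of the skill) that also sidesteps pipe escaping by letting jq build the JSON body:

# Hypothetical helper: run one KQL statement against ${CLUSTER_URI}/${DB_NAME}
run_kql() {
  local csl="$1" body_file
  body_file=$(mktemp)
  jq -n --arg db "$DB_NAME" --arg csl "$csl" '{db: $db, csl: $csl}' > "$body_file"
  az rest --method POST \
    --url "${CLUSTER_URI}/v1/rest/query" \
    --resource "https://kusto.kusto.windows.net" \
    --headers "Content-Type=application/json" \
    --body @"$body_file"
  rm -f "$body_file"
}

# Usage
run_kql ".show tables" | jq '.Tables[0].Rows'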

Agentic Exploration

"Chat With My Data" — Discovery Sequence

When the user asks to explore or query an Eventhouse without specifying tables:

Step 1 → .show tables                                    // discover tables
Step 2 → .show table <TABLE> schema as json              // understand columns + types
Step 3 → <TABLE> | take 10                               // see sample data
Step 4 → <TABLE> | summarize count() by bin(Timestamp, 1h) | render timechart  // shape of data
Step 5 → Formulate targeted query based on user's question
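
For step 2, the schema comes back as a JSON string inside the result set. A sketch for pulling out column names and types with jq, assuming the command returns a Schema column whose cell holds the OrderedColumns JSON (verify against actual output; the table name is a placeholder):

# Extract column names and KQL types from ".show table <TABLE> schema as json"
cat > /tmp/kql_body.json << EOF
{"db":"${DB_NAME}","csl":".show table AppEvents schema as json"}
EOF

az rest --method POST \
  --url "${CLUSTER_URI}/v1/rest/query" \
  --resource "https://kusto.kusto.windows.net" \
  --headers "Content-Type=application/json" \
  --body @/tmp/kql_body.json \
  | jq -r '.Tables[0] as $t
           | ($t.Columns | map(.ColumnName) | index("Schema")) as $i
           | $t.Rows[0][$i] | fromjson
           | .OrderedColumns[] | "\(.Name)\t\(.CslType)"'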

Schema-Aware Query Generation

After schema discovery, generate queries using actual column names and types:

// Example: user asks "show me errors in the last hour"
// After discovering table "AppEvents" with columns: Timestamp, Level, Message, Source
AppEvents
| where Timestamp > ago(1h)
| where Level == "Error"
| summarize ErrorCount = count() by Source, bin(Timestamp, 5m)
| order by ErrorCount desc

Running Queries

Via az rest

Always use the temp-file pattern for --body — KQL pipes (|) break inline shell escaping.

# Run a KQL query
cat > /tmp/kql_body.json << 'EOF'
{"db":"MyDB","csl":"Events | where Timestamp > ago(1h) | count"}
EOF

az rest --method POST \
  --url "${CLUSTER_URI}/v1/rest/query" \
  --resource "https://kusto.kusto.windows.net" \
  --headers "Content-Type=application/json" \
  --body @/tmp/kql_body.json \
  | jq '.Tables[0].Rows'

Output Formatting

# Pretty-print results as a table with jq
cat > /tmp/kql_body.json << 'EOF'
{"db":"MyDB","csl":".show tables"}
EOF

az rest --method POST \
  --url "${CLUSTER_URI}/v1/rest/query" \
  --resource "https://kusto.kusto.windows.net" \
  --headers "Content-Type=application/json" \
  --body @/tmp/kql_body.json \
  | jq '.Tables[0] | [.Columns[].ColumnName] as $cols | .Rows[] | [$cols, .] | transpose | map({(.[0]): .[1]}) | add'

# Save results to file
cat > /tmp/kql_body.json << 'EOF'
{"db":"MyDB","csl":"Events | where Timestamp > ago(1h) | summarize count() by EventType"}
EOF

az rest --method POST \
  --url "${CLUSTER_URI}/v1/rest/query" \
  --resource "https://kusto.kusto.windows.net" \
  --headers "Content-Type=application/json" \
  --body @/tmp/kql_body.json \
  --output-file results.json
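
For a quick look in the terminal, the same result can be flattened to TSV and aligned with column (a sketch; column -t is available on most Linux/macOS systems):

# Display results as an aligned terminal table
cat > /tmp/kql_body.json << 'EOF'
{"db":"MyDB","csl":"Events | where Timestamp > ago(1h) | summarize count() by EventType"}
EOF

az rest --method POST \
  --url "${CLUSTER_URI}/v1/rest/query" \
  --resource "https://kusto.kusto.windows.net" \
  --headers "Content-Type=application/json" \
  --body @/tmp/kql_body.json \
  | jq -r '.Tables[0] | (.Columns | map(.ColumnName)), (.Rows[]) | @tsv' \
  | column -t -s $'\t'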

Monitoring

// Active queries
.show queries

// Recent commands (last hour)
.show commands
| where StartedOn > ago(1h)
| project StartedOn, CommandType, Text = substring(Text, 0, 80), Duration, State
| order by StartedOn desc

// Ingestion failures (for context when data seems stale)
.show ingestion failures
| where FailedOn > ago(24h)
| summarize count() by ErrorCode
| top 5 by count_
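
These management commands run through the same az rest body-file pattern used for queries elsewhere in this skill (a sketch):

# Check for active or competing queries
cat > /tmp/kql_body.json << EOF
{"db":"${DB_NAME}","csl":".show queries"}
EOF

az rest --method POST \
  --url "${CLUSTER_URI}/v1/rest/query" \
  --resource "https://kusto.kusto.windows.net" \
  --headers "Content-Type=application/json" \
  --body @/tmp/kql_body.json \
  | jq '.Tables[0].Rows'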

Must / Prefer / Avoid / Troubleshooting

Must

  • Always include time filters: where Timestamp > ago(...) must be present on time-series tables.
  • Discover schema before querying — run .show tables and .show table T schema as json first.
  • Use has for term search — indexed and fast; only fall back to contains for substring needs.
  • Verify the cluster URI — KQL Database URIs are per-item; always resolve via the Fabric REST API.

Prefer

  • az rest for CLI query sessions; the Fabric KQL MCP server for agent-integrated workflows.
  • project early to drop unneeded columns before aggregation.
  • materialize() when a sub-expression is used multiple times.
  • take 100 for initial exploration; avoid full table scans.
  • render timechart for time-series; render piechart for distributions.

Avoid

  • contains on large tables — full scan, not indexed. Use has or has_cs.
  • join without filtering both sides first — causes memory explosion.
  • The SELECT * equivalent (project with all columns) on wide tables.
  • Missing bin() in a time-series summarize — produces one row per unique timestamp.
  • Hardcoded cluster URIs — always resolve from the Fabric REST API or environment variables.

Troubleshooting

| Symptom | Fix |
| --- | --- |
| az rest auth fails | Run az login first; ensure --resource "https://kusto.kusto.windows.net" is set |
| Empty results on a valid table | Check the database context; may need database("name").table |
| Query timeout | Add a tighter time filter; check .show queries for competing queries |
| Forbidden (403) | Request the viewer role on the KQL Database |
| Results truncated | Default limit is 500K rows; add set truncationmaxrecords = N; before the query |
| KQL pipe character breaks PowerShell or bash | Never inline KQL in --body; write the JSON body to a temp file and use --body @file.json (see Running Queries) |
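
When az rest auth fails, a quick way to confirm the login and the Kusto token audience is to request a token directly (a minimal check):

# Confirm the current login can obtain a Kusto-audience token
az account show --query user.name -o tsv
az account get-access-token --resource "https://kusto.kusto.windows.net" --query expiresOn -o tsv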

Examples

Example 1: Discover and Query

# 1. Set connection variables (after discovering URI via Step 1)
CLUSTER_URI="https://<your-cluster>.kusto.fabric.microsoft.com"
DB_NAME="SalesDB"

# 2. Discover tables
cat > /tmp/kql_body.json << EOF
{"db":"${DB_NAME}","csl":".show tables"}
EOF
az rest --method POST \
  --url "${CLUSTER_URI}/v1/rest/query" \
  --resource "https://kusto.kusto.windows.net" \
  --headers "Content-Type=application/json" \
  --body @/tmp/kql_body.json \
  | jq '.Tables[0].Rows'

# 3. Explore schema
cat > /tmp/kql_body.json << EOF
{"db":"${DB_NAME}","csl":".show table Orders schema as json"}
EOF
az rest --method POST \
  --url "${CLUSTER_URI}/v1/rest/query" \
  --resource "https://kusto.kusto.windows.net" \
  --headers "Content-Type=application/json" \
  --body @/tmp/kql_body.json \
  | jq '.Tables[0].Rows'

# 4. Sample data
cat > /tmp/kql_body.json << EOF
{"db":"${DB_NAME}","csl":"Orders | take 10"}
EOF
az rest --method POST \
  --url "${CLUSTER_URI}/v1/rest/query" \
  --resource "https://kusto.kusto.windows.net" \
  --headers "Content-Type=application/json" \
  --body @/tmp/kql_body.json \
  | jq '.Tables[0].Rows'

// 5. Analytical query (run via az rest --body @file, as shown below)
Orders
| where OrderDate > ago(30d)
| summarize
    TotalOrders = count(),
    TotalRevenue = sum(Amount)
    by bin(OrderDate, 1d)
| render timechart
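
Step 5 runs through the same body-file pattern; a sketch that collapses the multi-line KQL into the csl field (the render line is dropped because it only tells a client how to chart the result):

# 5. Run the analytical query via az rest --body @file
cat > /tmp/kql_body.json << EOF
{"db":"${DB_NAME}","csl":"Orders | where OrderDate > ago(30d) | summarize TotalOrders = count(), TotalRevenue = sum(Amount) by bin(OrderDate, 1d)"}
EOF

az rest --method POST \
  --url "${CLUSTER_URI}/v1/rest/query" \
  --resource "https://kusto.kusto.windows.net" \
  --headers "Content-Type=application/json" \
  --body @/tmp/kql_body.json \
  | jq '.Tables[0].Rows'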

Example 2: Cross-Database Query

// Query across KQL databases in the same Eventhouse
let orders = database("SalesDB").Orders | where OrderDate > ago(7d);
let products = database("CatalogDB").Products;
orders
| join kind=inner (products) on ProductId
| summarize Revenue = sum(Amount) by ProductName
| top 10 by Revenue desc

Example 3: Export Results to File

# Run query and save results to JSON
cat > /tmp/kql_body.json << 'EOF'
{"db":"MyDB","csl":"Events | where Timestamp > ago(1d) | summarize count() by EventType"}
EOF

az rest --method POST \
  --url "${CLUSTER_URI}/v1/rest/query" \
  --resource "https://kusto.kusto.windows.net" \
  --headers "Content-Type=application/json" \
  --body @/tmp/kql_body.json \
  --output-file results.json

# Convert to CSV with jq
cat results.json \
  | jq -r '.Tables[0] | (.Columns | map(.ColumnName)), (.Rows[]) | @csv' > results.csv

Agent Integration Notes

  • This skill is read-only — it does not create, alter, or drop database objects.
  • For authoring operations (table management, ingestion, policies), delegate to eventhouse-authoring-cli.
  • For cross-workload orchestration (Spark + SQL + KQL), delegate to the FabricDataEngineer agent.
  • The Fabric KQL MCP server (fabric-kql in mcp-setup/mcp-config-template.json) can be used as an alternative to az rest for agent-integrated query execution.