Skills-for-fabric eventhouse-authoring-cli

install
source · Clone the upstream repo
git clone https://github.com/microsoft/skills-for-fabric
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/microsoft/skills-for-fabric "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/eventhouse-authoring-cli" ~/.claude/skills/microsoft-skills-for-fabric-eventhouse-authoring-cli && rm -rf "$T"
manifest: skills/eventhouse-authoring-cli/SKILL.md
source content

Update Check — ONCE PER SESSION (mandatory)

The first time this skill is used in a session, run the check-updates skill before proceeding.

  • GitHub Copilot CLI / VS Code: invoke the check-updates skill.
  • Claude Code / Cowork / Cursor / Windsurf / Codex: compare local vs remote package.json version.
  • Skip if the check was already performed earlier in this session.
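For agents that do the version comparison manually, a minimal sketch of the check is below. The local path and raw-content URL in the comments are assumptions about where package.json lives, not verified facts; the comparison itself is exercised on fixed values.

```shell
# Hypothetical local/remote versions; the commented lines show where the real
# values would come from (path and URL are assumptions).
LOCAL="1.4.0"; REMOTE="1.5.0"
# LOCAL=$(jq -r .version ~/.claude/skills/microsoft-skills-for-fabric-eventhouse-authoring-cli/package.json)
# REMOTE=$(curl -s https://raw.githubusercontent.com/microsoft/skills-for-fabric/main/package.json | jq -r .version)

# sort -V orders version strings; if the newest is not the local one, update
NEWEST=$(printf '%s\n%s\n' "$LOCAL" "$REMOTE" | sort -V | tail -n1)
if [ "$NEWEST" != "$LOCAL" ]; then
  echo "update available: $LOCAL -> $REMOTE"
else
  echo "up to date"
fi
```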

CRITICAL NOTES

  1. To find a workspace's details (including its ID) from the workspace name: list all workspaces, then filter with JMESPath.
  2. To find an item's details (including its ID) from a workspace ID, item type, and item name: list all items of that type in that workspace, then filter with JMESPath.
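As a concrete illustration of note 1, the sketch below filters a workspace list by display name using jq (the skill's JSON tool); the same filter could instead be expressed as a JMESPath `--query` on the az call. The response here is a hypothetical two-workspace sample standing in for the live control-plane call, and the IDs are made up.

```shell
# Hypothetical control-plane response; in practice it comes from:
#   az rest --method GET --url "https://api.fabric.microsoft.com/v1/workspaces" \
#     --resource "https://api.fabric.microsoft.com"
RESPONSE='{"value":[{"id":"aaa-111","displayName":"Sales"},{"id":"bbb-222","displayName":"Finance"}]}'

# Select the workspace whose displayName matches, and extract its id
WS_ID=$(printf '%s' "$RESPONSE" | jq -r '.value[] | select(.displayName=="Finance") | .id')
echo "$WS_ID"   # prints bbb-222
```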

eventhouse-authoring-cli — Eventhouse Authoring and Management via CLI

Table of Contents

| Task | Reference | Notes |
| --- | --- | --- |
| Finding Workspaces and Items in Fabric | COMMON-CLI.md § Finding Workspaces and Items in Fabric | Mandatory: READ link first [needed for workspace/item ID resolution] |
| Fabric Topology & Key Concepts | COMMON-CORE.md § Fabric Topology & Key Concepts | Hierarchy, Finding Things in Fabric |
| Environment URLs | COMMON-CORE.md § Environment URLs | KQL Cluster URI, KQL Ingestion URI |
| Authentication & Token Acquisition | COMMON-CORE.md § Authentication & Token Acquisition | Wrong audience = 401; KQL audience: kusto.kusto.windows.net |
| Core Control-Plane REST APIs | COMMON-CORE.md § Core Control-Plane REST APIs | List Workspaces, List Items, Item Creation |
| Pagination | COMMON-CORE.md § Pagination | |
| Long-Running Operations (LRO) | COMMON-CORE.md § Long-Running Operations (LRO) | |
| Rate Limiting & Throttling | COMMON-CORE.md § Rate Limiting & Throttling | |
| OneLake Data Access | COMMON-CORE.md § OneLake Data Access | Requires storage.azure.com token, not Fabric token |
| Job Execution | COMMON-CORE.md § Job Execution | |
| Capacity Management | COMMON-CORE.md § Capacity Management | |
| Gotchas & Troubleshooting | COMMON-CORE.md § Gotchas & Troubleshooting | |
| Best Practices | COMMON-CORE.md § Best Practices | |
| Tool Selection Rationale | COMMON-CLI.md § Tool Selection Rationale | |
| Authentication Recipes | COMMON-CLI.md § Authentication Recipes | az login flows and token acquisition |
| Fabric Control-Plane API via az rest | COMMON-CLI.md § Fabric Control-Plane API via az rest | Always pass --resource https://api.fabric.microsoft.com or az rest fails |
| Pagination Pattern | COMMON-CLI.md § Pagination Pattern | |
| Long-Running Operations (LRO) Pattern | COMMON-CLI.md § Long-Running Operations (LRO) Pattern | |
| OneLake Data Access via curl | COMMON-CLI.md § OneLake Data Access via curl | Use curl, not az rest (different token audience) |
| SQL / TDS Data-Plane Access | COMMON-CLI.md § SQL / TDS Data-Plane Access | sqlcmd (Go); not for KQL, but useful for cross-workload |
| Job Execution (CLI) | COMMON-CLI.md § Job Execution | |
| OneLake Shortcuts | COMMON-CLI.md § OneLake Shortcuts | |
| Capacity Management (CLI) | COMMON-CLI.md § Capacity Management | |
| Composite Recipes | COMMON-CLI.md § Composite Recipes | |
| Gotchas & Troubleshooting (CLI-Specific) | COMMON-CLI.md § Gotchas & Troubleshooting (CLI-Specific) | az rest audience, shell escaping, token expiry |
| Quick Reference: az rest Template | COMMON-CLI.md § Quick Reference: az rest Template | |
| Quick Reference: Token Audience / CLI Tool Matrix | COMMON-CLI.md § Quick Reference: Token Audience ↔ CLI Tool Matrix | Which --resource + tool for each service |
| Authoring Capability Matrix | EVENTHOUSE-AUTHORING-CORE.md § Authoring Capability Matrix | Read first — KQL Database vs Shortcut (read-only); connection requires Admin/Ingestor role |
| Table Management and Schema Evolution | EVENTHOUSE-AUTHORING-CORE.md § Table Management and Schema Evolution | Create Table, Create-Merge (idempotent), Alter / Rename / Drop, Schema Evolution (Rename, Swap/Blue-Green) |
| Ingestion and Data Mappings | EVENTHOUSE-AUTHORING-CORE.md § Ingestion and Data Mappings | Inline, Set-or-Append/Replace, From Storage, Streaming, Data Mappings (CSV, JSON) |
| Policies | EVENTHOUSE-AUTHORING-CORE.md § Policies | Retention, Caching, Partitioning, Merge |
| Materialized Views | EVENTHOUSE-AUTHORING-CORE.md § Materialized Views | Create, Alter, Lifecycle, Supported aggregations |
| Stored Functions and Update Policies | EVENTHOUSE-AUTHORING-CORE.md § Stored Functions and Update Policies | Stored Functions, Update Policies (auto-transform on ingestion) |
| External Tables | EVENTHOUSE-AUTHORING-CORE.md § External Tables | OneLake / ADLS External Table, Query External Table |
| Permission Model | EVENTHOUSE-AUTHORING-CORE.md § Permission Model | Database Roles, Grant Permissions |
| Authoring Gotchas and Troubleshooting | EVENTHOUSE-AUTHORING-CORE.md § Authoring Gotchas and Troubleshooting Reference | 10 numbered issues with cause + fix |
| Bash Templates | authoring-script-templates.md § Bash Templates | Create Table + Ingest, Schema Deployment, Export Schema, Set Retention/Caching |
| PowerShell Templates | authoring-script-templates.md § PowerShell Templates | Create Table + Ingest, Schema Deployment |
| Tool Stack | SKILL.md § Tool Stack | |
| Connection | SKILL.md § Connection | |
| Authoring Scope | SKILL.md § Authoring Scope | |
| Execute KQL Command | SKILL.md § Execute KQL Command | az rest pattern — write JSON body, then execute |
| Table Management via CLI | SKILL.md § Table Management via CLI | Create Table, Add Column, Drop Table |
| Data Ingestion via CLI | SKILL.md § Data Ingestion via CLI | Inline, From Storage, From OneLake, Set-or-Append |
| Policies via CLI | SKILL.md § Policies via CLI | Retention, Caching, Streaming Ingestion |
| Materialized Views via CLI | SKILL.md § Materialized Views via CLI | |
| Functions and Update Policies via CLI | SKILL.md § Functions and Update Policies via CLI | Create Function, Create Update Policy |
| Schema Evolution via CLI | SKILL.md § Schema Evolution via CLI | Safe Schema Deployment Script, Export Current Schema |
| Monitoring Authoring Operations | SKILL.md § Monitoring Authoring Operations | |
| Must / Prefer / Avoid / Troubleshooting | SKILL.md § Must / Prefer / Avoid / Troubleshooting | MUST DO / AVOID / PREFER checklists |
| Agentic Workflows | SKILL.md § Agentic Workflows | Exploration Before Authoring, Script Generation Workflow |
| Examples | SKILL.md § Examples | |
| Agent Integration Notes | SKILL.md § Agent Integration Notes | |

Tool Stack

| Tool | Purpose | Install |
| --- | --- | --- |
| az cli | KQL management commands via the Kusto REST API; Fabric control-plane discovery | winget install Microsoft.AzureCLI |
| jq | JSON processing and output formatting | winget install jqlang.jq |

Connection

Same as eventhouse-consumption-cli. Authoring requires elevated roles:

# Discover KQL Database query URI
WS_ID="<workspace-id>"
az rest --method GET \
  --url "https://api.fabric.microsoft.com/v1/workspaces/${WS_ID}/kqlDatabases" \
  --resource "https://api.fabric.microsoft.com" \
  | jq '.value[] | {name: .displayName, queryUri: .properties.queryServiceUri}'

# Set connection variables
CLUSTER_URI="https://<cluster>.kusto.fabric.microsoft.com"
DB_NAME="MyDatabase"

# Verify admin access
cat > /tmp/kql_body.json << EOF
{"db":"${DB_NAME}","csl":".show database ${DB_NAME} principals | where Role == 'Admin'"}
EOF
az rest --method POST \
  --url "${CLUSTER_URI}/v1/rest/mgmt" \
  --resource "https://kusto.kusto.windows.net" \
  --headers "Content-Type=application/json" \
  --body @/tmp/kql_body.json \
  | jq '.Tables[0].Rows'

Authoring Scope

| Operation | Command Pattern |
| --- | --- |
| Create table | .create-merge table T (cols) |
| Add column | .alter-merge table T (NewCol: type) |
| Drop table | .drop table T ifexists |
| Ingest data | .ingest into table T (...) |
| Set retention | .alter table T policy retention ... |
| Set caching | .alter table T policy caching hot = Nd |
| Create function | .create-or-alter function F() { ... } |
| Create materialized view | .create materialized-view MV on table T { ... } |
| Create update policy | .alter table T policy update ... |
| Create data mapping | .create table T ingestion csv mapping ... |

Execute KQL Command

All KQL management commands in this skill follow the same az rest pattern. After setting CLUSTER_URI and DB, write the JSON body to /tmp/kql_body.json and execute:

cat > /tmp/kql_body.json << EOF
{"db":"${DB}","csl":"<KQL management command>"}
EOF
az rest --method POST \
  --url "${CLUSTER_URI}/v1/rest/mgmt" \
  --resource "https://kusto.kusto.windows.net" \
  --headers "Content-Type=application/json" \
  --body @/tmp/kql_body.json \
  | jq '.Tables[0].Rows'

Nested JSON — For commands whose KQL contains embedded JSON (policies, mappings), use << 'EOF' (single-quoted) to prevent shell expansion of the backslash-escaped quotes, and replace ${DB} with the literal database name.
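An alternative sketch that sidesteps heredoc quoting rules entirely: let jq build the body, since values passed via --arg are JSON-escaped automatically. The retention command below is the same one used later in this skill; only jq and a writable /tmp are assumed.

```shell
DB="MyDB"
# KQL with embedded JSON: the single-quoted policy literal contains double quotes
CSL=".alter table Events policy retention '{\"SoftDeletePeriod\":\"365.00:00:00\",\"Recoverability\":\"Enabled\"}'"

# jq escapes the embedded quotes itself, so there is no manual escaping to get wrong
jq -cn --arg db "$DB" --arg csl "$CSL" '{db: $db, csl: $csl}' > /tmp/kql_body.json

# The file is valid JSON and round-trips the command intact
jq -r '.csl' /tmp/kql_body.json
```

The resulting /tmp/kql_body.json can then be passed to az rest exactly as in the pattern above.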

PowerShell equivalent

@{db=$Database;csl=$Command} | ConvertTo-Json -Compress | Out-File $env:TEMP\kql_body.json -Encoding utf8NoBOM

then pass --body "@$env:TEMP\kql_body.json". See PowerShell Templates.


Table Management via CLI

Create Table (Idempotent)

cat > /tmp/kql_body.json << EOF
{"db":"${DB}","csl":".create-merge table Events (Timestamp: datetime, EventType: string, UserId: string, Properties: dynamic, Duration: real)"}
EOF

Execute /tmp/kql_body.json — see Execute KQL Command

Add Column

cat > /tmp/kql_body.json << EOF
{"db":"${DB}","csl":".alter-merge table Events (Region: string)"}
EOF

Execute /tmp/kql_body.json — see Execute KQL Command

Drop Table

cat > /tmp/kql_body.json << EOF
{"db":"${DB}","csl":".drop table Events ifexists"}
EOF

Execute /tmp/kql_body.json — see Execute KQL Command


Data Ingestion via CLI

Inline Ingestion (Testing)

cat > /tmp/kql_body.json << EOF
{"db":"${DB}","csl":".ingest inline into table Events <| 2025-01-15T10:00:00Z,Login,user1,{},0.5\n2025-01-15T10:01:00Z,Click,user2,{},0.2"}
EOF

Execute /tmp/kql_body.json — see Execute KQL Command

Ingest from Storage

cat > /tmp/kql_body.json << EOF
{"db":"${DB}","csl":".ingest into table Events (h'https://mystorage.blob.core.windows.net/data/events.csv.gz;impersonate') with (format='csv', ingestionMappingReference='EventsCsvMapping', ignoreFirstRecord=true)"}
EOF

Execute /tmp/kql_body.json — see Execute KQL Command

Ingest from OneLake

cat > /tmp/kql_body.json << EOF
{"db":"${DB}","csl":".ingest into table Events (h'abfss://workspace@onelake.dfs.fabric.microsoft.com/lakehouse.Lakehouse/Files/events.parquet;impersonate') with (format='parquet')"}
EOF

Execute /tmp/kql_body.json — see Execute KQL Command

Set-or-Append from Query

cat > /tmp/kql_body.json << EOF
{"db":"${DB}","csl":".set-or-append CleanEvents <| RawEvents | where IsValid == true | project Timestamp, EventType, UserId"}
EOF

Execute /tmp/kql_body.json — see Execute KQL Command


Policies via CLI

Retention

# Set 365-day retention
cat > /tmp/kql_body.json << 'EOF'
{"db":"MyDB","csl":".alter table Events policy retention '{\"SoftDeletePeriod\":\"365.00:00:00\",\"Recoverability\":\"Enabled\"}'"}
EOF

Execute /tmp/kql_body.json — see Execute KQL Command

Caching (Hot Cache)

# Keep last 30 days in hot cache
cat > /tmp/kql_body.json << EOF
{"db":"${DB}","csl":".alter table Events policy caching hot = 30d"}
EOF

Execute /tmp/kql_body.json — see Execute KQL Command

Streaming Ingestion

cat > /tmp/kql_body.json << EOF
{"db":"${DB}","csl":".alter table Events policy streamingingestion enable"}
EOF

Execute /tmp/kql_body.json — see Execute KQL Command


Materialized Views via CLI

# Create materialized view with backfill
cat > /tmp/kql_body.json << EOF
{"db":"${DB}","csl":".create materialized-view with (backfill=true) HourlyEventCounts on table Events { Events | summarize Count = count(), LastSeen = max(Timestamp) by EventType, bin(Timestamp, 1h) }"}
EOF

Execute /tmp/kql_body.json — see Execute KQL Command

# Check health
cat > /tmp/kql_body.json << EOF
{"db":"${DB}","csl":".show materialized-view HourlyEventCounts statistics"}
EOF

Execute /tmp/kql_body.json — see Execute KQL Command


Functions and Update Policies via CLI

Create Function

cat > /tmp/kql_body.json << EOF
{"db":"${DB}","csl":".create-or-alter function with (docstring='Parse raw events', folder='ETL') ParseRawEvents() { RawEvents | extend Parsed = parse_json(RawData) | project Timestamp = todatetime(Parsed.timestamp), EventType = tostring(Parsed.eventType), UserId = tostring(Parsed.userId) }"}
EOF

Execute /tmp/kql_body.json — see Execute KQL Command

Create Update Policy

cat > /tmp/kql_body.json << 'EOF'
{"db":"MyDB","csl":".alter table ParsedEvents policy update @'[{\"IsEnabled\":true,\"Source\":\"RawEvents\",\"Query\":\"ParseRawEvents()\",\"IsTransactional\":true}]'"}
EOF

Execute /tmp/kql_body.json — see Execute KQL Command


Schema Evolution via CLI

Safe Schema Deployment Script

Save management commands in a .kql file (one per line), then execute each command via az rest:

# deploy_schema.kql contains one command per line:
# .create-merge table Events (Timestamp: datetime, EventType: string, UserId: string, Properties: dynamic)
# .create-merge table ParsedEvents (Timestamp: datetime, EventType: string, UserId: string, PageName: string)
# .alter table Events policy retention '{\"SoftDeletePeriod\":\"365.00:00:00\",\"Recoverability\":\"Enabled\"}'
# .alter table Events policy caching hot = 30d

# Execute each command from the file (see "Execute KQL Command" section)
while IFS= read -r cmd; do
  [[ "$cmd" =~ ^// ]] && continue   # skip comment lines
  [[ -z "$cmd" ]] && continue        # skip blank lines
  cat > /tmp/kql_body.json << EOF
{"db":"${DB}","csl":"${cmd}"}
EOF
  az rest --method POST \
    --url "${CLUSTER_URI}/v1/rest/mgmt" \
    --resource "https://kusto.kusto.windows.net" \
    --headers "Content-Type=application/json" \
    --body @/tmp/kql_body.json \
    | jq '.Tables[0].Rows'
done < deploy_schema.kql

Export Current Schema

cat > /tmp/kql_body.json << EOF
{"db":"${DB}","csl":".show database ${DB} schema as csl script"}
EOF
az rest --method POST \
  --url "${CLUSTER_URI}/v1/rest/mgmt" \
  --resource "https://kusto.kusto.windows.net" \
  --headers "Content-Type=application/json" \
  --body @/tmp/kql_body.json \
  | jq -r '.Tables[0].Rows[][0]' > current_schema.kql

Monitoring Authoring Operations

// Recent management commands
.show commands
| where StartedOn > ago(1h)
| project StartedOn, CommandType, Text = substring(Text, 0, 100), State, Duration
| order by StartedOn desc

// Ingestion failures
.show ingestion failures
| where FailedOn > ago(24h)
| summarize FailureCount = count() by ErrorCode, Table
| order by FailureCount desc

// Materialized view health
.show materialized-views
| project Name, IsEnabled, IsHealthy, MaterializedTo
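These monitoring queries run through the same az rest body-file pattern as everything else in this skill. A sketch for the first one, with the multi-line pipeline collapsed to a single line for JSON embedding (the database name is an illustrative default; the az rest call is shown commented because it needs a live cluster and az login):

```shell
DB="${DB:-MyDB}"   # illustrative default; set to your database name

# Collapse the multi-line KQL into one line inside the JSON body
cat > /tmp/kql_body.json << EOF
{"db":"${DB}","csl":".show commands | where StartedOn > ago(1h) | project StartedOn, CommandType, State | order by StartedOn desc"}
EOF

# Then execute as usual (see Execute KQL Command):
# az rest --method POST --url "${CLUSTER_URI}/v1/rest/mgmt" \
#   --resource "https://kusto.kusto.windows.net" \
#   --headers "Content-Type=application/json" \
#   --body @/tmp/kql_body.json | jq '.Tables[0].Rows'
```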

Must / Prefer / Avoid / Troubleshooting

Must

  • Clarify before acting on ambiguous prompts — if the request does not specify a target table, operation type, or schema (e.g. "set up my Eventhouse", "configure my database"), ask the user what they want to do. Never infer intent and apply management commands autonomously. Irreversible side-effects (policy changes, schema mutations, data ingestion) require explicit user intent.
  • Use idempotent commands: .create-merge table, .create-or-alter function, .create table ifnotexists.
  • Verify permissions before authoring: you must have the Admin or Ingestor role.
  • Test update policies by running the function independently before attaching.
  • Include impersonate in storage URIs when ingesting from OneLake or Blob Storage.

Prefer

  • az rest with a loop for deploying multi-command schema files.
  • Fabric KQL MCP server for agent-integrated ingestion and management workflows.
  • .create-merge table over .create table for safe schema evolution.
  • Materialized views over repeated expensive aggregation queries.
  • Script-based CI/CD: export schema with .show database DB schema as csl script and store it in git.

Avoid

  • .drop table without ifexists — fails on missing tables.
  • .alter table to add columns — use .alter-merge table instead (additive only).
  • Ingestion without mappings for CSV/JSON — column order or field names may not match.
  • Hardcoded storage URIs — parameterise in scripts.
  • Disabling materialized views without understanding the re-backfill cost.
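The "hardcoded storage URIs" point can be sketched as below. The account, container, and blob names are placeholders with environment-overridable defaults; the ;impersonate suffix is the same auth flag required by the Must checklist.

```shell
# Environment-overridable parameters with illustrative defaults
STORAGE_ACCOUNT="${STORAGE_ACCOUNT:-mystorage}"
CONTAINER="${CONTAINER:-data}"
BLOB_PATH="${BLOB_PATH:-events.csv.gz}"

# Compose the ingestion URI once; note the ;impersonate suffix
URI="https://${STORAGE_ACCOUNT}.blob.core.windows.net/${CONTAINER}/${BLOB_PATH};impersonate"
echo "$URI"
```

Scripts can then interpolate $URI into the .ingest command body instead of repeating a literal URI in every file.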

Troubleshooting

| Symptom | Fix |
| --- | --- |
| .create table fails "already exists" | Use .create-merge table or .create table ifnotexists |
| Ingestion succeeds but table is empty | Check data mappings: .show table T ingestion csv mappings |
| Update policy not firing | Verify the function runs standalone; check .show table T policy update |
| Forbidden (403) on management commands | Request the admin or ingestor database role |
| Materialized view stuck | Check .show materialized-view MV statistics; may need .disable / .enable |
| OneLake ingest auth error | Add ;impersonate to the abfss:// URI |

Agentic Workflows

Exploration Before Authoring

Always check for explicit intent before doing anything:

Step 0 → Is the request specific? Does it name a table, operation, and/or schema?
         → NO  → Ask: "What would you like to set up? Options: create tables,
                  configure policies, set up ingestion mappings, create materialized views."
                  STOP — do not proceed until user specifies.
         → YES → Continue to Step 1.
Step 1 → .show tables details                        // what exists?
Step 2 → .show table <TABLE> schema as json          // current columns
Step 3 → .show table <TABLE> policy retention        // current policies
Step 4 → Plan changes (create-merge, alter, etc.)
Step 5 → Execute changes
Step 6 → Verify: .show table <TABLE> schema as json  // confirm changes

Script Generation Workflow

Step 1 → Understand requirements from user
Step 2 → Generate KQL management commands
Step 3 → Save to .kql file
Step 4 → Deploy via az rest (one command at a time)
Step 5 → Verify deployed state matches intent
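Step 5 can be approximated by diffing the intended script against the exported live schema. In the sketch below the file names are illustrative and both files are written locally so the comparison itself can be demonstrated; in practice exported.kql would come from .show database DB schema as csl script (see Export Current Schema).

```shell
# Stand-in files: intended.kql is the generated script, exported.kql the live schema
printf '%s\n' '.create-merge table Events (Timestamp: datetime, EventType: string)' > /tmp/intended.kql
printf '%s\n' '.create-merge table Events (Timestamp: datetime, EventType: string)' > /tmp/exported.kql

# Silent compare; show a unified diff only when drift is detected
if diff -q /tmp/intended.kql /tmp/exported.kql > /dev/null; then
  echo "deployed schema matches intent"
else
  echo "drift detected"
  diff -u /tmp/intended.kql /tmp/exported.kql
fi
```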

Examples

Example 1: Create Table with Policies and Mapping

# Create table
cat > /tmp/kql_body.json << EOF
{"db":"${DB}","csl":".create-merge table SensorData (Timestamp: datetime, DeviceId: string, Temperature: real, Humidity: real, Location: dynamic)"}
EOF

Execute

/tmp/kql_body.json
— see Execute KQL Command

# Set retention
cat > /tmp/kql_body.json << 'EOF'
{"db":"MyDB","csl":".alter table SensorData policy retention '{\"SoftDeletePeriod\":\"90.00:00:00\",\"Recoverability\":\"Enabled\"}'"}
EOF

Execute

/tmp/kql_body.json
— see Execute KQL Command

# Set caching
cat > /tmp/kql_body.json << EOF
{"db":"${DB}","csl":".alter table SensorData policy caching hot = 7d"}
EOF

Execute

/tmp/kql_body.json
— see Execute KQL Command

# Create JSON mapping
cat > /tmp/kql_body.json << 'EOF'
{"db":"MyDB","csl":".create table SensorData ingestion json mapping 'SensorJsonMapping' '[{\"column\":\"Timestamp\",\"path\":\"$.ts\",\"datatype\":\"datetime\"},{\"column\":\"DeviceId\",\"path\":\"$.deviceId\",\"datatype\":\"string\"},{\"column\":\"Temperature\",\"path\":\"$.temp\",\"datatype\":\"real\"},{\"column\":\"Humidity\",\"path\":\"$.humidity\",\"datatype\":\"real\"},{\"column\":\"Location\",\"path\":\"$.location\",\"datatype\":\"dynamic\"}]'"}
EOF

Execute

/tmp/kql_body.json
— see Execute KQL Command

Example 2: ETL with Update Policy

// 1. Target table
.create-merge table ParsedLogs (Timestamp: datetime, Level: string, Message: string, Source: string)

// 2. Transform function
.create-or-alter function ParseRawLogs() {
    RawLogs
    | extend J = parse_json(RawMessage)
    | project
        Timestamp = todatetime(J.timestamp),
        Level = tostring(J.level),
        Message = tostring(J.message),
        Source = tostring(J.source)
}

// 3. Attach update policy
.alter table ParsedLogs policy update
@'[{"IsEnabled":true,"Source":"RawLogs","Query":"ParseRawLogs()","IsTransactional":true}]'

Agent Integration Notes

  • This skill covers authoring operations — creating/altering database objects and ingesting data.
  • For read-only queries and data exploration, delegate to eventhouse-consumption-cli.
  • For cross-workload orchestration, delegate to the FabricDataEngineer agent.
  • All management commands require elevated database roles (Admin or Ingestor).