Skills-for-fabric eventhouse-authoring-cli
Install

Source — clone the upstream repo:

```shell
git clone https://github.com/microsoft/skills-for-fabric
```

Claude Code — install into `~/.claude/skills/`:

```shell
T=$(mktemp -d) \
  && git clone --depth=1 https://github.com/microsoft/skills-for-fabric "$T" \
  && mkdir -p ~/.claude/skills \
  && cp -r "$T/skills/eventhouse-authoring-cli" ~/.claude/skills/microsoft-skills-for-fabric-eventhouse-authoring-cli \
  && rm -rf "$T"
```
Manifest: `skills/eventhouse-authoring-cli/SKILL.md`
Update Check — ONCE PER SESSION (mandatory)
The first time this skill is used in a session, run the check-updates skill before proceeding.
- GitHub Copilot CLI / VS Code: invoke the `check-updates` skill.
- Claude Code / Cowork / Cursor / Windsurf / Codex: compare local vs remote package.json version.
- Skip if the check was already performed earlier in this session.
CRITICAL NOTES
- To find workspace details (including its ID) from a workspace name: list all workspaces and then use JMESPath filtering.
- To find item details (including its ID) from a workspace ID, item type, and item name: list all items of that type in that workspace and then use JMESPath filtering.
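The list-then-filter step can be sketched offline with jq against a saved response (the workspace names and IDs below are invented for illustration; in practice the JSON comes from `az rest` against `/v1/workspaces`, and az's global `--query` flag accepts an equivalent JMESPath expression such as `value[?displayName=='Ops Telemetry'].id | [0]` directly):

```shell
# Illustrative response shape from GET /v1/workspaces (names and IDs are made up)
cat > /tmp/workspaces.json << 'EOF'
{"value":[
  {"id":"11111111-1111-1111-1111-111111111111","displayName":"Sales Analytics"},
  {"id":"22222222-2222-2222-2222-222222222222","displayName":"Ops Telemetry"}
]}
EOF

# Resolve the workspace ID by display name
WS_NAME="Ops Telemetry"
WS_ID=$(jq -r --arg n "$WS_NAME" '.value[] | select(.displayName == $n) | .id' /tmp/workspaces.json)
echo "$WS_ID"
```

The same select-by-name filter resolves item IDs from a `/workspaces/{id}/items` listing.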
eventhouse-authoring-cli — Eventhouse Authoring and Management via CLI
Table of Contents
| Task | Reference | Notes |
|---|---|---|
| Finding Workspaces and Items in Fabric | COMMON-CLI.md § Finding Workspaces and Items in Fabric | Mandatory — read the linked reference first (needed for workspace/item ID resolution) |
| Fabric Topology & Key Concepts | COMMON-CORE.md § Fabric Topology & Key Concepts | Hierarchy, Finding Things in Fabric |
| Environment URLs | COMMON-CORE.md § Environment URLs | KQL Cluster URI, KQL Ingestion URI |
| Authentication & Token Acquisition | COMMON-CORE.md § Authentication & Token Acquisition | Wrong audience = 401; KQL audience: `https://kusto.kusto.windows.net` |
| Core Control-Plane REST APIs | COMMON-CORE.md § Core Control-Plane REST APIs | List Workspaces, List Items, Item Creation |
| Pagination | COMMON-CORE.md § Pagination | |
| Long-Running Operations (LRO) | COMMON-CORE.md § Long-Running Operations (LRO) | |
| Rate Limiting & Throttling | COMMON-CORE.md § Rate Limiting & Throttling | |
| OneLake Data Access | COMMON-CORE.md § OneLake Data Access | Requires a Storage-audience token, not a Fabric token |
| Job Execution | COMMON-CORE.md § Job Execution | |
| Capacity Management | COMMON-CORE.md § Capacity Management | |
| Gotchas & Troubleshooting | COMMON-CORE.md § Gotchas & Troubleshooting | |
| Best Practices | COMMON-CORE.md § Best Practices | |
| Tool Selection Rationale | COMMON-CLI.md § Tool Selection Rationale | |
| Authentication Recipes | COMMON-CLI.md § Authentication Recipes | Auth flows and token acquisition |
| Fabric Control-Plane API via az rest | COMMON-CLI.md § Fabric Control-Plane API via az rest | Always pass `--resource` or the call fails |
| Pagination Pattern | COMMON-CLI.md § Pagination Pattern | |
| Long-Running Operations (LRO) Pattern | COMMON-CLI.md § Long-Running Operations (LRO) Pattern | |
| OneLake Data Access via curl | COMMON-CLI.md § OneLake Data Access via curl | Use the Storage audience, not the Fabric audience (different token audience) |
| SQL / TDS Data-Plane Access | COMMON-CLI.md § SQL / TDS Data-Plane Access | (Go) — not for KQL, but useful for cross-workload |
| Job Execution (CLI) | COMMON-CLI.md § Job Execution | |
| OneLake Shortcuts | COMMON-CLI.md § OneLake Shortcuts | |
| Capacity Management (CLI) | COMMON-CLI.md § Capacity Management | |
| Composite Recipes | COMMON-CLI.md § Composite Recipes | |
| Gotchas & Troubleshooting (CLI-Specific) | COMMON-CLI.md § Gotchas & Troubleshooting (CLI-Specific) | audience, shell escaping, token expiry |
| Quick Reference: az rest Template | COMMON-CLI.md § Quick Reference: az rest Template | |
| Quick Reference: Token Audience ↔ CLI Tool Matrix | COMMON-CLI.md § Quick Reference: Token Audience ↔ CLI Tool Matrix | Which token audience + tool for each service |
| Authoring Capability Matrix | EVENTHOUSE-AUTHORING-CORE.md § Authoring Capability Matrix | Read first — KQL Database vs Shortcut (read-only); connection requires Admin/Ingestor role |
| Table Management and Schema Evolution | EVENTHOUSE-AUTHORING-CORE.md § Table Management and Schema Evolution | Create Table, Create-Merge (idempotent), Alter / Rename / Drop, Schema Evolution (Rename, Swap/Blue-Green) |
| Ingestion and Data Mappings | EVENTHOUSE-AUTHORING-CORE.md § Ingestion and Data Mappings | Inline, Set-or-Append/Replace, From Storage, Streaming, Data Mappings (CSV, JSON) |
| Policies | EVENTHOUSE-AUTHORING-CORE.md § Policies | Retention, Caching, Partitioning, Merge |
| Materialized Views | EVENTHOUSE-AUTHORING-CORE.md § Materialized Views | Create, Alter, Lifecycle, Supported aggregations |
| Stored Functions and Update Policies | EVENTHOUSE-AUTHORING-CORE.md § Stored Functions and Update Policies | Stored Functions, Update Policies (auto-transform on ingestion) |
| External Tables | EVENTHOUSE-AUTHORING-CORE.md § External Tables | OneLake / ADLS External Table, Query External Table |
| Permission Model | EVENTHOUSE-AUTHORING-CORE.md § Permission Model | Database Roles, Grant Permissions |
| Authoring Gotchas and Troubleshooting | EVENTHOUSE-AUTHORING-CORE.md § Authoring Gotchas and Troubleshooting Reference | 10 numbered issues with cause + fix |
| Bash Templates | authoring-script-templates.md § Bash Templates | Create Table + Ingest, Schema Deployment, Export Schema, Set Retention/Caching |
| PowerShell Templates | authoring-script-templates.md § PowerShell Templates | Create Table + Ingest, Schema Deployment |
| Tool Stack | SKILL.md § Tool Stack | |
| Connection | SKILL.md § Connection | |
| Authoring Scope | SKILL.md § Authoring Scope | |
| Execute KQL Command | SKILL.md § Execute KQL Command | `az rest` pattern — write JSON body, then execute |
| Table Management via CLI | SKILL.md § Table Management via CLI | Create Table, Add Column, Drop Table |
| Data Ingestion via CLI | SKILL.md § Data Ingestion via CLI | Inline, From Storage, From OneLake, Set-or-Append |
| Policies via CLI | SKILL.md § Policies via CLI | Retention, Caching, Streaming Ingestion |
| Materialized Views via CLI | SKILL.md § Materialized Views via CLI | |
| Functions and Update Policies via CLI | SKILL.md § Functions and Update Policies via CLI | Create Function, Create Update Policy |
| Schema Evolution via CLI | SKILL.md § Schema Evolution via CLI | Safe Schema Deployment Script, Export Current Schema |
| Monitoring Authoring Operations | SKILL.md § Monitoring Authoring Operations | |
| Must / Prefer / Avoid / Troubleshooting | SKILL.md § Must / Prefer / Avoid / Troubleshooting | MUST DO / AVOID / PREFER checklists |
| Agentic Workflows | SKILL.md § Agentic Workflows | Exploration Before Authoring, Script Generation Workflow |
| Examples | SKILL.md § Examples | |
| Agent Integration Notes | SKILL.md § Agent Integration Notes | |
Tool Stack
| Tool | Purpose | Install |
|---|---|---|
| az cli | KQL management commands via Kusto REST API; Fabric control-plane discovery | |
| jq | JSON processing and output formatting | |
Connection
Same as eventhouse-consumption-cli. Authoring requires elevated roles:
```shell
# Discover KQL Database query URI
WS_ID="<workspace-id>"
az rest --method GET \
  --url "https://api.fabric.microsoft.com/v1/workspaces/${WS_ID}/kqlDatabases" \
  --resource "https://api.fabric.microsoft.com" \
  | jq '.value[] | {name: .displayName, queryUri: .properties.queryServiceUri}'

# Set connection variables
CLUSTER_URI="https://<cluster>.kusto.fabric.microsoft.com"
DB_NAME="MyDatabase"

# Verify admin access
cat > /tmp/kql_body.json << EOF
{"db":"${DB_NAME}","csl":".show database ${DB_NAME} principals | where Role == 'Admin'"}
EOF
az rest --method POST \
  --url "${CLUSTER_URI}/v1/rest/mgmt" \
  --resource "https://kusto.kusto.windows.net" \
  --headers "Content-Type=application/json" \
  --body @/tmp/kql_body.json \
  | jq '.Tables[0].Rows'
```
Authoring Scope
| Operation | Command Pattern |
|---|---|
| Create table | `.create-merge table` |
| Add column | `.alter-merge table` |
| Drop table | `.drop table ... ifexists` |
| Ingest data | `.ingest inline` / `.ingest into` |
| Set retention | `.alter table ... policy retention` |
| Set caching | `.alter table ... policy caching` |
| Create function | `.create-or-alter function` |
| Create materialized view | `.create materialized-view` |
| Create update policy | `.alter table ... policy update` |
| Create data mapping | `.create table ... ingestion json mapping` |
Execute KQL Command
All KQL management commands in this skill follow the same `az rest` pattern. After setting `CLUSTER_URI` and `DB`, write the JSON body to `/tmp/kql_body.json` and execute:
```shell
cat > /tmp/kql_body.json << EOF
{"db":"${DB}","csl":"<KQL management command>"}
EOF
az rest --method POST \
  --url "${CLUSTER_URI}/v1/rest/mgmt" \
  --resource "https://kusto.kusto.windows.net" \
  --headers "Content-Type=application/json" \
  --body @/tmp/kql_body.json \
  | jq '.Tables[0].Rows'
```
Nested JSON — For commands whose KQL contains embedded JSON (policies, mappings), use `<< 'EOF'` (single-quoted) to prevent shell expansion of backslash-escaped quotes, and replace `${DB}` with the literal database name.
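A minimal sketch of this rule (the table name and policy values are invented for illustration): with a single-quoted delimiter the shell performs no expansion, so the backslash-escaped quotes inside the embedded policy JSON reach the file intact, and `${DB}` would also pass through literally, which is why the database name is spelled out:

```shell
DB="MyDB"   # NOT expanded inside the quoted heredoc below

# Single-quoted 'EOF': no parameter expansion, \" survives verbatim
cat > /tmp/kql_body.json << 'EOF'
{"db":"MyDB","csl":".alter table Events policy retention '{\"SoftDeletePeriod\":\"30.00:00:00\",\"Recoverability\":\"Enabled\"}'"}
EOF

# Verify the body parses as JSON and the inner policy survived escaping
jq -e '.db == "MyDB" and (.csl | contains("SoftDeletePeriod"))' /tmp/kql_body.json
```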
PowerShell equivalent — `@{db=$Database;csl=$Command} | ConvertTo-Json -Compress | Out-File $env:TEMP\kql_body.json -Encoding utf8NoBOM`, then `--body "@$env:TEMP\kql_body.json"`. See PowerShell Templates.
Table Management via CLI
Create Table (Idempotent)
```shell
cat > /tmp/kql_body.json << EOF
{"db":"${DB}","csl":".create-merge table Events (Timestamp: datetime, EventType: string, UserId: string, Properties: dynamic, Duration: real)"}
EOF
```

Execute `/tmp/kql_body.json` — see Execute KQL Command.
Add Column
```shell
cat > /tmp/kql_body.json << EOF
{"db":"${DB}","csl":".alter-merge table Events (Region: string)"}
EOF
```

Execute `/tmp/kql_body.json` — see Execute KQL Command.
Drop Table
```shell
cat > /tmp/kql_body.json << EOF
{"db":"${DB}","csl":".drop table Events ifexists"}
EOF
```

Execute `/tmp/kql_body.json` — see Execute KQL Command.
Data Ingestion via CLI
Inline Ingestion (Testing)
```shell
cat > /tmp/kql_body.json << EOF
{"db":"${DB}","csl":".ingest inline into table Events <| 2025-01-15T10:00:00Z,Login,user1,{},0.5\n2025-01-15T10:01:00Z,Click,user2,{},0.2"}
EOF
```

Execute `/tmp/kql_body.json` — see Execute KQL Command.
Ingest from Storage
```shell
cat > /tmp/kql_body.json << EOF
{"db":"${DB}","csl":".ingest into table Events (h'https://mystorage.blob.core.windows.net/data/events.csv.gz;impersonate') with (format='csv', ingestionMappingReference='EventsCsvMapping', ignoreFirstRecord=true)"}
EOF
```

Execute `/tmp/kql_body.json` — see Execute KQL Command.
Ingest from OneLake
```shell
cat > /tmp/kql_body.json << EOF
{"db":"${DB}","csl":".ingest into table Events (h'abfss://workspace@onelake.dfs.fabric.microsoft.com/lakehouse.Lakehouse/Files/events.parquet;impersonate') with (format='parquet')"}
EOF
```

Execute `/tmp/kql_body.json` — see Execute KQL Command.
Set-or-Append from Query
```shell
cat > /tmp/kql_body.json << EOF
{"db":"${DB}","csl":".set-or-append CleanEvents <| RawEvents | where IsValid == true | project Timestamp, EventType, UserId"}
EOF
```

Execute `/tmp/kql_body.json` — see Execute KQL Command.
Policies via CLI
Retention
```shell
# Set 365-day retention
cat > /tmp/kql_body.json << 'EOF'
{"db":"MyDB","csl":".alter table Events policy retention '{\"SoftDeletePeriod\":\"365.00:00:00\",\"Recoverability\":\"Enabled\"}'"}
EOF
```

Execute `/tmp/kql_body.json` — see Execute KQL Command.
Caching (Hot Cache)
```shell
# Keep last 30 days in hot cache
cat > /tmp/kql_body.json << EOF
{"db":"${DB}","csl":".alter table Events policy caching hot = 30d"}
EOF
```

Execute `/tmp/kql_body.json` — see Execute KQL Command.
Streaming Ingestion
```shell
cat > /tmp/kql_body.json << EOF
{"db":"${DB}","csl":".alter table Events policy streamingingestion enable"}
EOF
```

Execute `/tmp/kql_body.json` — see Execute KQL Command.
Materialized Views via CLI
```shell
# Create materialized view with backfill
cat > /tmp/kql_body.json << EOF
{"db":"${DB}","csl":".create materialized-view with (backfill=true) HourlyEventCounts on table Events { Events | summarize Count = count(), LastSeen = max(Timestamp) by EventType, bin(Timestamp, 1h) }"}
EOF
```

Execute `/tmp/kql_body.json` — see Execute KQL Command.
```shell
# Check health
cat > /tmp/kql_body.json << EOF
{"db":"${DB}","csl":".show materialized-view HourlyEventCounts statistics"}
EOF
```

Execute `/tmp/kql_body.json` — see Execute KQL Command.
Functions and Update Policies via CLI
Create Function
```shell
cat > /tmp/kql_body.json << EOF
{"db":"${DB}","csl":".create-or-alter function with (docstring='Parse raw events', folder='ETL') ParseRawEvents() { RawEvents | extend Parsed = parse_json(RawData) | project Timestamp = todatetime(Parsed.timestamp), EventType = tostring(Parsed.eventType), UserId = tostring(Parsed.userId) }"}
EOF
```

Execute `/tmp/kql_body.json` — see Execute KQL Command.
Create Update Policy
```shell
cat > /tmp/kql_body.json << 'EOF'
{"db":"MyDB","csl":".alter table ParsedEvents policy update @'[{\"IsEnabled\":true,\"Source\":\"RawEvents\",\"Query\":\"ParseRawEvents()\",\"IsTransactional\":true}]'"}
EOF
```

Execute `/tmp/kql_body.json` — see Execute KQL Command.
Schema Evolution via CLI
Safe Schema Deployment Script
Save management commands in a `.kql` file (one per line), then execute each command via `az rest`:

```shell
# deploy_schema.kql contains one command per line:
# .create-merge table Events (Timestamp: datetime, EventType: string, UserId: string, Properties: dynamic)
# .create-merge table ParsedEvents (Timestamp: datetime, EventType: string, UserId: string, PageName: string)
# .alter table Events policy retention '{\"SoftDeletePeriod\":\"365.00:00:00\",\"Recoverability\":\"Enabled\"}'
# .alter table Events policy caching hot = 30d

# Execute each command from the file (see "Execute KQL Command" section)
while IFS= read -r cmd; do
  [[ "$cmd" =~ ^// ]] && continue   # skip comment lines
  [[ -z "$cmd" ]] && continue       # skip blank lines
  cat > /tmp/kql_body.json << EOF
{"db":"${DB}","csl":"${cmd}"}
EOF
  az rest --method POST \
    --url "${CLUSTER_URI}/v1/rest/mgmt" \
    --resource "https://kusto.kusto.windows.net" \
    --headers "Content-Type=application/json" \
    --body @/tmp/kql_body.json \
    | jq '.Tables[0].Rows'
done < deploy_schema.kql
```
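The loop's comment- and blank-line filtering can be checked offline by swapping the `az rest` call for a dry-run `echo` (the schema file below is illustrative):

```shell
# Illustrative schema file with a comment and a blank line
cat > /tmp/deploy_schema.kql << 'EOF'
// a comment, skipped by the loop
.create-merge table Events (Timestamp: datetime, EventType: string)

.alter table Events policy caching hot = 30d
EOF

# Same filtering as the deployment loop, echoing instead of calling az rest
while IFS= read -r cmd; do
  [[ "$cmd" =~ ^// ]] && continue   # skip comment lines
  [[ -z "$cmd" ]] && continue       # skip blank lines
  echo "would execute: $cmd"
done < /tmp/deploy_schema.kql
```

Only the two executable commands are echoed; the comment and blank line are dropped, exactly as in the real deployment loop.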
Export Current Schema
```shell
cat > /tmp/kql_body.json << EOF
{"db":"${DB}","csl":".show database ${DB} schema as csl script"}
EOF
az rest --method POST \
  --url "${CLUSTER_URI}/v1/rest/mgmt" \
  --resource "https://kusto.kusto.windows.net" \
  --headers "Content-Type=application/json" \
  --body @/tmp/kql_body.json \
  | jq -r '.Tables[0].Rows[][0]' > current_schema.kql
```
Monitoring Authoring Operations
```kql
// Recent management commands
.show commands
| where StartedOn > ago(1h)
| project StartedOn, CommandType, Text = substring(Text, 0, 100), State, Duration
| order by StartedOn desc

// Ingestion failures
.show ingestion failures
| where FailedOn > ago(24h)
| summarize FailureCount = count() by ErrorCode, Table
| order by FailureCount desc

// Materialized view health
.show materialized-views
| project Name, IsEnabled, IsHealthy, MaterializedTo
```
Must / Prefer / Avoid / Troubleshooting
Must
- Clarify before acting on ambiguous prompts — if the request does not specify a target table, operation type, or schema (e.g. "set up my Eventhouse", "configure my database"), ask the user what they want to do. Never infer intent and apply management commands autonomously. Irreversible side-effects (policy changes, schema mutations, data ingestion) require explicit user intent.
- Use idempotent commands — `.create-merge table`, `.create-or-alter function`, `.create table ifnotexists`.
- Verify permissions before authoring — must have the `Admin` or `Ingestor` role.
- Test update policies by running the function independently before attaching.
- Include `impersonate` in storage URIs when ingesting from OneLake or Blob Storage.
Prefer
- `az rest` with a loop for deploying multi-command schema files.
- Fabric KQL MCP server for agent-integrated ingestion and management workflows.
- `.create-merge table` over `.create table` for safe schema evolution.
- Materialized views over repeated expensive aggregation queries.
- Script-based CI/CD — export schema with `.show database DB schema as csl script`, store in git.
Avoid
- `.drop table` without `ifexists` — fails on missing tables.
- `.alter table` to add columns — use `.alter-merge table` instead (additive only).
- Ingestion without mappings for CSV/JSON — column order or field names may not match.
- Hardcoded storage URIs — parameterise in scripts.
- Disabling materialized views without understanding the re-backfill cost.
Troubleshooting
| Symptom | Fix |
|---|---|
| `.create table` fails with "already exists" | Use `.create-merge table` or `.create table ifnotexists` |
| Ingestion succeeds but table empty | Check data mappings: `.show table <TABLE> ingestion mappings` |
| Update policy not firing | Verify function runs standalone; check `.show table <TABLE> policy update` |
| 403 on management commands | Request `Admin` or `Ingestor` database role |
| Materialized view stuck | Check `.show materialized-view <NAME> statistics`; may need `.disable materialized-view` / `.enable materialized-view` |
| OneLake ingest auth error | Add `;impersonate` to URI |
Agentic Workflows
Exploration Before Authoring
Always check for explicit intent before doing anything:
```
Step 0 → Is the request specific? Does it name a table, operation, and/or schema?
         → NO  → Ask: "What would you like to set up? Options: create tables, configure
                 policies, set up ingestion mappings, create materialized views."
                 STOP — do not proceed until user specifies.
         → YES → Continue to Step 1.
Step 1 → .show tables details                   // what exists?
Step 2 → .show table <TABLE> schema as json     // current columns
Step 3 → .show table <TABLE> policy retention   // current policies
Step 4 → Plan changes (create-merge, alter, etc.)
Step 5 → Execute changes
Step 6 → Verify: .show table <TABLE> schema as json   // confirm changes
```
Script Generation Workflow
```
Step 1 → Understand requirements from user
Step 2 → Generate KQL management commands
Step 3 → Save to .kql file
Step 4 → Deploy via az rest (one command at a time)
Step 5 → Verify deployed state matches intent
```
Examples
Example 1: Create Table with Policies and Mapping
```shell
# Create table
cat > /tmp/kql_body.json << EOF
{"db":"${DB}","csl":".create-merge table SensorData (Timestamp: datetime, DeviceId: string, Temperature: real, Humidity: real, Location: dynamic)"}
EOF
```

Execute `/tmp/kql_body.json` — see Execute KQL Command.
```shell
# Set retention
cat > /tmp/kql_body.json << 'EOF'
{"db":"MyDB","csl":".alter table SensorData policy retention '{\"SoftDeletePeriod\":\"90.00:00:00\",\"Recoverability\":\"Enabled\"}'"}
EOF
```

Execute `/tmp/kql_body.json` — see Execute KQL Command.
```shell
# Set caching
cat > /tmp/kql_body.json << EOF
{"db":"${DB}","csl":".alter table SensorData policy caching hot = 7d"}
EOF
```

Execute `/tmp/kql_body.json` — see Execute KQL Command.
```shell
# Create JSON mapping
cat > /tmp/kql_body.json << 'EOF'
{"db":"MyDB","csl":".create table SensorData ingestion json mapping 'SensorJsonMapping' '[{\"column\":\"Timestamp\",\"path\":\"$.ts\",\"datatype\":\"datetime\"},{\"column\":\"DeviceId\",\"path\":\"$.deviceId\",\"datatype\":\"string\"},{\"column\":\"Temperature\",\"path\":\"$.temp\",\"datatype\":\"real\"},{\"column\":\"Humidity\",\"path\":\"$.humidity\",\"datatype\":\"real\"},{\"column\":\"Location\",\"path\":\"$.location\",\"datatype\":\"dynamic\"}]'"}
EOF
```

Execute `/tmp/kql_body.json` — see Execute KQL Command.
Example 2: ETL with Update Policy
```kql
// 1. Target table
.create-merge table ParsedLogs (Timestamp: datetime, Level: string, Message: string, Source: string)

// 2. Transform function
.create-or-alter function ParseRawLogs() {
    RawLogs
    | extend J = parse_json(RawMessage)
    | project Timestamp = todatetime(J.timestamp),
              Level = tostring(J.level),
              Message = tostring(J.message),
              Source = tostring(J.source)
}

// 3. Attach update policy
.alter table ParsedLogs policy update @'[{"IsEnabled":true,"Source":"RawLogs","Query":"ParseRawLogs()","IsTransactional":true}]'
```
Agent Integration Notes
- This skill covers authoring operations — creating/altering database objects and ingesting data.
- For read-only queries and data exploration, delegate to eventhouse-consumption-cli.
- For cross-workload orchestration, delegate to the FabricDataEngineer agent.
- All management commands require elevated database roles (`Admin` or `Ingestor`).