Skills-for-fabric spark-authoring-cli
```bash
# Clone the full repository
git clone https://github.com/microsoft/skills-for-fabric

# Or copy only this skill into your local skills directory
T=$(mktemp -d) && git clone --depth=1 https://github.com/microsoft/skills-for-fabric "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/spark-authoring-cli" ~/.claude/skills/microsoft-skills-for-fabric-spark-authoring-cli && rm -rf "$T"
```
skills/spark-authoring-cli/SKILL.md

Update Check — ONCE PER SESSION (mandatory)

The first time this skill is used in a session, run the check-updates skill before proceeding.
- GitHub Copilot CLI / VS Code: invoke the `check-updates` skill.
- Claude Code / Cowork / Cursor / Windsurf / Codex: compare the local vs. remote `package.json` version.
- Skip if the check was already performed earlier in this session.
CRITICAL NOTES
- To find workspace details (including its ID) from a workspace name: list all workspaces and then use JMESPath filtering
- To find item details (including its ID) from a workspace ID, item type, and item name: list all items of that type in that workspace and then use JMESPath filtering (see the sketch below)
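A minimal lookup sketch, assuming `az rest` is already authenticated and using placeholder names ("DataEng-Dev", "DailyLoad"); pagination is omitted, so see COMMON-CLI.md § Finding Workspaces and Items in Fabric for the canonical recipes:

```bash
# Workspace ID from workspace name: list all workspaces, filter with a JMESPath --query
workspace_id=$(az rest --method get --resource "https://api.fabric.microsoft.com" \
  --url "https://api.fabric.microsoft.com/v1/workspaces" \
  --query "value[?displayName=='DataEng-Dev'].id | [0]" --output tsv)

# Item ID from workspace ID + item type + item name: list items of that type, filter the same way
notebook_id=$(az rest --method get --resource "https://api.fabric.microsoft.com" \
  --url "https://api.fabric.microsoft.com/v1/workspaces/$workspace_id/items?type=Notebook" \
  --query "value[?displayName=='DailyLoad'].id | [0]" --output tsv)
```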
Spark Authoring — CLI Skill
Table of Contents
| Task | Reference | Notes |
|---|---|---|
| RULES — Read these first, follow them always | SKILL.md § RULES | MUST read — 3 rules for this skill |
| Finding Workspaces and Items in Fabric | COMMON-CLI.md § Finding Workspaces and Items in Fabric | Mandatory — READ link first [needed for finding workspace id by its name or item id by its name, item type, and workspace id] |
| Fabric Topology & Key Concepts | COMMON-CORE.md § Fabric Topology & Key Concepts | |
| Environment URLs | COMMON-CORE.md § Environment URLs | |
| Authentication & Token Acquisition | COMMON-CORE.md § Authentication & Token Acquisition | Wrong audience = 401; read before any auth issue |
| Core Control-Plane REST APIs | COMMON-CORE.md § Core Control-Plane REST APIs | |
| Pagination | COMMON-CORE.md § Pagination | |
| Long-Running Operations (LRO) | COMMON-CORE.md § Long-Running Operations (LRO) | |
| Rate Limiting & Throttling | COMMON-CORE.md § Rate Limiting & Throttling | |
| OneLake Data Access | COMMON-CORE.md § OneLake Data Access | Requires a Storage-audience token, not a Fabric token |
| Definition Envelope | ITEM-DEFINITIONS-CORE.md § Definition Envelope | Definition payload structure |
| Per-Item-Type Definitions | ITEM-DEFINITIONS-CORE.md § Per-Item-Type Definitions | Support matrix, decoded content, part paths — REST specs, CLI recipes |
| Job Execution | COMMON-CORE.md § Job Execution | |
| Capacity Management | COMMON-CORE.md § Capacity Management | |
| Gotchas & Troubleshooting | COMMON-CORE.md § Gotchas & Troubleshooting | |
| Best Practices | COMMON-CORE.md § Best Practices | |
| Tool Selection Rationale | COMMON-CLI.md § Tool Selection Rationale | |
| Authentication Recipes | COMMON-CLI.md § Authentication Recipes | Sign-in flows and token acquisition |
| Fabric Control-Plane API via az rest | COMMON-CLI.md § Fabric Control-Plane API via az rest | Always pass `--resource` or the call fails |
| Pagination Pattern | COMMON-CLI.md § Pagination Pattern | |
| Long-Running Operations (LRO) Pattern | COMMON-CLI.md § Long-Running Operations (LRO) Pattern | |
| OneLake Data Access via curl | COMMON-CLI.md § OneLake Data Access via curl | Use `curl`, not `az rest` (different token audience) |
| SQL / TDS Data-Plane Access | COMMON-CLI.md § SQL / TDS Data-Plane Access | |
| Job Execution (CLI) | COMMON-CLI.md § Job Execution | |
| Job Scheduling | COMMON-CLI.md § Job Scheduling | URL is ; required |
| OneLake Shortcuts | COMMON-CLI.md § OneLake Shortcuts | |
| Capacity Management (CLI) | COMMON-CLI.md § Capacity Management | |
| Composite Recipes | COMMON-CLI.md § Composite Recipes | |
| Gotchas & Troubleshooting (CLI-Specific) | COMMON-CLI.md § Gotchas & Troubleshooting (CLI-Specific) | audience, shell escaping, token expiry |
| Quick Reference: az rest Template | COMMON-CLI.md § Quick Reference: az rest Template | |
| Quick Reference: Token Audience / CLI Tool Matrix | COMMON-CLI.md § Quick Reference: Token Audience ↔ CLI Tool Matrix | Which token audience + tool for each service |
| Relationship to SPARK-CONSUMPTION-CORE.md | SPARK-AUTHORING-CORE.md § Relationship to SPARK-CONSUMPTION-CORE.md | |
| Data Engineering Authoring Capability Matrix | SPARK-AUTHORING-CORE.md § Data Engineering Authoring Capability Matrix | |
| Lakehouse Management | SPARK-AUTHORING-CORE.md § Lakehouse Management | |
| Notebook Management | SPARK-AUTHORING-CORE.md § Notebook Management | |
| Notebook Execution & Job Management | SPARK-AUTHORING-CORE.md § Notebook Execution & Job Management | |
| CI/CD & Automation Patterns | SPARK-AUTHORING-CORE.md § CI/CD & Automation Patterns | |
| Infrastructure-as-Code | SPARK-AUTHORING-CORE.md § Infrastructure-as-Code | |
| Performance Optimization & Resource Management | SPARK-AUTHORING-CORE.md § Performance Optimization & Resource Management | |
| Authoring Gotchas and Troubleshooting | SPARK-AUTHORING-CORE.md § Authoring Gotchas and Troubleshooting | |
| Quick Reference: Authoring Decision Guide | SPARK-AUTHORING-CORE.md § Quick Reference: Authoring Decision Guide | |
| Recommended Patterns (Data Engineering) | data-engineering-patterns.md § Recommended patterns | |
| Data Ingestion Principles | data-engineering-patterns.md § Data Ingestion Principles | |
| Transformation Patterns | data-engineering-patterns.md § Transformation Patterns | |
| Delta Lake Best Practices | data-engineering-patterns.md § Delta Lake Best Practices | |
| Quality Assurance Strategies | data-engineering-patterns.md § Quality Assurance Strategies | |
| Recommended Patterns (Development Workflow) | development-workflow.md § Recommended patterns | |
| Notebook Lifecycle | development-workflow.md § Notebook Lifecycle | |
| Parameterization Patterns | development-workflow.md § Parameterization Patterns | |
| Variable Library (notebook + pipeline usage) | development-workflow.md § Method 4: Variable Library | getLibrary() + dot notation in notebooks; library variables in pipelines |
| Variable Library Definition | ITEM-DEFINITIONS-CORE.md § VariableLibrary | Definition parts, decoded content, types, pipeline mappings, gotchas |
| Local Testing Strategy | development-workflow.md § Local Testing Strategy | |
| Debugging Patterns | development-workflow.md § Debugging Patterns | |
| Recommended Patterns (Infrastructure) | infrastructure-orchestration.md § Recommended patterns | |
| Workspace Provisioning Principles | infrastructure-orchestration.md § Workspace Provisioning Principles | |
| Lakehouse Configuration Guidance | infrastructure-orchestration.md § Lakehouse Configuration Guidance | |
| Pipeline Design Patterns | infrastructure-orchestration.md § Pipeline Design Patterns | |
| CI/CD Integration Strategy | infrastructure-orchestration.md § CI/CD Integration Strategy | |
| Notebook API — Which Endpoint to Use | notebook-api-operations.md § Quick Decision | Start here for remote notebook edits — getDefinition vs updateDefinition |
| Notebook Modification Workflow | notebook-api-operations.md § Workflow | Five-step flow: retrieve, decode, modify, encode, upload |
| Notebook API Error Reference | notebook-api-operations.md § Error Reference | 411, 400 (updateMetadata), 401, 403 explained |
| Notebook API Gotchas | notebook-api-operations.md § Gotchas | Format suffix, empty-body requirement, per-line `\n` rule |
| Default Lakehouse Binding | notebook-api-operations.md § Default Lakehouse Binding | Notebook metadata vs. configure block; discover IDs dynamically |
| Public URL Data Ingestion | notebook-api-operations.md § Public URL Data Ingestion | Use the real source URL, stage into lakehouse Files, then read with Spark |
| getDefinition (read notebook content) | notebook-api-operations.md § Step 1 — Retrieve Notebook Content | LRO flow, format parameter, empty-body requirement |
| Decode Base64 Notebook Payload | notebook-api-operations.md § Step 2 — Decode the Notebook Content | Extract payload, base64 decode, ipynb JSON structure |
| Modify Notebook Cells | notebook-api-operations.md § Step 3 — Modify the Notebook Content | Find cell, insert/replace lines, per-line rule |
| updateDefinition (write notebook content) | notebook-api-operations.md § Step 4 — Re-encode and Upload | Re-encode, upload, LRO poll, updateMetadata flag pitfall |
| Verify Notebook Update (Optional) | notebook-api-operations.md § Step 5 — Verify the Update | Skip unless you suspect a silent failure — Succeeded from updateDefinition is sufficient (see Rule 2) |
| Notebook API End-to-End Script | notebook-api-operations.md § Complete End-to-End Script | Full bash: get → decode → modify → encode → update → verify |
| Quick Start Examples | SKILL.md § Quick Start Examples | Minimal examples for common operations |
Must/Prefer/Avoid
MUST DO
- Check for recent jobs BEFORE creating new notebook runs — Query job instances from last 5 minutes; if recent job exists, monitor it instead of creating duplicate
- Capture job instance ID immediately after POST — Store job ID before any other operations to enable proper monitoring
- Verify workspace capacity assignment before operations — Workspace must have capacity assigned and active
- When user provides a public data URL, follow the Public URL Data Ingestion policy — keep detailed behavior in the linked resource section to avoid drift/duplication
- Format notebook cells correctly — Each line in the cell source array MUST end with `\n` to prevent code merging (see the sketch after this list)
- Use correct Livy session body format — Send a FLAT JSON with `name`, `driverMemory`, `driverCores`, `executorMemory`, `executorCores`. Do NOT wrap in `{"payload": ...}` or send only `{"kind": "pyspark"}` — that causes HTTP 500. Use valid memory values (28g, 56g, 112g, 224g). See the Create Livy Session example below and SPARK-CONSUMPTION-CORE.md.
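A minimal illustration of the per-line `\n` rule, showing a single hypothetical code cell (the surrounding ipynb envelope and the table name are placeholders):

```bash
# Every element of the "source" array ends with \n so the lines do not merge into one statement
cat > /tmp/cell.json << 'EOF'
{
  "cell_type": "code",
  "source": [
    "df = spark.read.table(\"bronze.events\")\n",
    "df.show()\n"
  ],
  "metadata": {},
  "outputs": [],
  "execution_count": null
}
EOF
```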
PREFER
- Poll job status with proper intervals — 10-30 seconds between polls; timeout after reasonable duration (e.g., 30 minutes)
- Check job history when POST response is unreadable — If POST returns "No Content" or unreadable response, query recent jobs (last 1 minute) before retrying
- Use Starter Pool for development — Development/testing workloads should use `useStarterPool: true`
- Use Workspace Pool for production — Production workloads need consistent performance with `useWorkspacePool: true`
- Enable lakehouse schemas during creation — Set `creationPayload.enableSchemas: true` for better table organization
- Implement idempotency checks — Prevent duplicate operations by checking existing state first (see the sketch after this list)
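One way to sketch an idempotency check before creating an item, assuming a `workspace_id` discovered as above and a placeholder lakehouse name:

```bash
# Look for an existing lakehouse with the target display name before creating a new one
existing_id=$(az rest --method get --resource "https://api.fabric.microsoft.com" \
  --url "https://api.fabric.microsoft.com/v1/workspaces/$workspace_id/items?type=Lakehouse" \
  --query "value[?displayName=='DevLakehouse'].id | [0]" --output tsv)

if [ -n "$existing_id" ]; then
  echo "Lakehouse already exists: $existing_id"   # reuse it instead of creating a duplicate
else
  echo "No existing lakehouse found; safe to create"
fi
```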
AVOID
- Never retry POST with same parameters — If you have a job ID, only use GET to check status; don't create duplicate job instances
- Don't skip capacity verification — Operations will fail if workspace capacity is paused or unassigned (see the sketch after this list)
- Avoid immediate POST retries on failures — Check for existing/active jobs first to prevent duplicates
- Don't create new runs if monitoring existing job — One job at a time; wait for completion before submitting new runs
- Don't hardcode workspace/lakehouse IDs — Discover dynamically via item listing or catalog search APIs
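A capacity-verification sketch; the field names follow the Fabric workspace and capacity REST payloads, but confirm them against COMMON-CORE.md Capacity Management:

```bash
# 1. Check that the workspace has a capacity assigned (capacityId is empty when unassigned)
capacity_id=$(az rest --method get --resource "https://api.fabric.microsoft.com" \
  --url "https://api.fabric.microsoft.com/v1/workspaces/$workspace_id" \
  --query "capacityId" --output tsv)
[ -n "$capacity_id" ] || echo "No capacity assigned to workspace $workspace_id"

# 2. Confirm the assigned capacity is Active (operations fail against a paused capacity)
az rest --method get --resource "https://api.fabric.microsoft.com" \
  --url "https://api.fabric.microsoft.com/v1/capacities" \
  --query "value[?id=='$capacity_id'].state | [0]" --output tsv
```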
RULES — Read these first, follow them always
Rule 1 — Validate prerequisites before operations. Verify workspace has capacity assigned (see COMMON-CORE.md Create Workspace and Capacity Management) and resource IDs exist before attempting operations.
Rule 2 — Trust updateDefinition success. A `Succeeded` poll result from `updateDefinition` is sufficient confirmation that content and lakehouse bindings persisted. Do NOT call `getDefinition` after every upload — it is an async LRO that adds significant latency. Only use `getDefinition` for its intended purpose: reading current notebook content before making modifications.

Rule 3 — Prevent duplicate jobs and monitor execution properly. Before submitting a new notebook run, ALWAYS check for recent job instances first (last 5 minutes). If a recent job exists, monitor it instead of creating a duplicate. After submission, capture the job instance ID immediately and poll status — never retry the POST. See SPARK-AUTHORING-CORE.md Job Monitoring for patterns; a minimal sketch follows below.
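A minimal sketch of the Rule 3 flow, assuming GNU `date`, `jq`, and `workspace_id`/`notebook_id` values discovered as shown earlier; see SPARK-AUTHORING-CORE.md Job Monitoring for the production patterns:

```bash
jobs_url="https://api.fabric.microsoft.com/v1/workspaces/$workspace_id/items/$notebook_id/jobs/instances"

# 1. Check for job instances started within the last 5 minutes before submitting a new run
cutoff=$(date -u -d '5 minutes ago' '+%Y-%m-%dT%H:%M:%SZ')
job_id=$(az rest --method get --resource "https://api.fabric.microsoft.com" --url "$jobs_url" --output json |
  jq -r --arg cutoff "$cutoff" \
    '[.value[] | select(.startTimeUtc != null and .startTimeUtc >= $cutoff)][0].id // empty')

# 2. Submit a new run only when nothing recent exists, then capture the new instance ID immediately
#    (a just-submitted instance may briefly report a null start time; re-query if job_id stays empty)
if [ -z "$job_id" ]; then
  submit_time=$(date -u '+%Y-%m-%dT%H:%M:%SZ')
  az rest --method post --resource "https://api.fabric.microsoft.com" \
    --url "$jobs_url?jobType=RunNotebook"
  sleep 10
  job_id=$(az rest --method get --resource "https://api.fabric.microsoft.com" --url "$jobs_url" --output json |
    jq -r --arg t "$submit_time" \
      '[.value[] | select(.startTimeUtc != null and .startTimeUtc >= $t)][0].id // empty')
fi

# 3. Monitor with GET only (10-30 s between polls); never retry the POST
az rest --method get --resource "https://api.fabric.microsoft.com" \
  --url "$jobs_url/$job_id" --query "status" --output tsv
```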
Quick Start Examples
For detailed patterns, authentication, and comprehensive API usage, see:
- COMMON-CORE.md — Fabric REST API patterns, authentication, item discovery
- COMMON-CLI.md — `az rest` usage, environment detection, token acquisition
- SPARK-AUTHORING-CORE.md — Notebook deployment, lakehouse creation, job execution
Below are minimal quick-start examples. Always reference the COMMON-* files for production use.
Create Workspace & Lakehouse
```bash
# See COMMON-CORE.md Environment URLs and SPARK-AUTHORING-CORE.md for full patterns
cat > /tmp/body.json << 'EOF'
{"displayName": "DataEng-Dev"}
EOF
workspace_id=$(az rest --method post --resource "https://api.fabric.microsoft.com" \
  --url "https://api.fabric.microsoft.com/v1/workspaces" \
  --body @/tmp/body.json --query "id" --output tsv)

cat > /tmp/body.json << 'EOF'
{"displayName": "DevLakehouse", "type": "Lakehouse", "creationPayload": {"enableSchemas": true}}
EOF
lakehouse_id=$(az rest --method post --resource "https://api.fabric.microsoft.com" \
  --url "https://api.fabric.microsoft.com/v1/workspaces/$workspace_id/items" \
  --body @/tmp/body.json --query "id" --output tsv)
```
Organize Lakehouse Tables with Schemas
```python
# See SPARK-AUTHORING-CORE.md Lakehouse Schema Organization for table organization patterns
# Create schemas for medallion architecture
spark.sql("CREATE SCHEMA IF NOT EXISTS bronze")
spark.sql("CREATE SCHEMA IF NOT EXISTS silver")
spark.sql("CREATE SCHEMA IF NOT EXISTS gold")
```
Create Livy Session
```bash
# See SPARK-CONSUMPTION-CORE.md for Livy session configuration and management
# IMPORTANT: Body MUST be flat JSON with memory/cores — do NOT wrap in {"payload": ...}
cat > /tmp/body.json << 'EOF'
{"name": "dev-session", "driverMemory": "56g", "driverCores": 8, "executorMemory": "56g", "executorCores": 8, "conf": {"spark.dynamicAllocation.enabled": "true", "spark.fabric.pool.name": "Starter Pool"}}
EOF
az rest --method post --resource "https://api.fabric.microsoft.com" \
  --url "https://api.fabric.microsoft.com/v1/workspaces/$workspace_id/lakehouses/$lakehouse_id/livyapi/versions/2023-12-01/sessions" \
  --body @/tmp/body.json
```
Livy Session Body — Common Mistakes
- ❌ `{"payload": {"kind": "pyspark"}}` → HTTP 500 (wrong wrapper, missing required fields)
- ❌ `{"kind": "pyspark"}` → HTTP 500 (missing `driverMemory`, `executorMemory`, etc.)
- ✅ Flat JSON with `name`, `driverMemory`, `driverCores`, `executorMemory`, `executorCores` (and optionally `conf` with Starter Pool)
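Once the session is created, a follow-up sketch for checking its state and submitting code; the session and statements sub-resources below follow the standard Livy shape, and the session ID is a placeholder taken from the create response:

```bash
session_id="<session-id-from-create-response>"
livy_base="https://api.fabric.microsoft.com/v1/workspaces/$workspace_id/lakehouses/$lakehouse_id/livyapi/versions/2023-12-01"

# Poll the session until it reports the idle state
az rest --method get --resource "https://api.fabric.microsoft.com" \
  --url "$livy_base/sessions/$session_id" --query "state" --output tsv

# Submit a statement once the session is idle
cat > /tmp/stmt.json << 'EOF'
{"code": "spark.sql('SHOW SCHEMAS').show()", "kind": "pyspark"}
EOF
az rest --method post --resource "https://api.fabric.microsoft.com" \
  --url "$livy_base/sessions/$session_id/statements" --body @/tmp/stmt.json
```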
Spark Performance Configs
For detailed workload-specific configurations, see data-engineering-patterns.md Delta Lake Best Practices.
Quick reference:
```bash
# Write-heavy (Bronze): Disable V-Order, enable autoCompact
# Balanced (Silver): Enable V-Order, adaptive execution
# Read-heavy (Gold): Vectorized reads, optimal parallelism
# See data-engineering-patterns.md for complete config tables
```
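As one hedged illustration for the write-heavy case, these knobs can be passed through the Livy session `conf` map; the property names below are assumed from common Fabric/Delta defaults, so verify them against data-engineering-patterns.md before relying on them:

```bash
# Hypothetical Bronze ingestion session body: V-Order off, auto-compaction on (property names assumed)
cat > /tmp/body.json << 'EOF'
{"name": "bronze-ingest", "driverMemory": "28g", "driverCores": 4, "executorMemory": "28g", "executorCores": 4,
 "conf": {"spark.sql.parquet.vorder.enabled": "false",
          "spark.databricks.delta.autoCompact.enabled": "true"}}
EOF
```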
Variable Library in Notebooks
Use a Variable Library to centralize lakehouse names, workspace IDs, and feature flags.
```python
# ✅ CORRECT — getLibrary() + dot notation
lib = notebookutils.variableLibrary.getLibrary("MyConfig")
lakehouse_name = lib.lakehouse_name
enable_logging = lib.enable_logging  # returns string "true"/"false"

# Boolean: compare as string (bool("false") is True in Python!)
if enable_logging.lower() == "true":
    print("Logging enabled")

# ❌ WRONG — .get() does not exist, causes runtime failure
# notebookutils.variableLibrary.get("MyConfig", "lakehouse_name")
```
Focus: Essential CLI patterns for Spark/data engineering development with intelligent routing to specialized resources. For comprehensive patterns, always reference COMMON-* files and resource documents.