Dotfiles databricks-spark-structured-streaming

Comprehensive guide to Spark Structured Streaming for production workloads. Use when building streaming pipelines, working with Kafka ingestion, implementing Real-Time Mode (RTM), configuring triggers (processingTime, availableNow), handling stateful operations with watermarks, optimizing checkpoints, performing stream-stream or stream-static joins, writing to multiple sinks, or tuning streaming cost and performance.

install
source · Clone the upstream repo
git clone https://github.com/msbaek/dotfiles
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/msbaek/dotfiles "$T" && mkdir -p ~/.claude/skills && cp -r "$T/.claude/skills/databricks-spark-structured-streaming" ~/.claude/skills/msbaek-dotfiles-databricks-spark-structured-streaming && rm -rf "$T"
manifest: .claude/skills/databricks-spark-structured-streaming/SKILL.md
source content

Spark Structured Streaming

Production-ready streaming pipelines with Spark Structured Streaming. This skill provides navigation to detailed patterns and best practices.

Quick Start

from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StringType, StructType

# Schema of the JSON payload in the Kafka message value (adjust fields to your topic)
schema = StructType().add("id", StringType()).add("event", StringType())

# Basic Kafka to Delta streaming
df = (spark
    .readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "topic")
    .load()
    .select(from_json(col("value").cast("string"), schema).alias("data"))
    .select("data.*")
)

(df.writeStream
    .format("delta")
    .outputMode("append")
    .option("checkpointLocation", "/Volumes/catalog/checkpoints/stream")
    .trigger(processingTime="30 seconds")
    .start("/delta/target_table")
)

Core Patterns

Pattern · Description · Reference
Kafka Streaming · Kafka to Delta, Kafka to Kafka, Real-Time Mode · See kafka-streaming.md
Stream Joins · Stream-stream joins, stream-static joins · See stream-stream-joins.md, stream-static-joins.md
Multi-Sink Writes · Write to multiple tables, parallel merges · See multi-sink-writes.md
Merge Operations · MERGE performance, parallel merges, optimizations · See merge-operations.md
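The stream-static join pattern referenced above can be sketched as follows. Table and column names (`catalog.schema.users`, `user_id`, `events`) are illustrative assumptions, not names from the referenced guides:

```python
# Sketch: enrich a streaming DataFrame with a static Delta dimension table.
# With Delta, the static side is re-read on each micro-batch, so dimension
# updates are picked up without restarting the stream.
users = spark.read.table("catalog.schema.users")  # static side (illustrative name)

enriched = (events                                # `events` is a streaming DataFrame
    .join(users, on="user_id", how="left")        # left join keeps unmatched events
)
```

A left join from the streaming side is the safer default here, since an inner join silently drops events whose key is missing from the static table.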

Configuration

Topic · Description · Reference
Checkpoints · Checkpoint management and best practices · See checkpoint-best-practices.md
Stateful Operations · Watermarks, state stores, RocksDB configuration · See stateful-operations.md
Trigger & Cost · Trigger selection, cost optimization, RTM · See trigger-and-cost-optimization.md
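A minimal sketch combining two of the topics above: a watermarked window aggregation written with an `availableNow` trigger. Column names and paths are illustrative assumptions:

```python
# Sketch: windowed count with a watermark, run as an incremental batch.
from pyspark.sql.functions import window

counts = (events                                      # `events` is a streaming DataFrame
    .withWatermark("event_time", "10 minutes")        # bound state: drop events >10 min late
    .groupBy(window("event_time", "5 minutes"), "user_id")
    .count()
)

(counts.writeStream
    .format("delta")
    .outputMode("append")                             # append emits only finalized windows
    .option("checkpointLocation", "/Volumes/catalog/checkpoints/counts")
    .trigger(availableNow=True)                       # drain the backlog, then stop (cost control)
    .start("/delta/counts_table")
)
```

`availableNow` lets the job run on a schedule and shut down between runs, which is usually cheaper than a continuously running `processingTime` trigger for non-latency-critical workloads.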

Best Practices

Topic · Description · Reference
Production Checklist · Comprehensive best practices · See streaming-best-practices.md

Production Checklist

  • Checkpoint location is persistent (UC volumes, not DBFS)
  • Unique checkpoint per stream
  • Fixed-size cluster (no autoscaling for streaming)
  • Monitoring configured (input rate, lag, batch duration)
  • Exactly-once verified (txnVersion/txnAppId)
  • Watermark configured for stateful operations
  • Left joins for stream-static (not inner)
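The exactly-once item above can be sketched with Delta's idempotent-write options inside foreachBatch. The app id and paths are illustrative assumptions:

```python
# Sketch: idempotent micro-batch writes via txnAppId/txnVersion.
# Delta skips a (txnAppId, txnVersion) pair it has already committed,
# so a replayed micro-batch after failure recovery becomes a no-op.
def write_batch(batch_df, batch_id):
    (batch_df.write
        .format("delta")
        .option("txnAppId", "orders_stream")  # stable per-stream id (illustrative)
        .option("txnVersion", batch_id)       # micro-batch id as the transaction version
        .mode("append")
        .save("/delta/orders"))

(df.writeStream
    .foreachBatch(write_batch)
    .option("checkpointLocation", "/Volumes/catalog/checkpoints/orders")
    .start()
)
```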