Babysitter Apache Spark Optimizer
Analyzes and optimizes Apache Spark jobs for performance, cost, and resource utilization
Install
source · Clone the upstream repo
git clone https://github.com/a5c-ai/babysitter
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/a5c-ai/babysitter "$T" && mkdir -p ~/.claude/skills && cp -r "$T/library/specializations/data-engineering-analytics/skills/apache-spark-optimizer" ~/.claude/skills/a5c-ai-babysitter-apache-spark-optimizer && rm -rf "$T"
Manifest
library/specializations/data-engineering-analytics/skills/apache-spark-optimizer/SKILL.md
Source Content
Apache Spark Optimizer
Overview
Analyzes and optimizes Apache Spark jobs for performance, cost, and resource utilization. This skill provides deep expertise in Spark execution plans, partitioning strategies, and resource configuration to maximize efficiency.
Capabilities
- Spark execution plan analysis and optimization
- Partition strategy recommendations
- Shuffle reduction techniques
- Memory and executor configuration tuning
- Catalyst optimizer hints generation
- Data skew detection and mitigation
- Broadcast join optimization
- Caching strategy recommendations
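One of the capabilities above, data skew detection, can be illustrated with a minimal plain-Python sketch: compare the largest partition to the mean partition size. The `skew_factor` helper and the sample sizes are hypothetical, not part of the skill's implementation; they show the kind of metric the `skewFactor` input below refers to.

```python
def skew_factor(partition_sizes):
    """Ratio of the largest partition to the mean partition size.

    A value near 1.0 means the data is evenly distributed across
    partitions; large values indicate one or more hot partitions.
    """
    mean = sum(partition_sizes) / len(partition_sizes)
    return max(partition_sizes) / mean

# A skewed layout: one hot partition dominates the others.
sizes = [100, 120, 95, 110, 4000]
print(round(skew_factor(sizes), 2))  # → 4.52
```

In a real job, partition row counts are available from the Spark UI's stage detail page or the history server; a skew factor well above 1 usually shows up as one straggler task holding back an otherwise finished stage.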
Input Schema
{
  "sparkCode": "string",
  "clusterConfig": "object",
  "executionMetrics": "object",
  "dataCharacteristics": {
    "volumeGB": "number",
    "partitionCount": "number",
    "skewFactor": "number"
  }
}
Output Schema
{
  "optimizedCode": "string",
  "recommendations": ["string"],
  "expectedImprovement": {
    "executionTime": "percentage",
    "resourceUsage": "percentage",
    "cost": "percentage"
  },
  "configChanges": "object"
}
Target Processes
- ETL/ELT Pipeline
- Streaming Pipeline
- Feature Store Setup
- Pipeline Migration
Usage Guidelines
- Provide the Spark code or job definition for analysis
- Include cluster configuration details (executors, memory, cores)
- Share execution metrics if available (from Spark UI or history server)
- Describe data characteristics including volume, partitions, and known skew
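The data-characteristics inputs above feed directly into partition-strategy recommendations. A common rule of thumb (not the skill's actual algorithm) is to size shuffle partitions so each holds roughly 100–200 MB; the helper below is a hypothetical sketch of that heuristic.

```python
def recommended_partitions(volume_gb, target_mb=128):
    """Rule-of-thumb partition count: aim for ~128 MB per partition.

    Returns at least 1 so tiny datasets don't produce a zero count.
    """
    return max(1, round(volume_gb * 1024 / target_mb))

# 500 GB of shuffle data at ~128 MB per partition.
print(recommended_partitions(500))  # → 4000
```

The resulting number would typically be applied via `spark.sql.shuffle.partitions` or an explicit `repartition()`, then validated against the actual task durations in the Spark UI.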
Best Practices
- Always analyze execution plans before and after optimization
- Test optimizations on representative data samples first
- Monitor resource utilization during optimization validation
- Document configuration changes for reproducibility
- Consider cost implications alongside performance gains
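The before/after comparison urged in the best practices maps onto the `expectedImprovement` block of the Output Schema. As a sketch, the hypothetical helper below computes the percentage reduction for each metric shared by two runs; the metric names and values are illustrative assumptions.

```python
def expected_improvement(before, after):
    """Percentage reduction per metric between a baseline and an optimized run."""
    return {
        k: round(100 * (before[k] - after[k]) / before[k], 1)
        for k in before
    }

# Illustrative metrics from a baseline run and an optimized rerun.
before = {"executionTimeSec": 1200, "shuffleReadGB": 42.0, "costUSD": 18.0}
after  = {"executionTimeSec": 900,  "shuffleReadGB": 21.0, "costUSD": 13.5}
print(expected_improvement(before, after))
# → {'executionTimeSec': 25.0, 'shuffleReadGB': 50.0, 'costUSD': 25.0}
```

Reporting all three dimensions together keeps the cost-versus-performance trade-off visible: a change that halves shuffle volume but raises cost should be weighed, not applied blindly.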