# flowerpower (claude-skill-registry)

Create and manage data pipelines using the FlowerPower framework with Hamilton DAGs and uv. Use when users request creating FlowerPower projects, pipelines, or Hamilton dataflows, or ask about FlowerPower configuration, execution, or CLI commands.
## Install

Clone the upstream repo:

```bash
git clone https://github.com/majiayu000/claude-skill-registry
```

Or install directly into `~/.claude/skills/` for Claude Code:

```bash
T=$(mktemp -d) \
  && git clone --depth=1 https://github.com/majiayu000/claude-skill-registry "$T" \
  && mkdir -p ~/.claude/skills \
  && cp -r "$T/skills/data/flowerpower" ~/.claude/skills/majiayu000-claude-skill-registry-flowerpower \
  && rm -rf "$T"
```

Manifest: `skills/data/flowerpower/SKILL.md`
# FlowerPower Pipeline Skill

Create and manage data processing pipelines using FlowerPower with Hamilton DAGs.
## Quick Start

```bash
# Install flowerpower
uv pip install flowerpower

# Initialize project
flowerpower init --name my-project

# Create pipeline
flowerpower pipeline new my_pipeline

# Run pipeline
flowerpower pipeline run my_pipeline
```
## Project Initialization

Use `scripts/init_project.py` or the CLI:

```bash
# CLI
flowerpower init --name <project-name>
```

```python
# Python
from flowerpower import FlowerPowerProject

project = FlowerPowerProject.init(name='my-project')
```
This creates the structure:

```
my-project/
├── conf/
│   ├── project.yml
│   └── pipelines/
├── pipelines/
└── hooks/
```
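The same skeleton can be scaffolded by hand with the standard library. This is only a sketch of what `init` produces on disk, not FlowerPower's actual implementation, and the `project.yml` content is left as an empty placeholder:

```python
import tempfile
from pathlib import Path


def scaffold(root: Path) -> None:
    """Create the directory skeleton that `flowerpower init` generates."""
    for sub in ("conf/pipelines", "pipelines", "hooks"):
        (root / sub).mkdir(parents=True, exist_ok=True)
    # project.yml holds project-level settings; empty placeholder here
    (root / "conf" / "project.yml").touch()


base = Path(tempfile.mkdtemp()) / "my-project"
scaffold(base)
print(sorted(p.relative_to(base).as_posix() for p in base.rglob("*")))
```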
## Creating Pipelines

Use `scripts/create_pipeline.py` or the CLI:

```bash
flowerpower pipeline new <name>
```

This creates:

- Hamilton functions: `pipelines/<name>.py`
- Configuration: `conf/pipelines/<name>.yml`
## Pipeline Module Template

```python
from pathlib import Path

from hamilton.function_modifiers import parameterize

from flowerpower.cfg import Config

PARAMS = Config.load(
    Path(__file__).parents[1],
    pipeline_name="my_pipeline",
).pipeline.h_params


@parameterize(**PARAMS.input_config)
def load_data(source: str) -> dict:
    """Load data from source."""
    return {"source": source}


def process_data(load_data: dict) -> dict:
    """Process loaded data."""
    return {"processed": load_data}


def final_result(process_data: dict) -> str:
    """Return final result."""
    return str(process_data)
```
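The wiring in the template follows Hamilton's core rule: a parameter named `load_data` receives the output of the function named `load_data`. A minimal stdlib sketch of that resolution rule (this is an illustration, not Hamilton's implementation; `execute` is a hypothetical helper):

```python
import inspect


def load_data(source: str) -> dict:
    return {"source": source}


def process_data(load_data: dict) -> dict:
    return {"processed": load_data}


def final_result(process_data: dict) -> str:
    return str(process_data)


def execute(funcs, final_var: str, inputs: dict):
    """Resolve a node by name: each parameter is either an input or another node."""
    nodes = {f.__name__: f for f in funcs}
    cache = dict(inputs)

    def resolve(name: str):
        if name not in cache:
            fn = nodes[name]
            kwargs = {p: resolve(p) for p in inspect.signature(fn).parameters}
            cache[name] = fn(**kwargs)
        return cache[name]

    return resolve(final_var)


result = execute(
    [load_data, process_data, final_result],
    "final_result",
    {"source": "data.csv"},
)
print(result)  # → "{'processed': {'source': 'data.csv'}}"
```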
## Pipeline Config Template

```yaml
params:
  input_config:
    source: "data.csv"
run:
  final_vars:
    - final_result
  executor:
    type: threadpool
    max_workers: 4
  retry:
    max_retries: 3
    retry_delay: 1.0
```
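The `retry` block implies re-running a failed pipeline up to `max_retries` additional times, sleeping `retry_delay` seconds between attempts. A minimal stdlib sketch of that behavior (the helper name and exact semantics are assumptions, not FlowerPower's code):

```python
import time


def run_with_retry(fn, max_retries: int = 3, retry_delay: float = 1.0):
    """Call fn, retrying on exception up to max_retries extra attempts."""
    for attempt in range(max_retries + 1):
        try:
            return fn()
        except Exception:
            if attempt == max_retries:
                raise  # retries exhausted; surface the failure
            time.sleep(retry_delay)


attempts = []


def flaky():
    """Fails twice, then succeeds -- stands in for a transient pipeline error."""
    attempts.append(1)
    if len(attempts) < 3:
        raise RuntimeError("transient failure")
    return "ok"


print(run_with_retry(flaky, max_retries=3, retry_delay=0.01))  # → "ok"
```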
## Running Pipelines

```bash
# Basic run
flowerpower pipeline run my_pipeline

# With inputs
flowerpower pipeline run my_pipeline --inputs '{"key": "value"}'

# With executor
flowerpower pipeline run my_pipeline --executor threadpool --executor-max-workers 8

# With retries
flowerpower pipeline run my_pipeline --max-retries 3 --retry-delay 2.0
```
Python API:

```python
from flowerpower import FlowerPowerProject

project = FlowerPowerProject.load('.')
result = project.run('my_pipeline')

# With RunConfig
from flowerpower.cfg.pipeline.run import RunConfig

config = RunConfig(inputs={"key": "value"}, final_vars=["output"])
result = project.run('my_pipeline', run_config=config)
```
## CLI Commands

| Command | Description |
|---|---|
| `flowerpower init --name <project>` | Initialize project |
| `flowerpower pipeline new <name>` | Create pipeline |
| `flowerpower pipeline run <name>` | Run pipeline |
| … | List pipelines |
| … | Visualize DAG |
| … | Delete pipeline |
## Executor Types

| Type | Use Case | Config |
|---|---|---|
| `synchronous` | Default, sequential | - |
| `threadpool` | I/O-bound tasks | `max_workers` |
| `processpool` | CPU-bound tasks | `max_workers` |
| `ray` | Distributed computing | |
| `dask` | Distributed computing | |
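The trade-off behind the `threadpool` choice can be shown with the standard library: threads overlap I/O-style waits, which is what the executor buys for I/O-bound nodes. This sketch uses `concurrent.futures` directly; FlowerPower itself wires executors through Hamilton, not this code:

```python
import time
from concurrent.futures import ThreadPoolExecutor


def io_task(i: int) -> int:
    time.sleep(0.05)  # stand-in for an I/O wait (HTTP call, file read, ...)
    return i * 2


start = time.perf_counter()
with ThreadPoolExecutor(max_workers=4) as ex:
    results = list(ex.map(io_task, range(8)))
elapsed = time.perf_counter() - start

# 8 tasks on 4 workers finish in ~2 waves; sequential would take ~8 * 0.05s
print(results, f"{elapsed:.2f}s")
```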
## Optional Dependencies

```bash
uv pip install flowerpower[io]   # I/O plugins
uv pip install flowerpower[ui]   # Hamilton UI
uv pip install flowerpower[all]  # All extras
```
## Resources

- `references/overview.md`: Key concepts, architecture, project structure
- `references/configuration.md`: Complete YAML configuration patterns
- `references/hamilton-patterns.md`: Hamilton function decorators and patterns
## Scripts

- `scripts/init_project.py`: Initialize a new FlowerPower project
- `scripts/create_pipeline.py`: Create a new pipeline from the template
- `scripts/run_pipeline.py`: Execute a pipeline with options
- `scripts/list_pipelines.py`: List available pipelines