
## Install

**Source** · Clone the upstream repo:

```shell
git clone https://github.com/openclaw/skills
```

**Claude Code** · Install into `~/.claude/skills/`:

```shell
T=$(mktemp -d) && git clone --depth=1 https://github.com/openclaw/skills "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/botanarede/beddel" ~/.claude/skills/openclaw-skills-beddel && rm -rf "$T"
```

**OpenClaw** · Install into `~/.openclaw/skills/`:

```shell
T=$(mktemp -d) && git clone --depth=1 https://github.com/openclaw/skills "$T" && mkdir -p ~/.openclaw/skills && cp -r "$T/skills/botanarede/beddel" ~/.openclaw/skills/openclaw-skills-beddel && rm -rf "$T"
```

Manifest: `skills/botanarede/beddel/SKILL.md`
---

# Beddel

Declarative YAML workflow engine for AI pipelines — run multi-step LLM chains with branching, guardrails, retry, and observability out of the box.

## Prerequisites

- Python 3.11+ (`python3.11 --version`)
- pip for Python 3.11 (`python3.11 -m pip --version`)
- An LLM API key — any LiteLLM-supported provider works. Gemini is recommended:

```shell
export GEMINI_API_KEY="your-key"
```

## Installation

```shell
python3.11 -m pip install "beddel[all]"
beddel version
```

Note: the system Python may be 3.10. Always invoke `python3.11` explicitly.

## Quick Start

1. Write a workflow file `hello.yaml`:

   ```yaml
   id: hello
   name: Hello World
   input_schema:
     topic: { type: str, required: true }
   steps:
     - id: greet
       primitive: llm
       config:
         model: gemini/gemini-2.0-flash
         prompt: "Write a one-sentence greeting about $input.topic"
         max_tokens: 50
   ```

2. Run it:

   ```shell
   beddel run hello.yaml -i topic="AI agents" --json-output
   ```
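Before any step runs, the `-i key=val` inputs are checked against `input_schema`. A minimal sketch of that check — names and behavior here are illustrative assumptions, not Beddel's actual implementation:

```python
# Sketch of input_schema validation -- illustrative, not Beddel's code.
TYPE_MAP = {"str": str, "int": int, "float": float, "bool": bool}

def validate_inputs(schema: dict, inputs: dict) -> dict:
    """Return a dict of field -> error; empty means the inputs are valid."""
    errors = {}
    for field, spec in schema.items():
        if field not in inputs:
            if spec.get("required"):
                errors[field] = "missing required input"
            continue
        expected = TYPE_MAP.get(spec.get("type", "str"), str)
        if not isinstance(inputs[field], expected):
            errors[field] = f"expected {spec['type']}"
    return errors

schema = {"topic": {"type": "str", "required": True}}
assert validate_inputs(schema, {"topic": "AI agents"}) == {}
assert "topic" in validate_inputs(schema, {})
```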

## Tool Integration (OpenClaw Plugin)

The `beddel` tool is available via the OpenClaw plugin `@botanarede/beddel`:

```shell
openclaw plugins install @botanarede/beddel
```

Once installed, the agent can invoke `beddel` with the actions `run`, `validate`, and `list-primitives`.

The bundled example `examples/setup-beddel.yaml` automates this installation — see Bundled Example below.

## CLI Reference

| Command | Description |
|---|---|
| `beddel run <file> [-i key=val] [--json-output]` | Execute a workflow |
| `beddel validate <file>` | Validate YAML syntax and schema |
| `beddel list-primitives` | Show available primitives |
| `beddel serve -w <file> [--port 8000]` | Serve a workflow as an HTTP endpoint |
| `beddel version` | Print the installed version |

## Core Concepts

A workflow is a YAML file with an `id`, a `name`, an optional `input_schema`, and a list of `steps`. Each step declares a primitive (the unit of work) and a config (primitive-specific parameters).

Steps execute sequentially. Each step's output is available to subsequent steps via `$stepResult.<step_id>.<path>`.

See `references/` for full schema documentation.
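The sequential execution model can be sketched in a few lines. The `echo` primitive and all names below are illustrative, not part of Beddel:

```python
# Toy sequential runner -- a sketch of the execution model, not Beddel's engine.
def run_workflow(steps, primitives, inputs):
    """Execute steps in order; each step's output is stored under its id."""
    results = {}
    context = {"input": inputs, "stepResult": results}
    for step in steps:
        fn = primitives[step["primitive"]]          # look up the unit of work
        results[step["id"]] = fn(step.get("config", {}), context)
    return results

# A fake primitive that formats a template from the runtime inputs.
primitives = {"echo": lambda cfg, ctx: cfg["text"].format(**ctx["input"])}
steps = [{"id": "greet", "primitive": "echo", "config": {"text": "hello {topic}"}}]

out = run_workflow(steps, primitives, {"topic": "AI"})
assert out["greet"] == "hello AI"
```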

## Primitives

| Primitive | Purpose |
|---|---|
| `llm` | Single-turn LLM call with streaming support |
| `chat` | Multi-turn conversation with message history |
| `output-generator` | Template-based output rendering (JSON, Markdown, text) |
| `guardrail` | Data validation with strategies: raise, return_errors, correct, delegate |
| `call-agent` | Nested workflow invocation with depth tracking |
| `tool` | External function call — `shell_exec` is built-in |
| `agent-exec` | Unified adapter for external agent delegation |

## Execution Strategies

Each step can declare an `execution_strategy` to control error handling:

| Strategy | Behavior |
|---|---|
| `fail` | Stop the workflow on error (default) |
| `skip` | Log the error, continue to the next step |
| `retry` | Retry with exponential backoff and jitter |
| `fallback` | Execute an alternative step on failure |
| `delegate` | Delegate error recovery to agent judgment |
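The `retry` strategy's exponential backoff with jitter can be sketched as follows. The parameter names (`attempts`, `base_delay`) are illustrative, not Beddel config keys:

```python
import random
import time

# Sketch of retry with exponential backoff and jitter -- not Beddel's code.
def with_retry(fn, attempts=3, base_delay=0.01):
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise                                  # out of retries: surface the error
            delay = base_delay * (2 ** attempt)        # 1x, 2x, 4x, ...
            time.sleep(delay + random.uniform(0, delay))  # jitter spreads out retries

calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient")
    return "ok"

assert with_retry(flaky) == "ok"   # succeeds on the third attempt
assert calls["n"] == 3
```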

## Variable Resolution

| Namespace | Example | Source |
|---|---|---|
| `$input` | `$input.topic` | Runtime inputs (`-i key=val`) |
| `$stepResult` | `$stepResult.greet.content` | Previous step outputs |
| `$env` | `$env.GEMINI_API_KEY` | Environment variables |

Key paths for step results:

- tool steps: `$stepResult.<id>.result.stdout`, `.result.exit_code`
- llm steps: `$stepResult.<id>.content`
- guardrail steps: `$stepResult.<id>.data.<field>`, `.valid`
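Resolving these paths amounts to walking dotted keys through nested dicts. A minimal sketch of that idea (not Beddel's resolver, which also handles custom namespaces and richer errors):

```python
# Sketch of $-variable resolution: walk a dotted path through nested dicts.
def resolve(path: str, context: dict):
    node = context
    for part in path.lstrip("$").split("."):
        node = node[part]      # a KeyError here corresponds to BEDDEL-RESOLVE-001
    return node

context = {
    "input": {"topic": "AI"},
    "stepResult": {
        "greet": {"content": "hi"},
        "check": {"result": {"stdout": "{}", "exit_code": 0}},
    },
}
assert resolve("$input.topic", context) == "AI"
assert resolve("$stepResult.greet.content", context) == "hi"
assert resolve("$stepResult.check.result.exit_code", context) == 0
```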

## Bundled Example: setup-beddel

This workflow checks whether the `@botanarede/beddel` OpenClaw plugin is installed and installs it if needed. It demonstrates two of the seven primitives, `tool` and `guardrail`, plus conditional execution via `if`.

```yaml
id: setup-beddel
name: Beddel Plugin Setup
description: Install or update the @botanarede/beddel OpenClaw plugin and verify it loads.

steps:
  - id: check_plugin
    primitive: tool
    config:
      tool: shell_exec
      arguments:
        cmd: "python3.11 -c \"import subprocess,json,re;r=subprocess.run(['openclaw','plugins','list'],capture_output=True,text=True);has=bool(re.search(r'beddel',r.stdout));loaded=bool(re.search(r'beddel.*loaded',r.stdout));print(json.dumps({'action':'OK'if loaded else'REINSTALL'if has else'INSTALL'}))\""

  - id: validate_check
    primitive: guardrail
    config:
      data: "$stepResult.check_plugin.result.stdout"
      schema:
        fields:
          action: { type: str, required: true }
      strategy: correct

  - id: install_plugin
    primitive: tool
    config:
      tool: shell_exec
      arguments:
        cmd: "openclaw plugins install @botanarede/beddel"
      timeout: 120
    if: "$stepResult.validate_check.data.action != 'OK'"

  - id: verify
    primitive: tool
    config:
      tool: shell_exec
      arguments:
        cmd: "openclaw plugins info beddel"
```

### What each step demonstrates

| Step | Primitive | Feature |
|---|---|---|
| `check_plugin` | `tool` | Deterministic check via `shell_exec` — outputs JSON without an LLM |
| `validate_check` | `guardrail` | `correct` strategy — parses the JSON string, strips markdown fences, validates the schema |
| `install_plugin` | `tool` | Conditional execution (`if`) — skipped when the plugin is already loaded; `timeout: 120` for network ops |
| `verify` | `tool` | Post-install verification |
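The fence-stripping step of the `correct` strategy can be illustrated like this. This is a sketch of the behavior described above, not Beddel's actual code:

```python
import json

# Build the fence string programmatically so this example nests cleanly in docs.
FENCE = "`" * 3

def correct_json(raw: str) -> dict:
    """Strip an optional markdown code fence, then parse the JSON payload."""
    text = raw.strip()
    if text.startswith(FENCE) and text.endswith(FENCE):
        text = text[len(FENCE):-len(FENCE)]   # drop the fences
        if text.startswith("json"):
            text = text[len("json"):]          # drop the language tag
    return json.loads(text)

assert correct_json('{"action": "OK"}') == {"action": "OK"}
assert correct_json(FENCE + 'json\n{"action": "INSTALL"}\n' + FENCE) == {"action": "INSTALL"}
```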

Run it:

```shell
beddel run examples/setup-beddel.yaml --json-output
```

## Security & Privacy

- **Secrets:** Use `$env.*` variables — never hardcode API keys in workflow YAML.
- **shell_exec:** Runs with `shell=False` (no shell injection). Commands are split via `shlex.split()`. Shell operators (`|`, `&&`, `>`) are sanitized in beddel 0.1.1+.
- **Subprocess sandbox:** Default timeout 60s, max stdout 1 MB per stream, configurable per step.
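The `shell_exec` model described above (split with `shlex.split()`, run with `shell=False`, bounded by a timeout) can be sketched as:

```python
import shlex
import subprocess
import sys

# Sketch of the shell_exec execution model -- illustrative, not Beddel's code.
def shell_exec(cmd: str, timeout: int = 60):
    argv = shlex.split(cmd)   # no shell is involved, so | && > have no effect
    proc = subprocess.run(argv, capture_output=True, text=True, timeout=timeout)
    return {"stdout": proc.stdout, "exit_code": proc.returncode}

# Use the current interpreter so the example runs anywhere Python does.
result = shell_exec(f'{shlex.quote(sys.executable)} -c "print(42)"')
assert result["stdout"].strip() == "42"
assert result["exit_code"] == 0
```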

## External Endpoints

| Endpoint | When | Purpose |
|---|---|---|
| LLM provider API (e.g. `generativelanguage.googleapis.com`) | `llm`, `chat`, `guardrail` (delegate) steps | Model inference |
| PyPI (`pypi.org`) | Installation only | Package download |
| npm registry (`registry.npmjs.org`) | Plugin install step | Plugin download |

## Trust Statement

Beddel executes user-defined YAML workflows. It does not phone home, collect telemetry by default, or transmit data beyond the configured LLM provider endpoints. OpenTelemetry export is opt-in.

## Observability

Beddel emits OpenTelemetry spans for every workflow and step execution:

- `beddel.workflow.execute` — root span per workflow run
- `beddel.step.<primitive>` — child span per step
- `gen_ai.usage.*` attributes on LLM steps (prompt/completion tokens)

Enable with any OTel-compatible collector via the standard `OTEL_*` environment variables.

## Troubleshooting

| Error | Cause | Fix |
|---|---|---|
| `BEDDEL-PRIM-300` | Tool not found | Ensure the tool name is `shell_exec` (built-in); custom tools need `-t name=module:func` |
| `BEDDEL-RESOLVE-001` | Unresolvable variable | Check the step id spelling and result path; tool results use `.result.stdout`, LLM uses `.content` |
| `BEDDEL-GUARD-201` | Guardrail validation failed | Check schema field types; use `strategy: correct` for JSON string inputs |
| `python3.11: not found` | Wrong Python version | Install Python 3.11+; the system Python may be 3.10 |
| Step shows `SKIPPED` | `if` condition was false or `execution_strategy: skip` | Expected behavior — downstream steps should handle SKIPPED values |

## Advanced: Python SDK

```python
from beddel import WorkflowExecutor, VariableResolver

resolver = VariableResolver()
# Custom namespace: $secrets.* resolves through a user-provided get_secret()
resolver.register_namespace("secrets", lambda path, ctx: get_secret(path))

executor = WorkflowExecutor(resolver=resolver)
# execute() is a coroutine -- call it from an async context
result = await executor.execute(workflow, {"topic": "AI"})
```

For FastAPI integration:

```shell
beddel serve -w workflow.yaml --port 8000
```

## References

Additional documentation in `references/` (loaded on demand):

- `workflow-format.md` — complete YAML schema
- `primitives.md` — all 7 primitives with full config options
- `execution-strategies.md` — 5 strategies with examples
- `variable-resolution.md` — namespaces, custom resolvers, error handling